1b477e04e64bf933635197427fdb15c8343a88a1
# Dataset Card for Europeana Newspapers

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/EuropeanaNewspapers/ner-corpora)
- **Repository:** [Github](https://github.com/EuropeanaNewspapers/ner-corpora)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/L16-1689/)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@jplu](https://github.com/jplu) for adding this dataset.
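The dataset metadata for this card defines a standard BIO label scheme for `ner_tags` (`O`, `B-PER`, `I-PER`, `B-ORG`, `I-ORG`, `B-LOC`, `I-LOC`, as integer ids 0–6). A minimal sketch of decoding those ids back to label strings; the example tokens and tag ids below are hypothetical illustrations, not actual corpus content:

```python
# Class-label names exactly as listed in the dataset metadata (index = integer id).
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def decode_tags(tag_ids):
    """Map a record's integer ner_tags to their BIO string labels."""
    return [NER_LABELS[i] for i in tag_ids]

# Hypothetical example (not taken from the corpus): a German token sequence
# whose first token is a location.
tokens = ["Wien", ",", "den", "12.", "Mai"]
tag_ids = [5, 0, 0, 0, 0]
print(list(zip(tokens, decode_tags(tag_ids))))
# → [('Wien', 'B-LOC'), (',', 'O'), ('den', 'O'), ('12.', 'O'), ('Mai', 'O')]
```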
euronews
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:de", "language:fr", "language:nl", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["de", "fr", "nl"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "europeana-newspapers", "pretty_name": "Europeana Newspapers", "dataset_info": [{"config_name": "fr-bnf", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 3340299, "num_examples": 1}], "download_size": 1542418, "dataset_size": 3340299}, {"config_name": "nl-kb", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 3104213, "num_examples": 1}], "download_size": 1502162, "dataset_size": 3104213}, {"config_name": "de-sbb", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 817295, "num_examples": 1}], "download_size": 407756, "dataset_size": 817295}, {"config_name": "de-onb", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 502369, "num_examples": 1}], "download_size": 271252, 
"dataset_size": 502369}, {"config_name": "de-lft", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 1263429, "num_examples": 1}], "download_size": 677779, "dataset_size": 1263429}]}
2024-01-18T11:03:24+00:00
[]
[ "de", "fr", "nl" ]
7f826bee29d063e470ad5664832bf2698c36abb2
# Dataset Card for Europa Education and Culture Translation Memory (EAC-TM)

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory](https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory)
- **Paper:** [https://link.springer.com/article/10.1007/s10579-014-9277-0](https://link.springer.com/article/10.1007/s10579-014-9277-0)
- **Point of Contact:** [[email protected]](mailto:[email protected])

### Dataset Summary

This dataset is a corpus of manually produced translations from English into up to 25 languages, released in 2012 by the European Union's Directorate General for Education and Culture (EAC).

To load a language pair that is not part of the config, just specify the language codes as the language pair. For example, to translate Czech to Greek:

`dataset = load_dataset("europa_eac_tm", language_pair=("cs", "el"))`

### Supported Tasks and Leaderboards

- `text2text-generation`: the dataset can be used to train a model for `machine-translation`. Machine translation models are usually evaluated with metrics such as [BLEU](https://huggingface.co/metrics/bleu), [ROUGE](https://huggingface.co/metrics/rouge) or [SacreBLEU](https://huggingface.co/metrics/sacrebleu). You can use the [mBART](https://huggingface.co/facebook/mbart-large-cc25) model for this task. This task has active leaderboards, which can be found at [https://paperswithcode.com/task/machine-translation](https://paperswithcode.com/task/machine-translation) and which usually rank models by [BLEU score](https://huggingface.co/metrics/bleu).

### Languages

The sentences in this dataset were originally written in English (the source language is English) and then translated into the other languages. The sentences are extracted from electronic forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. The contents of the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data'). The dataset contains translations of English sentences, or parts of sentences, into Bulgarian, Czech, Danish, Dutch, Estonian, German, Greek, Finnish, French, Croatian, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish and Turkish.
Language codes:

- `bg`
- `cs`
- `da`
- `de`
- `el`
- `en`
- `es`
- `et`
- `fi`
- `fr`
- `hr`
- `hu`
- `is`
- `it`
- `lt`
- `lv`
- `mt`
- `nl`
- `no`
- `pl`
- `pt`
- `ro`
- `sk`
- `sl`
- `sv`
- `tr`

## Dataset Structure

### Data Instances

```
{
  "translation": {
    "en": "Sentence to translate",
    "<target_language>": "Phrase à traduire"
  },
  "sentence_type": 0
}
```

### Data Fields

- `translation`: Mapping of the sentence to translate (in English) and the translated sentence.
- `sentence_type`: Integer value, 0 if the sentence is 'form data' (extracted from the labels and contents of drop-down menus of the source electronic forms) or 1 if the sentence is 'reference data' (extracted from the checkboxes of the electronic forms).

### Data Splits

The data is not split (only the `train` split is available).

## Dataset Creation

### Curation Rationale

The EAC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of education and culture. Also, it includes translation units for the languages Croatian (HR), Icelandic (IS), Norwegian (Bokmål, NB or Norwegian, NO) and Turkish (TR).

### Source Data

#### Initial Data Collection and Normalization

EAC-TM was built in the context of translating electronic forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. All documents and sentences were originally written in English (the source language is English) and then translated into the other languages. The contents of the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data'). Due to the different types of data, the two collections are kept separate.
For example, labels can be 'Country', 'Please specify your home country' etc., while examples of reference data are 'Germany', 'Basic/general programmes', 'Education and Culture' etc. The data consists of translations carried out between the end of 2008 and July 2012.

#### Who are the source language producers?

The texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.

### Annotations

#### Annotation process

Sentences were manually translated by humans.

#### Who are the annotators?

The texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

© European Union, 1995-2020

The Commission's reuse policy is implemented by the [Commission Decision of 12 December 2011 on the reuse of Commission documents](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32011D0833).

Unless otherwise indicated (e.g. in individual copyright notices), content owned by the EU on this website is licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0) licence](http://creativecommons.org/licenses/by/4.0/). This means that reuse is allowed, provided appropriate credit is given and changes are indicated. You may be required to clear additional rights if specific content depicts identifiable private individuals or includes third-party works. To use or reproduce content that is not owned by the EU, you may need to seek permission directly from the rightholders. Software or documents covered by industrial property rights, such as patents, trade marks, registered designs, logos and names, are excluded from the Commission's reuse policy and are not licensed to you.

### Citation Information

```
@Article{Steinberger2014,
  author={Steinberger, Ralf and Ebrahim, Mohamed and Poulis, Alexandros and Carrasco-Benitez, Manuel and Schl{\"u}ter, Patrick and Przybyszewski, Marek and Gilbro, Signe},
  title={An overview of the European Union's highly multilingual parallel corpora},
  journal={Language Resources and Evaluation},
  year={2014},
  month={Dec},
  day={01},
  volume={48},
  number={4},
  pages={679-707},
  issn={1574-0218},
  doi={10.1007/s10579-014-9277-0},
  url={https://doi.org/10.1007/s10579-014-9277-0}
}
```

### Contributions

Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset.
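The `sentence_type` field documented in this card (0 = 'form data', 1 = 'reference data') makes it easy to separate the two collections with plain Python. A minimal sketch over records shaped like the documented data instances; the records themselves are hypothetical illustrations (the English labels are the card's own examples, the French strings are not taken from the corpus):

```python
# Hypothetical records shaped like the documented data instances.
records = [
    {"translation": {"en": "Country", "fr": "Pays"}, "sentence_type": 0},
    {"translation": {"en": "Please specify your home country",
                     "fr": "Veuillez préciser votre pays d'origine"}, "sentence_type": 0},
    {"translation": {"en": "Germany", "fr": "Allemagne"}, "sentence_type": 1},
]

# sentence_type 0 = 'form data' (drop-down labels), 1 = 'reference data' (checkboxes).
form_data = [r["translation"]["en"] for r in records if r["sentence_type"] == 0]
reference_data = [r["translation"]["en"] for r in records if r["sentence_type"] == 1]

print(form_data)       # → ['Country', 'Please specify your home country']
print(reference_data)  # → ['Germany']
```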
europa_eac_tm
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hr", "language:hu", "language:is", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "language:tr", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hr", "hu", "is", "it", "lt", "lv", "mt", "nl", "no", "pl", "pt", "ro", "sk", "sl", "sv", "tr"], "license": ["cc-by-4.0"], "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "Europa Education and Culture Translation Memory (EAC-TM)", "dataset_info": [{"config_name": "en2bg", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "bg"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 664252, "num_examples": 4061}], "download_size": 3521416, "dataset_size": 664252}, {"config_name": "en2cs", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "cs"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 365983, "num_examples": 3351}], "download_size": 3521416, "dataset_size": 365983}, {"config_name": "en2da", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "da"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 422079, "num_examples": 3757}], "download_size": 3521416, "dataset_size": 422079}, {"config_name": "en2de", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "de"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 579566, "num_examples": 4473}], "download_size": 3521416, "dataset_size": 579566}, {"config_name": "en2el", "features": [{"name": "translation", 
"dtype": {"translation": {"languages": ["en", "el"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 491346, "num_examples": 2818}], "download_size": 3521416, "dataset_size": 491346}, {"config_name": "en2es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "es"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 555218, "num_examples": 4303}], "download_size": 3521416, "dataset_size": 555218}, {"config_name": "en2et", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "et"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 247284, "num_examples": 2270}], "download_size": 3521416, "dataset_size": 247284}, {"config_name": "en2fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "fi"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 150560, "num_examples": 1458}], "download_size": 3521416, "dataset_size": 150560}, {"config_name": "en2fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 575579, "num_examples": 4476}], "download_size": 3521416, "dataset_size": 575579}, {"config_name": "en2hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "hu"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 454802, "num_examples": 3455}], 
"download_size": 3521416, "dataset_size": 454802}, {"config_name": "en2is", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "is"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 268194, "num_examples": 2206}], "download_size": 3521416, "dataset_size": 268194}, {"config_name": "en2it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "it"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 270634, "num_examples": 2170}], "download_size": 3521416, "dataset_size": 270634}, {"config_name": "en2lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "lt"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 358844, "num_examples": 3386}], "download_size": 3521416, "dataset_size": 358844}, {"config_name": "en2lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "lv"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 437487, "num_examples": 3880}], "download_size": 3521416, "dataset_size": 437487}, {"config_name": "en2mt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "mt"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 178675, "num_examples": 1722}], "download_size": 3521416, "dataset_size": 178675}, {"config_name": "en2nb", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "nb"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", 
"1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 85833, "num_examples": 642}], "download_size": 3521416, "dataset_size": 85833}, {"config_name": "en2nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "nl"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 188531, "num_examples": 1805}], "download_size": 3521416, "dataset_size": 188531}, {"config_name": "en2pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "pl"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 515976, "num_examples": 4027}], "download_size": 3521416, "dataset_size": 515976}, {"config_name": "en2pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "pt"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 422125, "num_examples": 3501}], "download_size": 3521416, "dataset_size": 422125}, {"config_name": "en2ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ro"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 345468, "num_examples": 3159}], "download_size": 3521416, "dataset_size": 345468}, {"config_name": "en2sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sk"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 306049, "num_examples": 2972}], "download_size": 3521416, "dataset_size": 306049}, {"config_name": "en2sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": 
["en", "sl"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 577524, "num_examples": 4644}], "download_size": 3521416, "dataset_size": 577524}, {"config_name": "en2sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sv"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 304954, "num_examples": 2909}], "download_size": 3521416, "dataset_size": 304954}, {"config_name": "en2tr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "tr"]}}}, {"name": "sentence_type", "dtype": {"class_label": {"names": {"0": "form_data", "1": "sentence_data"}}}}], "splits": [{"name": "train", "num_bytes": 328267, "num_examples": 3198}], "download_size": 3521416, "dataset_size": 328267}]}
2024-01-18T11:03:25+00:00
[]
[ "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hr", "hu", "is", "it", "lt", "lv", "mt", "nl", "no", "pl", "pt", "ro", "sk", "sl", "sv", "tr" ]
TAGS #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Croatian #language-Hungarian #language-Icelandic #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #language-Turkish #license-cc-by-4.0 #region-us
# Dataset Card for Europa Education and Culture Translation Memory (EAC-TM) ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Paper: URL - Point of Contact: ralf.steinberg@URL ### Dataset Summary This dataset is a corpus of manually produced translations from english to up to 25 languages, released in 2012 by the European Union's Directorate General for Education and Culture (EAC). To load a language pair that is not part of the config, just specify the language code as language pair. For example, if you want to translate Czech to Greek: 'dataset = load_dataset("europa_eac_tm", language_pair=("cs", "el"))' ### Supported Tasks and Leaderboards - 'text2text-generation': the dataset can be used to train a model for 'machine-translation'. Machine translation models are usually evaluated using metrics such as BLEU, ROUGE or SacreBLEU. You can use the mBART model for this task. This task has active leaderboards which can be found at URL which usually rank models based on BLEU score. ### Languages The sentences in this dataset were originally written in English (source language is English) and then translated into the other languages. The sentences are extracted from electroniv forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. 
The contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data'). The dataset contains traduction of English sentences or parts of sentences to Bulgarian, Czech, Danish, Dutch, Estonian, German, Greek, Finnish, French, Croatian, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish and Turkish. Language codes: - 'bg' - 'cs' - 'da' - 'de' - 'el' - 'en' - 'es' - 'et' - 'fi' - 'fr' - 'hr' - 'hu' - 'is' - 'it' - 'lt' - 'lv' - 'mt' - 'nl' - 'no' - 'pl' - 'pt' - 'ro' - 'sk' - 'sl' - 'sv' - 'tr' ## Dataset Structure ### Data Instances ### Data Fields - 'translation': Mapping of sentences to translate (in English) and translated sentences. - 'sentence_type': Integer value, 0 if the sentence is a 'form data' (extracted from the labels and contents of drop-down menus of the source electronic forms) or 1 if the sentence is a 'reference data' (extracted from the electronic forms checkboxes). ### Data Splits The data is not splitted (only the 'train' split is available). ## Dataset Creation ### Curation Rationale The EAC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of education and culture. Also, it includes translation units for the languages Croatian (HR), Icelandic (IS), Norwegian (Bokmål, NB or Norwegian, NO) and Turkish (TR). ### Source Data #### Initial Data Collection and Normalization EAC-TM was built in the context of translating electronic forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. All documents and sentences were originally written in English (source language is English) and then translated into the other languages. 
The contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data'). Due to the different types of data, the two collections are kept separate. For example, labels can be 'Country', 'Please specify your home country' etc., while examples for reference data are 'Germany', 'Basic/general programmes', 'Education and Culture' etc.

The data consists of translations carried out between the end of the year 2008 and July 2012.

#### Who are the source language producers?

The texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.

### Annotations

#### Annotation process

Sentences were manually translated by humans.

#### Who are the annotators?

The texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

© European Union, 1995-2020

The Commission's reuse policy is implemented by the Commission Decision of 12 December 2011 on the reuse of Commission documents.

Unless otherwise indicated (e.g. in individual copyright notices), content owned by the EU on this website is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.
This means that reuse is allowed, provided appropriate credit is given and changes are indicated.

You may be required to clear additional rights if a specific content depicts identifiable private individuals or includes third-party works. To use or reproduce content that is not owned by the EU, you may need to seek permission directly from the rightholders. Software or documents covered by industrial property rights, such as patents, trade marks, registered designs, logos and names, are excluded from the Commission's reuse policy and are not licensed to you.

### Contributions

Thanks to @SBrandeis for adding this dataset.
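To make the field semantics documented above ('translation' and 'sentence_type') concrete, here is a minimal sketch of consuming one EAC-TM record. The Czech/Greek strings are invented for illustration; real records come from `load_dataset("europa_eac_tm", language_pair=("cs", "el"))`.

```python
# One record, shaped as described in the "Data Fields" section.
# The sentence text below is invented; the field layout follows the card.
record = {
    "translation": {"cs": "Prosím, uveďte svou zemi.", "el": "Παρακαλώ δηλώστε τη χώρα σας."},
    "sentence_type": 0,  # 0 = form data (labels/drop-down menus), 1 = reference data (checkboxes)
}

source = record["translation"]["cs"]
target = record["translation"]["el"]
kind = "form data" if record["sentence_type"] == 0 else "reference data"
print(f"[{kind}] {source} -> {target}")
```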
[ "# Dataset Card for Europa Education and Culture Translation Memory (EAC-TM)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: ralf.steinberg@URL", "### Dataset Summary\n\nThis dataset is a corpus of manually produced translations from english to up to 25 languages, released in 2012 by the European Union's Directorate General for Education and Culture (EAC).\n\nTo load a language pair that is not part of the config, just specify the language code as language pair. For example, if you want to translate Czech to Greek:\n\n'dataset = load_dataset(\"europa_eac_tm\", language_pair=(\"cs\", \"el\"))'", "### Supported Tasks and Leaderboards\n\n- 'text2text-generation': the dataset can be used to train a model for 'machine-translation'. Machine translation models are usually evaluated using metrics such as BLEU, ROUGE or SacreBLEU. You can use the mBART model for this task. This task has active leaderboards which can be found at URL which usually rank models based on BLEU score.", "### Languages\n\nThe sentences in this dataset were originally written in English (source language is English) and then translated into the other languages. The sentences are extracted from electroniv forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. 
The contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data').\n\nThe dataset contains traduction of English sentences or parts of sentences to Bulgarian, Czech, Danish, Dutch, Estonian, German, Greek, Finnish, French, Croatian, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish and Turkish.\n\nLanguage codes:\n- 'bg'\n- 'cs'\n- 'da'\n- 'de'\n- 'el'\n- 'en'\n- 'es'\n- 'et'\n- 'fi'\n- 'fr'\n- 'hr'\n- 'hu'\n- 'is'\n- 'it'\n- 'lt'\n- 'lv'\n- 'mt'\n- 'nl'\n- 'no'\n- 'pl'\n- 'pt'\n- 'ro'\n- 'sk'\n- 'sl'\n- 'sv'\n- 'tr'", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'translation': Mapping of sentences to translate (in English) and translated sentences.\n\n- 'sentence_type': Integer value, 0 if the sentence is a 'form data' (extracted from the labels and contents of drop-down menus of the source electronic forms) or 1 if the sentence is a 'reference data' (extracted from the electronic forms checkboxes).", "### Data Splits\n\nThe data is not splitted (only the 'train' split is available).", "## Dataset Creation", "### Curation Rationale\n\nThe EAC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of education and culture. Also, it includes translation units for the languages Croatian (HR), Icelandic (IS), Norwegian (Bokmål, NB or Norwegian, NO) and Turkish (TR).", "### Source Data", "#### Initial Data Collection and Normalization\n\nEAC-TM was built in the context of translating electronic forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. 
All documents and sentences were originally written in English (source language is English) and then translated into the other languages.\n\nThe contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data'). Due to the different types of data, the two collections are kept separate. For example, labels can be 'Country', 'Please specify your home country' etc., while examples for reference data are 'Germany', 'Basic/general programmes', 'Education and Culture' etc.\n\nThe data consists of translations carried out between the end of the year 2008 and July 2012.", "#### Who are the source language producers?\n\nThe texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.", "### Annotations", "#### Annotation process\n\nSentences were manually translated by humans.", "#### Who are the annotators?\n\nThe texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n© European Union, 1995-2020\n\nThe Commission's reuse policy is implemented by the Commission Decision of 12 December 2011 on the reuse of Commission documents.\n\nUnless otherwise indicated (e.g. 
in individual copyright notices), content owned by the EU on this website is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence. This means that reuse is allowed, provided appropriate credit is given and changes are indicated.\n\nYou may be required to clear additional rights if a specific content depicts identifiable private individuals or includes third-party works. To use or reproduce content that is not owned by the EU, you may need to seek permission directly from the rightholders. Software or documents covered by industrial property rights, such as patents, trade marks, registered designs, logos and names, are excluded from the Commission's reuse policy and are not licensed to you.", "### Contributions\n\nThanks to @SBrandeis for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Croatian #language-Hungarian #language-Icelandic #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #language-Turkish #license-cc-by-4.0 #region-us \n", "# Dataset Card for Europa Education and Culture Translation Memory (EAC-TM)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact: ralf.steinberg@URL", "### Dataset Summary\n\nThis dataset is a corpus of manually produced translations from english to up to 25 languages, released in 2012 by the European Union's Directorate General for Education and Culture (EAC).\n\nTo load a language pair that is not part of the config, just specify the language code as language pair. 
For example, if you want to translate Czech to Greek:\n\n'dataset = load_dataset(\"europa_eac_tm\", language_pair=(\"cs\", \"el\"))'", "### Supported Tasks and Leaderboards\n\n- 'text2text-generation': the dataset can be used to train a model for 'machine-translation'. Machine translation models are usually evaluated using metrics such as BLEU, ROUGE or SacreBLEU. You can use the mBART model for this task. This task has active leaderboards which can be found at URL which usually rank models based on BLEU score.", "### Languages\n\nThe sentences in this dataset were originally written in English (source language is English) and then translated into the other languages. The sentences are extracted from electroniv forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. The contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data').\n\nThe dataset contains traduction of English sentences or parts of sentences to Bulgarian, Czech, Danish, Dutch, Estonian, German, Greek, Finnish, French, Croatian, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish and Turkish.\n\nLanguage codes:\n- 'bg'\n- 'cs'\n- 'da'\n- 'de'\n- 'el'\n- 'en'\n- 'es'\n- 'et'\n- 'fi'\n- 'fr'\n- 'hr'\n- 'hu'\n- 'is'\n- 'it'\n- 'lt'\n- 'lv'\n- 'mt'\n- 'nl'\n- 'no'\n- 'pl'\n- 'pt'\n- 'ro'\n- 'sk'\n- 'sl'\n- 'sv'\n- 'tr'", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'translation': Mapping of sentences to translate (in English) and translated sentences.\n\n- 'sentence_type': Integer value, 0 if the sentence is a 'form data' (extracted from the labels and contents of drop-down menus of the source electronic forms) or 1 if the sentence is a 'reference data' (extracted from the 
electronic forms checkboxes).", "### Data Splits\n\nThe data is not splitted (only the 'train' split is available).", "## Dataset Creation", "### Curation Rationale\n\nThe EAC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of education and culture. Also, it includes translation units for the languages Croatian (HR), Icelandic (IS), Norwegian (Bokmål, NB or Norwegian, NO) and Turkish (TR).", "### Source Data", "#### Initial Data Collection and Normalization\n\nEAC-TM was built in the context of translating electronic forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. All documents and sentences were originally written in English (source language is English) and then translated into the other languages.\n\nThe contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data'). Due to the different types of data, the two collections are kept separate. For example, labels can be 'Country', 'Please specify your home country' etc., while examples for reference data are 'Germany', 'Basic/general programmes', 'Education and Culture' etc.\n\nThe data consists of translations carried out between the end of the year 2008 and July 2012.", "#### Who are the source language producers?\n\nThe texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. 
They are thus not professional translators, but they are normally native speakers of the target language.", "### Annotations", "#### Annotation process\n\nSentences were manually translated by humans.", "#### Who are the annotators?\n\nThe texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n© European Union, 1995-2020\n\nThe Commission's reuse policy is implemented by the Commission Decision of 12 December 2011 on the reuse of Commission documents.\n\nUnless otherwise indicated (e.g. in individual copyright notices), content owned by the EU on this website is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence. This means that reuse is allowed, provided appropriate credit is given and changes are indicated.\n\nYou may be required to clear additional rights if a specific content depicts identifiable private individuals or includes third-party works. To use or reproduce content that is not owned by the EU, you may need to seek permission directly from the rightholders. Software or documents covered by industrial property rights, such as patents, trade marks, registered designs, logos and names, are excluded from the Commission's reuse policy and are not licensed to you.", "### Contributions\n\nThanks to @SBrandeis for adding this dataset." ]
2b92086f0faa5bcaddbd1c1b4961133d524acdd2
# Dataset Card for ECDC Translation Memory (ECDC-TM)

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://ec.europa.eu/jrc/en/language-technologies/ecdc-translation-memory](https://ec.europa.eu/jrc/en/language-technologies/ecdc-translation-memory)
- **Paper:** [https://link.springer.com/article/10.1007/s10579-014-9277-0](https://link.springer.com/article/10.1007/s10579-014-9277-0)
- **Point of Contact:** [Ralf Steinberger](mailto:[email protected])

### Dataset Summary

In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages. ECDC-TM covers 25 languages: the 23 official languages of the EU plus Norwegian (Norsk) and Icelandic.
ECDC-TM was created by translating from English into the following 24 languages: Bulgarian, Czech, Danish, Dutch, Estonian, Gaeilge (Irish), German, Greek, Finnish, French, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian (Norsk), Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish.

All documents and sentences were originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg.

To load a language pair that is not part of the config, just specify the language code as the language pair. For example, if you want to translate Czech to Greek:

`dataset = load_dataset("europa_ecdc_tm", language_pair=("cs", "el"))`

### Supported Tasks and Leaderboards

- `text2text-generation`: the dataset can be used to train a model for `machine-translation`. Machine translation models are usually evaluated using metrics such as [BLEU](https://huggingface.co/metrics/bleu), [ROUGE](https://huggingface.co/metrics/rouge) or [SacreBLEU](https://huggingface.co/metrics/sacrebleu). You can use the [mBART](https://huggingface.co/facebook/mbart-large-cc25) model for this task. This task has active leaderboards, which can be found at [https://paperswithcode.com/task/machine-translation](https://paperswithcode.com/task/machine-translation) and usually rank models based on [BLEU score](https://huggingface.co/metrics/bleu).

### Languages

All documents and sentences were originally written in English (`en`). They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg.

Translations are available in these languages: `bg`, `cs`, `da`, `de`, `el`, `en`, `es`, `et`, `fi`, `fr`, `ga`, `hu`, `is`, `it`, `lt`, `lv`, `mt`, `nl`, `no`, `pl`, `pt`, `ro`, `sk`, `sl`, `sv`.
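Since the card recommends BLEU-style metrics for evaluating models trained on this data, here is a toy illustration of the clipped n-gram precision at BLEU's core. This is only a sketch of the unigram case, not the full metric (which combines higher-order n-grams with a brevity penalty); for real evaluation, use the SacreBLEU implementation linked above.

```python
from collections import Counter

def clipped_unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens matched in the reference, with counts
    clipped so a word cannot be credited more often than it appears there."""
    cand_counts = Counter(candidate.split())
    ref_counts = Counter(reference.split())
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    return clipped / sum(cand_counts.values())

# Degenerate candidate: plain precision would be 3/3, but clipping yields 1/3,
# which is why BLEU penalises outputs that just repeat a common reference word.
print(clipped_unigram_precision("the the the", "the cat sat"))
```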
## Dataset Structure

### Data Instances

```
{
  "translation": {
    "<source_language>": "Sentence to translate",
    "<target_language>": "Translated sentence"
  }
}
```

### Data Fields

- `translation`: a multilingual `string` variable, with possible languages including `bg`, `cs`, `da`, `de`, `el`, `en`, `es`, `et`, `fi`, `fr`, `ga`, `hu`, `is`, `it`, `lt`, `lv`, `mt`, `nl`, `no`, `pl`, `pt`, `ro`, `sk`, `sl`, `sv`.

### Data Splits

The data is not split (only the `train` split is available).

## Dataset Creation

### Curation Rationale

The ECDC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of public health. Also, it includes translation units for the languages Irish (Gaeilge, GA), Norwegian (Norsk, NO) and Icelandic (IS).

### Source Data

#### Initial Data Collection and Normalization

ECDC-TM was built on the basis of the website of the European Centre for Disease Prevention and Control (ECDC). The major part of the documents talks about health-related topics (anthrax, botulism, cholera, dengue fever, hepatitis, etc.), but some of the web pages also describe the organisation ECDC (e.g. its organisation, job opportunities) and its activities (e.g. epidemic intelligence, surveillance).

#### Who are the source language producers?

All documents and sentences were originally written in English, by the ECDC website content producers.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

All documents and sentences were thus originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg.
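The record shape shown under "Data Instances" above can be turned into plain sentence pairs in a few lines. The helper below is a sketch (its name and the example sentences are invented, not part of the dataset); something like it is a typical preprocessing step before feeding the pairs to a translation model.

```python
def to_pairs(records, src="en", tgt="de"):
    """Yield (source, target) tuples from records shaped like the card's
    Data Instances example, skipping records where either side is empty."""
    for rec in records:
        s = rec["translation"].get(src, "").strip()
        t = rec["translation"].get(tgt, "").strip()
        if s and t:
            yield s, t

# Invented sample records for illustration only.
sample = [
    {"translation": {"en": "Influenza is a contagious disease.",
                     "de": "Influenza ist eine ansteckende Krankheit."}},
    {"translation": {"en": "  ", "de": "Cholera"}},  # dropped: empty source side
]
pairs = list(to_pairs(sample))
print(len(pairs))
```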
### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

Contains translations of sentences in the public healthcare domain, including technical terms (disease and treatment names, for example).

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Copyright © EU / ECDC, 2020

#### Copyright

The Work (as defined below) is provided under the terms of this Licence (or later versions of this Licence published by the European Commission). The work is protected by copyright and/or other applicable law. Any use of the work other than as authorised under this Licence or copyright law is prohibited.

The terms provided herein conform to the reuse policy established by the Commission's Reuse Decision (2011/833/EU).

By exercising any rights to the work provided here, you accept and agree to be bound by the terms of this Licence. The Owner (as defined below) grants You the rights conferred by this Licence in consideration of your acceptance of such terms and conditions.

#### Definitions

The ‘Owner’ shall mean jointly the European Union represented by the European Commission and the European Centre for Disease Prevention and Control, which are the original licensors and/or control the copyright and any other intellectual and industrial property rights related to the Work.

The ‘Work’ is the information and/or data offered to You under this Licence, according to the ‘Copyright Notice’:

Copyright (c) EU/ECDC, <YEAR>

‘You’ means the natural or legal person, or body of persons corporate or incorporate, acquiring rights under this Licence.
‘Use’ means any act which is restricted by copyright or database rights, whether in the original medium or in any other medium, and includes, without limitation, distributing, copying, adapting, or modifying as may be technically necessary to use the Work in a different mode or format. It includes ‘re‐Use’, meaning the use, communication to the public and/or distribution of the Works for purposes other than the initial purpose for which the Work was produced.

#### Rights

You are herewith granted a worldwide, royalty‐free, perpetual, non‐exclusive Licence to Use and re‐Use the Works and any modifications thereof for any commercial and non‐commercial purpose allowed by the law, provided that the following conditions are met:

a) Unmodified distributions must retain the above Copyright Notice;
b) Unmodified distributions must retain the following ‘No Warranty’ disclaimer;
c) You will not use the name of the Owner to endorse or promote products and services derived from Use of the Work without specific prior written permission.

#### No warranty

Each Work is provided ‘as is’ without, to the full extent permitted by law, representations, warranties, obligations and liabilities of any kind, either express or implied, including, but not limited to, any implied warranty of merchantability, integration, satisfactory quality and fitness for a particular purpose.

Except in the cases of wilful misconduct or damages directly caused to natural persons, the Owner will not be liable for any incidental, consequential, direct or indirect damages, including, but not limited to, the loss of data, lost profits or any other financial loss arising from the use of, or inability to use, the Work, even if the Owner has been notified of the possibility of such loss, damages, claims or costs, or for any claim by any third party. The Owner may be liable under national statutory product liability laws as far as such laws apply to the Work.
### Citation Information

```
@Article{Steinberger2014,
  author  = {Steinberger, Ralf and Ebrahim, Mohamed and Poulis, Alexandros and Carrasco-Benitez, Manuel and Schl{\"u}ter, Patrick and Przybyszewski, Marek and Gilbro, Signe},
  title   = {An overview of the European Union's highly multilingual parallel corpora},
  journal = {Language Resources and Evaluation},
  year    = {2014},
  month   = {Dec},
  day     = {01},
  volume  = {48},
  number  = {4},
  pages   = {679-707},
  issn    = {1574-0218},
  doi     = {10.1007/s10579-014-9277-0},
  url     = {https://doi.org/10.1007/s10579-014-9277-0}
}
```

### Contributions

Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset.
europa_ecdc_tm
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hu", "language:is", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hu", "is", "it", "lt", "lv", "mt", "nl", "no", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["cc-by-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "EuropaEcdcTm", "dataset_info": [{"config_name": "en2bg", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "bg"]}}}], "splits": [{"name": "train", "num_bytes": 798444, "num_examples": 2567}], "download_size": 4286636, "dataset_size": 798444}, {"config_name": "en2cs", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "cs"]}}}], "splits": [{"name": "train", "num_bytes": 585423, "num_examples": 2562}], "download_size": 4286636, "dataset_size": 585423}, {"config_name": "en2da", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "da"]}}}], "splits": [{"name": "train", "num_bytes": 545106, "num_examples": 2577}], "download_size": 4286636, "dataset_size": 545106}, {"config_name": "en2de", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "de"]}}}], "splits": [{"name": "train", "num_bytes": 588974, "num_examples": 2560}], "download_size": 4286636, "dataset_size": 588974}, {"config_name": "en2el", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "el"]}}}], "splits": [{"name": "train", "num_bytes": 849151, "num_examples": 2530}], "download_size": 4286636, "dataset_size": 849151}, {"config_name": "en2es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "es"]}}}], "splits": [{"name": "train", "num_bytes": 582798, "num_examples": 2564}], "download_size": 4286636, "dataset_size": 582798}, {"config_name": "en2et", "features": [{"name": 
"translation", "dtype": {"translation": {"languages": ["en", "et"]}}}], "splits": [{"name": "train", "num_bytes": 543554, "num_examples": 2581}], "download_size": 4286636, "dataset_size": 543554}, {"config_name": "en2fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 573069, "num_examples": 2617}], "download_size": 4286636, "dataset_size": 573069}, {"config_name": "en2fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 595489, "num_examples": 2561}], "download_size": 4286636, "dataset_size": 595489}, {"config_name": "en2ga", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ga"]}}}], "splits": [{"name": "train", "num_bytes": 286362, "num_examples": 1356}], "download_size": 4286636, "dataset_size": 286362}, {"config_name": "en2hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 600536, "num_examples": 2571}], "download_size": 4286636, "dataset_size": 600536}, {"config_name": "en2is", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "is"]}}}], "splits": [{"name": "train", "num_bytes": 557055, "num_examples": 2511}], "download_size": 4286636, "dataset_size": 557055}, {"config_name": "en2it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "it"]}}}], "splits": [{"name": "train", "num_bytes": 576797, "num_examples": 2534}], "download_size": 4286636, "dataset_size": 576797}, {"config_name": "en2lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 645429, "num_examples": 2545}], "download_size": 4286636, "dataset_size": 645429}, {"config_name": "en2lv", "features": [{"name": "translation", "dtype": {"translation": 
{"languages": ["en", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 576217, "num_examples": 2542}], "download_size": 4286636, "dataset_size": 576217}, {"config_name": "en2mt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "mt"]}}}], "splits": [{"name": "train", "num_bytes": 608263, "num_examples": 2539}], "download_size": 4286636, "dataset_size": 608263}, {"config_name": "en2nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 569643, "num_examples": 2510}], "download_size": 4286636, "dataset_size": 569643}, {"config_name": "en2no", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "no"]}}}], "splits": [{"name": "train", "num_bytes": 536725, "num_examples": 2537}], "download_size": 4286636, "dataset_size": 536725}, {"config_name": "en2pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 644402, "num_examples": 2546}], "download_size": 4286636, "dataset_size": 644402}, {"config_name": "en2pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 583638, "num_examples": 2531}], "download_size": 4286636, "dataset_size": 583638}, {"config_name": "en2ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 585159, "num_examples": 2555}], "download_size": 4286636, "dataset_size": 585159}, {"config_name": "en2sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 627797, "num_examples": 2525}], "download_size": 4286636, "dataset_size": 627797}, {"config_name": "en2sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sl"]}}}], "splits": 
[{"name": "train", "num_bytes": 594027, "num_examples": 2545}], "download_size": 4286636, "dataset_size": 594027}, {"config_name": "en2sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 546349, "num_examples": 2527}], "download_size": 4286636, "dataset_size": 546349}]}
2024-01-18T11:03:26+00:00
[]
[ "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hu", "is", "it", "lt", "lv", "mt", "nl", "no", "pl", "pt", "ro", "sk", "sl", "sv" ]
TAGS #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Irish #language-Hungarian #language-Icelandic #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-cc-by-sa-4.0 #region-us
# Dataset Card for europa_ecdc_tm ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Paper: URL - Point of Contact: Ralf Steinberger ### Dataset Summary In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages. ECDC-TM covers 25 languages: the 23 official languages of the EU plus Norwegian (Norsk) and Icelandic. ECDC-TM was created by translating from English into the following 24 languages: Bulgarian, Czech, Danish, Dutch, English, Estonian, Gaeilge (Irish), German, Greek, Finnish, French, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian (Norsk), Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish. All documents and sentences were originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg. To load a language pair that is not part of the config, just specify the language codes as the language pair. For example, if you want to translate Czech to Greek: 'dataset = load_dataset("europa_ecdc_tm", language_pair=("cs", "el"))' ### Supported Tasks and Leaderboards - 'text2text-generation': the dataset can be used to train a model for 'machine-translation'. Machine translation models are usually evaluated using metrics such as BLEU, ROUGE or SacreBLEU. 
You can use the mBART model for this task. This task has active leaderboards which can be found at URL which usually rank models based on BLEU score. ### Languages All documents and sentences were originally written in English ('en'). They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg. Translations are available in these languages: 'en', 'bg', 'cs', 'da', 'de', 'el', 'es', 'et', 'fi', 'fr', 'ga', 'hu', 'is', 'it', 'lt', 'lv', 'mt', 'nl', 'no', 'pl', 'pt', 'ro', 'sk', 'sl', 'sv'. ## Dataset Structure ### Data Instances ### Data Fields - 'translation': a multilingual 'string' variable, with possible languages including 'en', 'bg', 'cs', 'da', 'de', 'el', 'es', 'et', 'fi', 'fr', 'ga', 'hu', 'is', 'it', 'lt', 'lv', 'mt', 'nl', 'no', 'pl', 'pt', 'ro', 'sk', 'sl', 'sv'. ### Data Splits The data is not split (only the 'train' split is available). ## Dataset Creation ### Curation Rationale The ECDC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of public health. Also, it includes translation units for the languages Irish (Gaeilge, GA), Norwegian (Norsk, NO) and Icelandic (IS). ### Source Data #### Initial Data Collection and Normalization ECDC-TM was built on the basis of the website of the European Centre for Disease Prevention and Control (ECDC). The major part of the documents talks about health-related topics (anthrax, botulism, cholera, dengue fever, hepatitis, etc.), but some of the web pages also describe the organisation ECDC (e.g. its organisation, job opportunities) and its activities (e.g. epidemic intelligence, surveillance). #### Who are the source language producers? All documents and sentences were originally written in English, by the ECDC website content producers. ### Annotations #### Annotation process #### Who are the annotators? 
All documents and sentences were thus originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset Contains translations of sentences in the public healthcare domain, including technical terms (disease and treatment names for example). ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Copyright © EU / ECDC, 2020 #### Copyright The Work (as defined below) is provided under the terms of this Licence (or later versions of this Licence published by the European Commission). The work is protected by copyright and/or other applicable law. Any use of the work other than as authorised under this Licence or copyright law is prohibited. The terms provided herein conform to the reuse policy established by the Commission's Reuse Decision (2011/833/EU). By exercising any rights to the work provided here, you accept and agree to be bound by the terms of this Licence. The Owner (as defined below) grants You the rights conferred by this Licence in consideration of your acceptance of such terms and conditions. #### Definitions The ‘Owner’ shall mean jointly the European Union represented by the European Commission and the European Centre for Disease Prevention and Control, which are the original licensors and/or control the copyright and any other intellectual and industrial property rights related to the Work. The ‘Work’ is the information and/or data offered to You under this Licence, according to the ‘Copyright Notice’: Copyright (c) EU/ECDC, <YEAR> ‘You’ means the natural or legal person, or body of persons corporate or incorporate, acquiring rights under this Licence. 
‘Use’ means any act which is restricted by copyright or database rights, whether in the original medium or in any other medium, and includes, without limitation, distributing, copying, adapting, or modifying as may be technically necessary to use the Work in a different mode or format. It includes ‘re‐Use’, meaning the use, communication to the public and/or distribution of the Works for purposes other than the initial purpose for which the Work was produced. #### Rights You are herewith granted a worldwide, royalty‐free, perpetual, non‐exclusive Licence to Use and re‐Use the Works and any modifications thereof for any commercial and non‐ commercial purpose allowed by the law, provided that the following conditions are met: a) Unmodified distributions must retain the above Copyright Notice; b) Unmodified distributions must retain the following ‘No Warranty’ disclaimer; c) You will not use the name of the Owner to endorse or promote products and services derived from Use of the Work without specific prior written permission. #### No warranty Each Work is provided ‘as is’ without, to the full extent permitted by law, representations, warranties, obligations and liabilities of any kind, either express or implied, including, but not limited to, any implied warranty of merchantability, integration, satisfactory quality and fitness for a particular purpose. Except in the cases of wilful misconduct or damages directly caused to natural persons, the Owner will not be liable for any incidental, consequential, direct or indirect damages, including, but not limited to, the loss of data, lost profits or any other financial loss arising from the use of, or inability to use, the Work even if the Owner has been notified of the possibility of such loss, damages, claims or costs, or for any claim by any third party. The Owner may be liable under national statutory product liability laws as far as such laws apply to the Work. 
### Contributions Thanks to @SBrandeis for adding this dataset.
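The loading pattern and the `translation` field described in the card above can be sketched in a few lines. This is a hedged illustration only: the `load_dataset` call in the comment mirrors the card's own example, while the sample sentences and the `get_pair` helper are hypothetical stand-ins, not actual ECDC-TM content.

```python
# Sketch of working with one ECDC-TM record, assuming the schema stated in the
# card: a single "translation" field mapping language codes to strings.
# With network access, a real pair would be loaded as the card shows:
#   from datasets import load_dataset
#   dataset = load_dataset("europa_ecdc_tm", language_pair=("cs", "el"))
# The sentences below are hypothetical placeholders, not real corpus data.
example = {
    "translation": {
        "en": "Vaccination remains the most effective protection.",
        "cs": "Očkování zůstává nejúčinnější ochranou.",
    }
}

def get_pair(record, src, tgt):
    """Return a (source, target) sentence pair from one translation record."""
    t = record["translation"]
    return t[src], t[tgt]

src_sent, tgt_sent = get_pair(example, "en", "cs")
print(src_sent, "->", tgt_sent)
```

Because every pair shares the same `translation` schema, the same helper works for any of the 25 languages listed in the card.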
7bf01192f595a3a99cc80ddf2e55838b30cc2246
# Dataset Card for europarl-bilingual ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Statmt](http://www.statmt.org/europarl/) - **Repository:** [OPUS Europarl](https://opus.nlpl.eu/Europarl.php) - **Paper:** [Aclweb](https://www.aclweb.org/anthology/L12-1246/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary A parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research. To load a language pair which isn't part of the config, simply specify the language codes as a pair. You can find the valid pairs in the Homepage section of the Dataset Description: https://opus.nlpl.eu/Europarl.php E.g. 
`dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")` ### Supported Tasks and Leaderboards Tasks: Machine Translation, Cross Lingual Word Embeddings (CLWE) Alignment ### Languages - 21 languages, 211 bitexts - total number of files: 207,775 - total number of tokens: 759.05M - total number of sentence fragments: 30.32M Every pair of the following languages is available: - bg - cs - da - de - el - en - es - et - fi - fr - hu - it - lt - lv - nl - pl - pt - ro - sk - sl - sv ## Dataset Structure ### Data Instances Here is an example from the en-fr pair: ``` { 'translation': { 'en': 'Resumption of the session', 'fr': 'Reprise de la session' } } ``` ### Data Fields - `translation`: a dictionary containing two strings paired with a key indicating the corresponding language. ### Data Splits - `train`: only the `train` split is provided. Authors did not provide a separation of examples in `train`, `dev` and `test`. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information The data set comes with the same license as the original sources. 
Please check the information about the source that is given on http://opus.nlpl.eu/Europarl-v8.php ### Citation Information ``` @InProceedings{TIEDEMANN12.463, author = {Jörg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } ``` ### Contributions Thanks to [@lucadiliello](https://github.com/lucadiliello) for adding this dataset.
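As a sketch of how the `translation` field above is typically consumed for machine translation training, the snippet below flattens records into aligned sentence pairs. The record literal is the en-fr instance from the Data Instances section; the commented `load_dataset` call mirrors the card's example, and the `to_pairs` helper is an illustrative assumption, not part of the dataset's API.

```python
# Sketch: turning europarl_bilingual records into aligned (src, tgt) pairs.
# With network access, a pair config would be loaded as the card shows:
#   from datasets import load_dataset
#   dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")
# The record below is the en-fr example from the Data Instances section.
record = {
    "translation": {
        "en": "Resumption of the session",
        "fr": "Reprise de la session",
    }
}

def to_pairs(records, src, tgt):
    """Extract aligned source/target sentences from translation records."""
    return [(r["translation"][src], r["translation"][tgt]) for r in records]

pairs = to_pairs([record], "en", "fr")
print(pairs[0])
```

Since only a `train` split exists, any dev/test separation has to be carved out of these pairs by the user.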
europarl_bilingual
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hu", "language:it", "language:lt", "language:lv", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hu", "it", "lt", "lv", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["unknown"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "europarl-bilingual", "dataset_info": [{"config_name": "bg-cs", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "cs"]}}}], "splits": [{"name": "train", "num_bytes": 175372131, "num_examples": 402657}], "download_size": 77543700, "dataset_size": 175372131}, {"config_name": "bg-da", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "da"]}}}], "splits": [{"name": "train", "num_bytes": 169901335, "num_examples": 393449}], "download_size": 161209111, "dataset_size": 169901335}, {"config_name": "bg-de", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "de"]}}}], "splits": [{"name": "train", "num_bytes": 179830695, "num_examples": 393298}], "download_size": 173031810, "dataset_size": 179830695}, {"config_name": "bg-el", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "el"]}}}], "splits": [{"name": "train", "num_bytes": 232659899, "num_examples": 377341}], "download_size": 164911397, "dataset_size": 232659899}, {"config_name": "bg-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "en"]}}}], "splits": [{"name": "train", "num_bytes": 175002243, "num_examples": 408290}], "download_size": 175210123, "dataset_size": 175002243}, {"config_name": "bg-es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "es"]}}}], "splits": [{"name": "train", "num_bytes": 175608108, "num_examples": 388226}], "download_size": 167299422, "dataset_size": 175608108}, {"config_name": "bg-et", 
"features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "et"]}}}], "splits": [{"name": "train", "num_bytes": 169828337, "num_examples": 400712}], "download_size": 74382173, "dataset_size": 169828337}, {"config_name": "bg-fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 173345926, "num_examples": 396624}], "download_size": 159647184, "dataset_size": 173345926}, {"config_name": "bg-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 179518097, "num_examples": 393644}], "download_size": 173290519, "dataset_size": 179518097}, {"config_name": "bg-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 173346636, "num_examples": 382773}], "download_size": 77741287, "dataset_size": 173346636}, {"config_name": "bg-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "it"]}}}], "splits": [{"name": "train", "num_bytes": 178372027, "num_examples": 377822}], "download_size": 167706004, "dataset_size": 178372027}, {"config_name": "bg-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 168242178, "num_examples": 392554}], "download_size": 74614251, "dataset_size": 168242178}, {"config_name": "bg-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 173267674, "num_examples": 398355}], "download_size": 74564662, "dataset_size": 173267674}, {"config_name": "bg-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 174737553, "num_examples": 388273}], "download_size": 170765314, "dataset_size": 
174737553}, {"config_name": "bg-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 175528692, "num_examples": 395269}], "download_size": 78179477, "dataset_size": 175528692}, {"config_name": "bg-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 174578955, "num_examples": 388972}], "download_size": 170237403, "dataset_size": 174578955}, {"config_name": "bg-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 175218264, "num_examples": 389381}], "download_size": 60489220, "dataset_size": 175218264}, {"config_name": "bg-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 170977227, "num_examples": 393815}], "download_size": 77065166, "dataset_size": 170977227}, {"config_name": "bg-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 159371534, "num_examples": 380231}], "download_size": 72025259, "dataset_size": 159371534}, {"config_name": "bg-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["bg", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 172562375, "num_examples": 398236}], "download_size": 160015782, "dataset_size": 172562375}, {"config_name": "cs-da", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "da"]}}}], "splits": [{"name": "train", "num_bytes": 189814103, "num_examples": 618055}], "download_size": 174829844, "dataset_size": 189814103}, {"config_name": "cs-de", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "de"]}}}], "splits": [{"name": "train", "num_bytes": 187747987, "num_examples": 568589}], 
"download_size": 186471876, "dataset_size": 187747987}, {"config_name": "cs-el", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "el"]}}}], "splits": [{"name": "train", "num_bytes": 289333860, "num_examples": 599489}], "download_size": 178443921, "dataset_size": 289333860}, {"config_name": "cs-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "en"]}}}], "splits": [{"name": "train", "num_bytes": 196378085, "num_examples": 647095}], "download_size": 188756690, "dataset_size": 196378085}, {"config_name": "cs-es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "es"]}}}], "splits": [{"name": "train", "num_bytes": 201972536, "num_examples": 619774}], "download_size": 180848885, "dataset_size": 201972536}, {"config_name": "cs-et", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "et"]}}}], "splits": [{"name": "train", "num_bytes": 189852839, "num_examples": 636512}], "download_size": 87913231, "dataset_size": 189852839}, {"config_name": "cs-fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 193370836, "num_examples": 619320}], "download_size": 173216683, "dataset_size": 193370836}, {"config_name": "cs-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 207043213, "num_examples": 628200}], "download_size": 186873132, "dataset_size": 207043213}, {"config_name": "cs-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 201392624, "num_examples": 616160}], "download_size": 91341961, "dataset_size": 201392624}, {"config_name": "cs-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "it"]}}}], "splits": [{"name": "train", "num_bytes": 
203150534, "num_examples": 607017}], "download_size": 181266237, "dataset_size": 203150534}, {"config_name": "cs-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 189504979, "num_examples": 624292}], "download_size": 88260876, "dataset_size": 189504979}, {"config_name": "cs-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 193888740, "num_examples": 627873}], "download_size": 88126869, "dataset_size": 193888740}, {"config_name": "cs-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 199512564, "num_examples": 618414}], "download_size": 184381636, "dataset_size": 199512564}, {"config_name": "cs-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 197967454, "num_examples": 621387}], "download_size": 91806300, "dataset_size": 197967454}, {"config_name": "cs-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 197178140, "num_examples": 609729}], "download_size": 183745721, "dataset_size": 197178140}, {"config_name": "cs-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 127321661, "num_examples": 392085}], "download_size": 73245197, "dataset_size": 127321661}, {"config_name": "cs-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 196186957, "num_examples": 636128}], "download_size": 90623958, "dataset_size": 196186957}, {"config_name": "cs-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "sl"]}}}], "splits": 
[{"name": "train", "num_bytes": 179909545, "num_examples": 611624}], "download_size": 85558670, "dataset_size": 179909545}, {"config_name": "cs-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["cs", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 194656792, "num_examples": 631544}], "download_size": 173672259, "dataset_size": 194656792}, {"config_name": "da-de", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "de"]}}}], "splits": [{"name": "train", "num_bytes": 624355083, "num_examples": 1928414}], "download_size": 276778385, "dataset_size": 624355083}, {"config_name": "da-el", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "el"]}}}], "splits": [{"name": "train", "num_bytes": 604008313, "num_examples": 1280579}], "download_size": 265542591, "dataset_size": 604008313}, {"config_name": "da-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "en"]}}}], "splits": [{"name": "train", "num_bytes": 612701093, "num_examples": 1991647}], "download_size": 279497322, "dataset_size": 612701093}, {"config_name": "da-es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "es"]}}}], "splits": [{"name": "train", "num_bytes": 631311642, "num_examples": 1943931}], "download_size": 271357896, "dataset_size": 631311642}, {"config_name": "da-et", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "et"]}}}], "splits": [{"name": "train", "num_bytes": 182908097, "num_examples": 635018}], "download_size": 171538628, "dataset_size": 182908097}, {"config_name": "da-fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 599820497, "num_examples": 1917260}], "download_size": 263430295, "dataset_size": 599820497}, {"config_name": "da-fr", "features": [{"name": "translation", "dtype": {"translation": 
{"languages": ["da", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 658108095, "num_examples": 1992590}], "download_size": 277504353, "dataset_size": 658108095}, {"config_name": "da-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 196114245, "num_examples": 617519}], "download_size": 174981657, "dataset_size": 196114245}, {"config_name": "da-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "it"]}}}], "splits": [{"name": "train", "num_bytes": 630400040, "num_examples": 1876703}], "download_size": 271654671, "dataset_size": 630400040}, {"config_name": "da-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 184071192, "num_examples": 614923}], "download_size": 171931855, "dataset_size": 184071192}, {"config_name": "da-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 188638250, "num_examples": 627809}], "download_size": 171781368, "dataset_size": 188638250}, {"config_name": "da-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 634339405, "num_examples": 1987498}], "download_size": 275140635, "dataset_size": 634339405}, {"config_name": "da-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 193218656, "num_examples": 642544}], "download_size": 175344681, "dataset_size": 193218656}, {"config_name": "da-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 631413013, "num_examples": 1930454}], "download_size": 274286241, "dataset_size": 631413013}, {"config_name": "da-ro", "features": [{"name": 
"translation", "dtype": {"translation": {"languages": ["da", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 124974166, "num_examples": 388156}], "download_size": 156965207, "dataset_size": 124974166}, {"config_name": "da-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 190277240, "num_examples": 621907}], "download_size": 174378230, "dataset_size": 190277240}, {"config_name": "da-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 173968152, "num_examples": 595944}], "download_size": 169356730, "dataset_size": 173968152}, {"config_name": "da-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["da", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 567189130, "num_examples": 1871171}], "download_size": 263342660, "dataset_size": 567189130}, {"config_name": "de-el", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "el"]}}}], "splits": [{"name": "train", "num_bytes": 603303137, "num_examples": 1223026}], "download_size": 277232265, "dataset_size": 603303137}, {"config_name": "de-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "en"]}}}], "splits": [{"name": "train", "num_bytes": 641864487, "num_examples": 1961119}], "download_size": 291376506, "dataset_size": 641864487}, {"config_name": "de-es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "es"]}}}], "splits": [{"name": "train", "num_bytes": 651057814, "num_examples": 1887879}], "download_size": 283096221, "dataset_size": 651057814}, {"config_name": "de-et", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "et"]}}}], "splits": [{"name": "train", "num_bytes": 181554876, "num_examples": 578248}], "download_size": 183218377, "dataset_size": 181554876}, 
{"config_name": "de-fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 621960938, "num_examples": 1871185}], "download_size": 275244245, "dataset_size": 621960938}, {"config_name": "de-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 680963340, "num_examples": 1942666}], "download_size": 289325334, "dataset_size": 680963340}, {"config_name": "de-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 193068884, "num_examples": 563571}], "download_size": 186625855, "dataset_size": 193068884}, {"config_name": "de-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "it"]}}}], "splits": [{"name": "train", "num_bytes": 653857504, "num_examples": 1832989}], "download_size": 283411719, "dataset_size": 653857504}, {"config_name": "de-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 182429076, "num_examples": 565892}], "download_size": 183552115, "dataset_size": 182429076}, {"config_name": "de-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 186374102, "num_examples": 573226}], "download_size": 183437158, "dataset_size": 186374102}, {"config_name": "de-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 655711533, "num_examples": 1934111}], "download_size": 286849380, "dataset_size": 655711533}, {"config_name": "de-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 189642761, "num_examples": 579166}], "download_size": 
187004630, "dataset_size": 189642761}, {"config_name": "de-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 654723289, "num_examples": 1884176}], "download_size": 286068045, "dataset_size": 654723289}, {"config_name": "de-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 133686126, "num_examples": 385663}], "download_size": 168794955, "dataset_size": 133686126}, {"config_name": "de-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 187484752, "num_examples": 569381}], "download_size": 186001546, "dataset_size": 187484752}, {"config_name": "de-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 171891826, "num_examples": 546212}], "download_size": 180994167, "dataset_size": 171891826}, {"config_name": "de-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 590635137, "num_examples": 1842026}], "download_size": 275145356, "dataset_size": 590635137}, {"config_name": "el-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "en"]}}}], "splits": [{"name": "train", "num_bytes": 606689426, "num_examples": 1292180}], "download_size": 279571396, "dataset_size": 606689426}, {"config_name": "el-es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "es"]}}}], "splits": [{"name": "train", "num_bytes": 621773509, "num_examples": 1272383}], "download_size": 271592910, "dataset_size": 621773509}, {"config_name": "el-et", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "et"]}}}], "splits": [{"name": "train", "num_bytes": 282330974, 
"num_examples": 599915}], "download_size": 175257825, "dataset_size": 282330974}, {"config_name": "el-fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 583209381, "num_examples": 1227612}], "download_size": 263682672, "dataset_size": 583209381}, {"config_name": "el-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 637660521, "num_examples": 1290796}], "download_size": 277664049, "dataset_size": 637660521}, {"config_name": "el-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 293591416, "num_examples": 586250}], "download_size": 178679940, "dataset_size": 293591416}, {"config_name": "el-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "it"]}}}], "splits": [{"name": "train", "num_bytes": 619754868, "num_examples": 1231222}], "download_size": 271890467, "dataset_size": 619754868}, {"config_name": "el-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 281773875, "num_examples": 590850}], "download_size": 175584581, "dataset_size": 281773875}, {"config_name": "el-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 287747485, "num_examples": 596929}], "download_size": 175479598, "dataset_size": 287747485}, {"config_name": "el-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 619747333, "num_examples": 1277297}], "download_size": 275234928, "dataset_size": 619747333}, {"config_name": "el-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "pl"]}}}], "splits": 
[{"name": "train", "num_bytes": 291216179, "num_examples": 591069}], "download_size": 179121800, "dataset_size": 291216179}, {"config_name": "el-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 619089974, "num_examples": 1261188}], "download_size": 274510323, "dataset_size": 619089974}, {"config_name": "el-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 186445257, "num_examples": 372839}], "download_size": 160638758, "dataset_size": 186445257}, {"config_name": "el-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 290180513, "num_examples": 600684}], "download_size": 178030033, "dataset_size": 290180513}, {"config_name": "el-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 269700597, "num_examples": 579109}], "download_size": 172981018, "dataset_size": 269700597}, {"config_name": "el-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["el", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 598841855, "num_examples": 1273743}], "download_size": 264310725, "dataset_size": 598841855}, {"config_name": "en-es", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "es"]}}}], "splits": [{"name": "train", "num_bytes": 645806091, "num_examples": 2009073}], "download_size": 285275775, "dataset_size": 645806091}, {"config_name": "en-et", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "et"]}}}], "splits": [{"name": "train", "num_bytes": 190057019, "num_examples": 651236}], "download_size": 185547113, "dataset_size": 190057019}, {"config_name": "en-fi", "features": [{"name": "translation", "dtype": {"translation": 
{"languages": ["en", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 612796933, "num_examples": 1969624}], "download_size": 277526569, "dataset_size": 612796933}, {"config_name": "en-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 674922213, "num_examples": 2051014}], "download_size": 291576418, "dataset_size": 674922213}, {"config_name": "en-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 200219937, "num_examples": 625178}], "download_size": 189011893, "dataset_size": 200219937}, {"config_name": "en-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "it"]}}}], "splits": [{"name": "train", "num_bytes": 649121845, "num_examples": 1946253}], "download_size": 285912672, "dataset_size": 649121845}, {"config_name": "en-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 188689136, "num_examples": 634284}], "download_size": 185983375, "dataset_size": 188689136}, {"config_name": "en-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 193229251, "num_examples": 639318}], "download_size": 185755567, "dataset_size": 193229251}, {"config_name": "en-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 648639286, "num_examples": 2027447}], "download_size": 289379311, "dataset_size": 648639286}, {"config_name": "en-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 197111400, "num_examples": 631160}], "download_size": 189526719, "dataset_size": 197111400}, {"config_name": "en-pt", "features": [{"name": 
"translation", "dtype": {"translation": {"languages": ["en", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 649484557, "num_examples": 2002943}], "download_size": 288280201, "dataset_size": 649484557}, {"config_name": "en-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 127546377, "num_examples": 400356}], "download_size": 170919568, "dataset_size": 127546377}, {"config_name": "en-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 194301334, "num_examples": 639958}], "download_size": 188348297, "dataset_size": 194301334}, {"config_name": "en-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 179662136, "num_examples": 624803}], "download_size": 182965262, "dataset_size": 179662136}, {"config_name": "en-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 583167767, "num_examples": 1892723}], "download_size": 277758290, "dataset_size": 583167767}, {"config_name": "es-et", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "et"]}}}], "splits": [{"name": "train", "num_bytes": 194077194, "num_examples": 618350}], "download_size": 177610241, "dataset_size": 194077194}, {"config_name": "es-fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 624352744, "num_examples": 1901596}], "download_size": 269239484, "dataset_size": 624352744}, {"config_name": "es-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 686124508, "num_examples": 1982990}], "download_size": 283235952, "dataset_size": 686124508}, 
{"config_name": "es-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 207128226, "num_examples": 604007}], "download_size": 181057656, "dataset_size": 207128226}, {"config_name": "es-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "it"]}}}], "splits": [{"name": "train", "num_bytes": 659832078, "num_examples": 1880982}], "download_size": 277595675, "dataset_size": 659832078}, {"config_name": "es-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 195424327, "num_examples": 611082}], "download_size": 178003980, "dataset_size": 195424327}, {"config_name": "es-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 199870901, "num_examples": 615496}], "download_size": 177847154, "dataset_size": 199870901}, {"config_name": "es-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 659669649, "num_examples": 1954351}], "download_size": 281116315, "dataset_size": 659669649}, {"config_name": "es-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 203960308, "num_examples": 609297}], "download_size": 181528675, "dataset_size": 203960308}, {"config_name": "es-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 660610724, "num_examples": 1933321}], "download_size": 280106119, "dataset_size": 660610724}, {"config_name": "es-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 132099300, "num_examples": 387653}], "download_size": 
163044165, "dataset_size": 132099300}, {"config_name": "es-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 201711884, "num_examples": 619027}], "download_size": 180405877, "dataset_size": 201711884}, {"config_name": "es-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 185526475, "num_examples": 599168}], "download_size": 175277856, "dataset_size": 185526475}, {"config_name": "es-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 594313079, "num_examples": 1826855}], "download_size": 269509656, "dataset_size": 594313079}, {"config_name": "et-fi", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 186411056, "num_examples": 620939}], "download_size": 169999062, "dataset_size": 186411056}, {"config_name": "et-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 199983753, "num_examples": 630126}], "download_size": 183656005, "dataset_size": 199983753}, {"config_name": "et-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 195505472, "num_examples": 628044}], "download_size": 88087464, "dataset_size": 195505472}, {"config_name": "et-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "it"]}}}], "splits": [{"name": "train", "num_bytes": 195809060, "num_examples": 607088}], "download_size": 178033859, "dataset_size": 195809060}, {"config_name": "et-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 181591116, 
"num_examples": 622003}], "download_size": 85049307, "dataset_size": 181591116}, {"config_name": "et-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 186830733, "num_examples": 637468}], "download_size": 84838432, "dataset_size": 186830733}, {"config_name": "et-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 192674741, "num_examples": 621150}], "download_size": 181153226, "dataset_size": 192674741}, {"config_name": "et-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 191037236, "num_examples": 639046}], "download_size": 88518099, "dataset_size": 191037236}, {"config_name": "et-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 191956598, "num_examples": 616238}], "download_size": 180565606, "dataset_size": 191956598}, {"config_name": "et-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 122191834, "num_examples": 389087}], "download_size": 70103283, "dataset_size": 122191834}, {"config_name": "et-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 188728692, "num_examples": 634168}], "download_size": 87465164, "dataset_size": 188728692}, {"config_name": "et-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 172379502, "num_examples": 609731}], "download_size": 82340544, "dataset_size": 172379502}, {"config_name": "et-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["et", "sv"]}}}], "splits": [{"name": 
"train", "num_bytes": 189514511, "num_examples": 656646}], "download_size": 170410673, "dataset_size": 189514511}, {"config_name": "fi-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 658941046, "num_examples": 1964126}], "download_size": 275801815, "dataset_size": 658941046}, {"config_name": "fi-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 199866442, "num_examples": 606348}], "download_size": 173436552, "dataset_size": 199866442}, {"config_name": "fi-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "it"]}}}], "splits": [{"name": "train", "num_bytes": 630203540, "num_examples": 1845203}], "download_size": 269923911, "dataset_size": 630203540}, {"config_name": "fi-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 187759286, "num_examples": 613113}], "download_size": 170349480, "dataset_size": 187759286}, {"config_name": "fi-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 192467707, "num_examples": 616816}], "download_size": 170245682, "dataset_size": 192467707}, {"config_name": "fi-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 629656948, "num_examples": 1940808}], "download_size": 273354291, "dataset_size": 629656948}, {"config_name": "fi-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 196692739, "num_examples": 612689}], "download_size": 173878256, "dataset_size": 196692739}, {"config_name": "fi-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": 
["fi", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 625813096, "num_examples": 1885062}], "download_size": 272449208, "dataset_size": 625813096}, {"config_name": "fi-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 128424133, "num_examples": 391430}], "download_size": 155413895, "dataset_size": 128424133}, {"config_name": "fi-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 194407846, "num_examples": 623686}], "download_size": 172774950, "dataset_size": 194407846}, {"config_name": "fi-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 177582459, "num_examples": 596661}], "download_size": 167734483, "dataset_size": 177582459}, {"config_name": "fi-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fi", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 590589773, "num_examples": 1883314}], "download_size": 262138250, "dataset_size": 590589773}, {"config_name": "fr-hu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "hu"]}}}], "splits": [{"name": "train", "num_bytes": 213345700, "num_examples": 615791}], "download_size": 187084192, "dataset_size": 213345700}, {"config_name": "fr-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "it"]}}}], "splits": [{"name": "train", "num_bytes": 694854791, "num_examples": 1943673}], "download_size": 283931275, "dataset_size": 694854791}, {"config_name": "fr-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 200610624, "num_examples": 620660}], "download_size": 184000557, "dataset_size": 200610624}, {"config_name": "fr-lv", "features": [{"name": "translation", 
"dtype": {"translation": {"languages": ["fr", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 205814878, "num_examples": 626280}], "download_size": 183883161, "dataset_size": 205814878}, {"config_name": "fr-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 693784423, "num_examples": 2029551}], "download_size": 287389308, "dataset_size": 693784423}, {"config_name": "fr-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 210001183, "num_examples": 621402}], "download_size": 187532501, "dataset_size": 210001183}, {"config_name": "fr-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 689789351, "num_examples": 1980132}], "download_size": 286436517, "dataset_size": 689789351}, {"config_name": "fr-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 133973522, "num_examples": 387846}], "download_size": 169044065, "dataset_size": 133973522}, {"config_name": "fr-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 207736993, "num_examples": 631846}], "download_size": 186425028, "dataset_size": 207736993}, {"config_name": "fr-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 190523805, "num_examples": 606897}], "download_size": 181374508, "dataset_size": 190523805}, {"config_name": "fr-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 623443554, "num_examples": 1880390}], "download_size": 275743717, "dataset_size": 623443554}, {"config_name": "hu-it", 
"features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "it"]}}}], "splits": [{"name": "train", "num_bytes": 207768447, "num_examples": 589563}], "download_size": 181442707, "dataset_size": 207768447}, {"config_name": "hu-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 195366291, "num_examples": 610298}], "download_size": 88456570, "dataset_size": 195366291}, {"config_name": "hu-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 200475742, "num_examples": 621101}], "download_size": 88300472, "dataset_size": 200475742}, {"config_name": "hu-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 205617797, "num_examples": 605806}], "download_size": 184560090, "dataset_size": 205617797}, {"config_name": "hu-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 204095081, "num_examples": 621820}], "download_size": 91932370, "dataset_size": 204095081}, {"config_name": "hu-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 204293487, "num_examples": 599639}], "download_size": 184009255, "dataset_size": 204293487}, {"config_name": "hu-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 129428826, "num_examples": 377239}], "download_size": 73491360, "dataset_size": 129428826}, {"config_name": "hu-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 201934745, "num_examples": 618247}], "download_size": 90886028, "dataset_size": 
201934745}, {"config_name": "hu-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 187295201, "num_examples": 601671}], "download_size": 85848963, "dataset_size": 187295201}, {"config_name": "hu-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["hu", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 201010172, "num_examples": 631872}], "download_size": 173806423, "dataset_size": 201010172}, {"config_name": "it-lt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 194730310, "num_examples": 593003}], "download_size": 178347064, "dataset_size": 194730310}, {"config_name": "it-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 200106637, "num_examples": 599394}], "download_size": 178242433, "dataset_size": 200106637}, {"config_name": "it-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 667554644, "num_examples": 1919855}], "download_size": 281535603, "dataset_size": 667554644}, {"config_name": "it-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 204343831, "num_examples": 594472}], "download_size": 181869443, "dataset_size": 204343831}, {"config_name": "it-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 662888825, "num_examples": 1877432}], "download_size": 280344907, "dataset_size": 662888825}, {"config_name": "it-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 130259763, "num_examples": 367904}], 
"download_size": 163411428, "dataset_size": 130259763}, {"config_name": "it-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 201935420, "num_examples": 603467}], "download_size": 180786705, "dataset_size": 201935420}, {"config_name": "it-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 184859642, "num_examples": 579968}], "download_size": 175764011, "dataset_size": 184859642}, {"config_name": "it-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 596242670, "num_examples": 1766096}], "download_size": 269861070, "dataset_size": 596242670}, {"config_name": "lt-lv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "lv"]}}}], "splits": [{"name": "train", "num_bytes": 188060955, "num_examples": 621857}], "download_size": 85277601, "dataset_size": 188060955}, {"config_name": "lt-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 193749342, "num_examples": 613308}], "download_size": 181477191, "dataset_size": 193749342}, {"config_name": "lt-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 191712803, "num_examples": 617296}], "download_size": 88896956, "dataset_size": 191712803}, {"config_name": "lt-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 191496681, "num_examples": 603223}], "download_size": 180925582, "dataset_size": 191496681}, {"config_name": "lt-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 
122958316, "num_examples": 384679}], "download_size": 70386543, "dataset_size": 122958316}, {"config_name": "lt-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 189101772, "num_examples": 622997}], "download_size": 87817035, "dataset_size": 189101772}, {"config_name": "lt-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 173710681, "num_examples": 602442}], "download_size": 82776077, "dataset_size": 173710681}, {"config_name": "lt-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lt", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 188733924, "num_examples": 628817}], "download_size": 170761964, "dataset_size": 188733924}, {"config_name": "lv-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lv", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 198965150, "num_examples": 618352}], "download_size": 181381125, "dataset_size": 198965150}, {"config_name": "lv-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lv", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 198845485, "num_examples": 638453}], "download_size": 88758761, "dataset_size": 198845485}, {"config_name": "lv-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lv", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 198412113, "num_examples": 615580}], "download_size": 180801629, "dataset_size": 198412113}, {"config_name": "lv-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lv", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 127087848, "num_examples": 390857}], "download_size": 70314589, "dataset_size": 127087848}, {"config_name": "lv-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lv", "sk"]}}}], "splits": 
[{"name": "train", "num_bytes": 194466502, "num_examples": 629803}], "download_size": 87693678, "dataset_size": 194466502}, {"config_name": "lv-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lv", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 178009999, "num_examples": 607381}], "download_size": 82594307, "dataset_size": 178009999}, {"config_name": "lv-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["lv", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 194010201, "num_examples": 643600}], "download_size": 170626197, "dataset_size": 194010201}, {"config_name": "nl-pl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 202577192, "num_examples": 612797}], "download_size": 185014758, "dataset_size": 202577192}, {"config_name": "nl-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 666335238, "num_examples": 1957189}], "download_size": 284348205, "dataset_size": 666335238}, {"config_name": "nl-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 129250903, "num_examples": 380736}], "download_size": 166521373, "dataset_size": 129250903}, {"config_name": "nl-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 200169118, "num_examples": 622650}], "download_size": 183925381, "dataset_size": 200169118}, {"config_name": "nl-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 184588246, "num_examples": 600023}], "download_size": 178917463, "dataset_size": 184588246}, {"config_name": "nl-sv", "features": [{"name": "translation", "dtype": {"translation": 
{"languages": ["nl", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 600924875, "num_examples": 1870685}], "download_size": 273628695, "dataset_size": 600924875}, {"config_name": "pl-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pl", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 202077773, "num_examples": 608181}], "download_size": 184478728, "dataset_size": 202077773}, {"config_name": "pl-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pl", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 130211235, "num_examples": 389341}], "download_size": 73935732, "dataset_size": 130211235}, {"config_name": "pl-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pl", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 198571926, "num_examples": 624330}], "download_size": 91348753, "dataset_size": 198571926}, {"config_name": "pl-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pl", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 182038291, "num_examples": 600511}], "download_size": 86313727, "dataset_size": 182038291}, {"config_name": "pl-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pl", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 197987693, "num_examples": 657951}], "download_size": 174170909, "dataset_size": 197987693}, {"config_name": "pt-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pt", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 128921939, "num_examples": 381404}], "download_size": 165965899, "dataset_size": 128921939}, {"config_name": "pt-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pt", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 197887183, "num_examples": 611895}], "download_size": 183332222, "dataset_size": 197887183}, {"config_name": "pt-sl", "features": [{"name": 
"translation", "dtype": {"translation": {"languages": ["pt", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 182608021, "num_examples": 593455}], "download_size": 178188570, "dataset_size": 182608021}, {"config_name": "pt-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["pt", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 598677198, "num_examples": 1823402}], "download_size": 272500072, "dataset_size": 598677198}, {"config_name": "ro-sk", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ro", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 125917165, "num_examples": 387839}], "download_size": 72817194, "dataset_size": 125917165}, {"config_name": "ro-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ro", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 116060031, "num_examples": 374859}], "download_size": 67766532, "dataset_size": 116060031}, {"config_name": "ro-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ro", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 126359961, "num_examples": 390133}], "download_size": 155757942, "dataset_size": 126359961}, {"config_name": "sk-sl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["sk", "sl"]}}}], "splits": [{"name": "train", "num_bytes": 179514252, "num_examples": 609698}], "download_size": 85175048, "dataset_size": 179514252}, {"config_name": "sk-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["sk", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 195200876, "num_examples": 636353}], "download_size": 173202439, "dataset_size": 195200876}, {"config_name": "sl-sv", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["sl", "sv"]}}}], "splits": [{"name": "train", "num_bytes": 178446367, "num_examples": 608740}], "download_size": 168196323, "dataset_size": 178446367}]}
2024-02-02T09:42:38+00:00
[]
[ "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hu", "it", "lt", "lv", "nl", "pl", "pt", "ro", "sk", "sl", "sv" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-unknown #region-us
# Dataset Card for europarl-bilingual

## Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: Statmt
- Repository: OPUS Europarl
- Paper: Aclweb
- Leaderboard:
- Point of Contact:

### Dataset Summary

A parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research.

To load a language pair which isn't part of the config, all you need to do is specify the language codes as a pair. You can find the valid pairs in the Homepage section of the Dataset Description: URL
E.g.

'dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")'

### Supported Tasks and Leaderboards

Tasks: Machine Translation, Cross-Lingual Word Embeddings (CLWE) Alignment

### Languages

- 21 languages, 211 bitexts
- total number of files: 207,775
- total number of tokens: 759.05M
- total number of sentence fragments: 30.32M

Every pair of the following languages is available:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- nl
- pl
- pt
- ro
- sk
- sl
- sv

## Dataset Structure

### Data Instances

Here is an example from the en-fr pair:

### Data Fields

- 'translation': a dictionary containing two strings paired with a key indicating the corresponding language.

### Data Splits

- 'train': only the train split is provided. The authors did not provide a separation of examples into 'train', 'dev' and 'test'.

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

The data set comes with the same license as the original sources. Please check the information about the source that is given on URL

### Contributions

Thanks to @lucadiliello for adding this dataset.
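The loading pattern and the 'translation' field described in the card can be sketched as follows. This is a minimal sketch assuming the `datasets.load_dataset` call shown above; the download call is commented out so the snippet runs offline, and the Finnish/French sentences are invented placeholders, not real corpus content.

```python
# Loading a pair that is not a named config, as the card describes:
# from datasets import load_dataset
# dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")

# Per the Data Fields section, each example holds a single 'translation'
# dict keyed by language code. A placeholder example:
example = {"translation": {"fi": "Hyvää huomenta.", "fr": "Bonjour."}}

def get_pair(example, src, tgt):
    """Extract (source, target) sentence strings from one parallel example."""
    translation = example["translation"]
    return translation[src], translation[tgt]

src_text, tgt_text = get_pair(example, "fi", "fr")
print(src_text, "||", tgt_text)
```

The same `get_pair` helper works for any of the 211 bitexts, since every pair uses the identical one-field schema.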
[ "# Dataset Card for europarl-bilingual", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Statmt\n- Repository: OPUS Europarl\n- Paper: Aclweb\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nA parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research.\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n'dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")'", "### Supported Tasks and Leaderboards\n\nTasks: Machine Translation, Cross Lingual Word Embeddings (CWLE) Alignment", "### Languages\n\n- 21 languages, 211 bitexts\n- total number of files: 207,775\n- total number of tokens: 759.05M\n- total number of sentence fragments: 30.32M\n\nEvery pair of the following languages is available:\n- bg\n- cs\n- da\n- de\n- el\n- en\n- es\n- et\n- fi\n- fr\n- hu\n- it\n- lt\n- lv\n- nl\n- pl\n- pt\n- ro\n- sk\n- sl\n- sv", "## Dataset Structure", "### Data Instances\n\nHere is an example from the en-fr pair:", "### Data Fields\n\n- 'translation': a dictionary containing two strings paired with a key indicating the corresponding language.", "### Data Splits\n\n- 'train': only train split is provided. 
Authors did not provide a separation of examples in 'train', 'dev' and 'test'.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe data set comes with the same license\nas the original sources.\nPlease, check the information about the source\nthat is given on\nURL", "### Contributions\n\nThanks to @lucadiliello for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-unknown #region-us \n", "# Dataset Card for europarl-bilingual", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Statmt\n- Repository: OPUS Europarl\n- Paper: Aclweb\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nA parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). 
The main intended use is to aid statistical machine translation research.\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n'dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")'", "### Supported Tasks and Leaderboards\n\nTasks: Machine Translation, Cross Lingual Word Embeddings (CWLE) Alignment", "### Languages\n\n- 21 languages, 211 bitexts\n- total number of files: 207,775\n- total number of tokens: 759.05M\n- total number of sentence fragments: 30.32M\n\nEvery pair of the following languages is available:\n- bg\n- cs\n- da\n- de\n- el\n- en\n- es\n- et\n- fi\n- fr\n- hu\n- it\n- lt\n- lv\n- nl\n- pl\n- pt\n- ro\n- sk\n- sl\n- sv", "## Dataset Structure", "### Data Instances\n\nHere is an example from the en-fr pair:", "### Data Fields\n\n- 'translation': a dictionary containing two strings paired with a key indicating the corresponding language.", "### Data Splits\n\n- 'train': only train split is provided. Authors did not provide a separation of examples in 'train', 'dev' and 'test'.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe data set comes with the same license\nas the original sources.\nPlease, check the information about the source\nthat is given on\nURL", "### Contributions\n\nThanks to @lucadiliello for adding this dataset." ]
9696726d05c677d3bb9b344a4debafc74925d49a
# Dataset Card for "event2Mind"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://uwnlp.github.io/event2mind/](https://uwnlp.github.io/event2mind/)
- **Repository:** https://github.com/uwnlp/event2mind
- **Paper:** [Event2Mind: Commonsense Inference on Events, Intents, and Reactions](https://arxiv.org/abs/1805.06939)
- **Point of Contact:** [Hannah Rashkin](mailto:[email protected]), [Maarten Sap](mailto:[email protected])
- **Size of downloaded dataset files:** 1.30 MB
- **Size of the generated dataset:** 7.24 MB
- **Total amount of disk used:** 8.54 MB

### Dataset Summary

In Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 1.30 MB
- **Size of the generated dataset:** 7.24 MB
- **Total amount of disk used:** 8.54 MB

An example of 'validation' looks as follows.
```
{
    "Event": "It shrinks in the wash",
    "Osent": "1",
    "Otheremotion": "[\"upset\", \"angry\"]",
    "Source": "it_events",
    "Xemotion": "[\"none\"]",
    "Xintent": "[\"none\"]",
    "Xsent": ""
}
```

### Data Fields

The data fields are the same among all splits.

#### default

- `Source`: a `string` feature.
- `Event`: a `string` feature.
- `Xintent`: a `string` feature.
- `Xemotion`: a `string` feature.
- `Otheremotion`: a `string` feature.
- `Xsent`: a `string` feature.
- `Osent`: a `string` feature.

### Data Splits

| name  |train|validation|test|
|-------|----:|---------:|---:|
|default|46472|      5401|5221|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{rashkin-etal-2018-event2mind,
    title = "{E}vent2{M}ind: Commonsense Inference on Events, Intents, and Reactions",
    author = "Rashkin, Hannah and Sap, Maarten and Allaway, Emily and Smith, Noah A. and Choi, Yejin",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-1043",
    doi = "10.18653/v1/P18-1043",
    pages = "463--473",
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
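As the event2Mind validation example shows, the list-valued fields (`Xintent`, `Xemotion`, `Otheremotion`) arrive as JSON-encoded strings rather than Python lists. A small sketch of decoding them, using the card's own example row; `decode_lists` is a hypothetical helper for illustration, not part of the dataset loader.

```python
import json

# The validation example from the card, with list fields as JSON strings.
row = {
    "Event": "It shrinks in the wash",
    "Osent": "1",
    "Otheremotion": '["upset", "angry"]',
    "Source": "it_events",
    "Xemotion": '["none"]',
    "Xintent": '["none"]',
    "Xsent": "",
}

LIST_FIELDS = ("Xintent", "Xemotion", "Otheremotion")

def decode_lists(row, keys=LIST_FIELDS):
    """Return a copy of the row with the JSON-encoded list fields parsed."""
    decoded = dict(row)
    for key in keys:
        decoded[key] = json.loads(decoded[key])
    return decoded

decoded = decode_lists(row)
print(decoded["Otheremotion"])  # ['upset', 'angry']
```

Copying the row before decoding keeps the original string-typed example intact, which matters if the same batch is reused elsewhere.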
event2Mind
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "common-sense-inference", "arxiv:1805.06939", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "event2mind", "pretty_name": "Event2Mind", "tags": ["common-sense-inference"], "dataset_info": {"features": [{"name": "Source", "dtype": "string"}, {"name": "Event", "dtype": "string"}, {"name": "Xintent", "dtype": "string"}, {"name": "Xemotion", "dtype": "string"}, {"name": "Otheremotion", "dtype": "string"}, {"name": "Xsent", "dtype": "string"}, {"name": "Osent", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 649273, "num_examples": 5221}, {"name": "train", "num_bytes": 5916384, "num_examples": 46472}, {"name": "validation", "num_bytes": 672365, "num_examples": 5401}], "download_size": 1300770, "dataset_size": 7238022}}
2024-01-18T11:03:28+00:00
[ "1805.06939" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #common-sense-inference #arxiv-1805.06939 #region-us
Dataset Card for "event2Mind" ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: Event2Mind: Commonsense Inference on Events, Intents, and Reactions * Point of Contact: Hannah Rashkin, Maarten Sap * Size of downloaded dataset files: 1.30 MB * Size of the generated dataset: 7.24 MB * Total amount of disk used: 8.54 MB ### Dataset Summary In Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 1.30 MB * Size of the generated dataset: 7.24 MB * Total amount of disk used: 8.54 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'Source': a 'string' feature. * 'Event': a 'string' feature. * 'Xintent': a 'string' feature. * 'Xemotion': a 'string' feature. * 'Otheremotion': a 'string' feature. * 'Xsent': a 'string' feature. * 'Osent': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nIn Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.30 MB\n* Size of the generated dataset: 7.24 MB\n* Total amount of disk used: 8.54 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'Source': a 'string' feature.\n* 'Event': a 'string' feature.\n* 'Xintent': a 'string' feature.\n* 'Xemotion': a 'string' feature.\n* 'Otheremotion': a 'string' feature.\n* 'Xsent': a 'string' feature.\n* 'Osent': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #common-sense-inference #arxiv-1805.06939 #region-us \n", "### Dataset Summary\n\n\nIn Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.30 MB\n* Size of the generated dataset: 7.24 MB\n* Total amount of disk used: 8.54 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'Source': a 'string' feature.\n* 'Event': a 'string' feature.\n* 'Xintent': a 'string' feature.\n* 'Xemotion': a 'string' feature.\n* 'Otheremotion': a 'string' feature.\n* 'Xsent': a 'string' feature.\n* 'Osent': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
5bfbb120b1524715ae5bb28708d258184c8b76c7
# Dataset Card for Evidence Infer ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://evidence-inference.ebm-nlp.com/ - **Repository:** https://github.com/jayded/evidence-inference - **Paper:** [Evidence Inference 2.0: More Data, Better Models](https://arxiv.org/abs/2005.04177) - **Leaderboard:** http://evidence-inference.ebm-nlp.com/leaderboard/ - **Point of Contact:** [More Information Needed] ### Dataset Summary Data and code from our NAACL 2019 paper, "Inferring Which Medical Treatments Work from Reports of Clinical Trials". This work concerns inferring the results reported in clinical trials from text. The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple treatments. Each of these articles will have multiple questions, or 'prompts', associated with them. These prompts will ask about the relationship between an intervention and comparator with respect to an outcome, as reported in the trial. 
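The intervention/comparator/outcome prompt structure just described can be sketched as a small data model. This is an illustrative sketch only: the field names and the integer encoding of the three significance classes are assumptions for exposition, not the dataset's actual column names or label values.

```python
from dataclasses import dataclass

# Arbitrary encoding of the three significance classes, for illustration only.
SIG_DECREASED, NO_SIG_DIFFERENCE, SIG_INCREASED = -1, 0, 1

@dataclass
class Prompt:
    """One evidence-inference question about a trial (illustrative field names)."""
    intervention: str  # e.g. "aspirin"
    comparator: str    # e.g. "placebo"
    outcome: str       # e.g. "duration of headaches"
    label: int         # one of the three significance classes above

p = Prompt("aspirin", "placebo", "duration of headaches", SIG_DECREASED)
print(p.label)  # -1
```

Each article carries several such prompts, and a model must both pick the correct class and point to supporting evidence in the article text.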
For example, a prompt may ask about the reported effects of aspirin as compared to placebo on the duration of headaches. For the sake of this task, we assume that a particular article will report that the intervention of interest either significantly increased, significantly decreased or had no significant effect on the outcome, relative to the comparator. The dataset could be used for automatic data extraction of the results of a given RCT. This would enable readers to discover the effectiveness of different treatments without needing to read the paper. We have recently collected additional data for this task (https://arxiv.org/abs/2005.04177), which we will present at BioNLP 2020. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - English (`en`). ## Dataset Structure ### Data Instances ``` {'Text': "TITLE: Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)\n\n ABSTRACT.AIM:\nTo compare the effects of combining liraglutide (0.6, 1.2 or 1.8 mg/day) or rosiglitazone 4 mg/day (all n ≥ 228) or placebo (n = 114) with glimepiride (2–4 mg/day) on glycaemic control, body weight and safety in Type 2 diabetes.\n\nABSTRACT.METHODS:\nIn total, 1041 adults (mean ± sd), age 56 ± 10 years, weight 82 ± 17 kg and glycated haemoglobin (HbA1c) 8.4 ± 1.0% at 116 sites in 21 countries were stratified based on previous oral glucose-lowering mono : combination therapies (30 : 70%) to participate in a five-arm, 26-week, double-dummy, randomized study.\n\nABSTRACT.RESULTS:\nLiraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. Liraglutide 0.6 mg was less effective (−0.6%, baseline 8.4%). 
Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l). Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg). Main adverse events for all treatments were minor hypoglycaemia (< 10%), nausea (< 11%), vomiting (< 5%) and diarrhoea (< 8%).\n\nABSTRACT.CONCLUSIONS:\nLiraglutide added to glimepiride was well tolerated and provided improved glycaemic control and favourable weight profile.\n\nBODY.INTRODUCTION:\nMost drugs that target Type 2 diabetes (T2D) also cause weight gain or hypoglycaemia, or both, with the risk increasing with combination therapy. Glucagon-like peptide-1 (GLP-1)-based therapies stimulate insulin secretion and reduce glucagon secretion only during hyperglycaemia. GLP-1 also slows gastric emptying and reduces appetite [1]. Although American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) guidelines recommend lifestyle and metformin as initial therapy for T2D [2], sulphonylureas are used widely, particularly when metformin or thiazolidinediones are not tolerated. Glycaemic control eventually deteriorates with sulphonylureas while hypoglycaemia and weight gain are common [3]. 
Incretin therapy improves glycaemic control with low hypoglycaemic risk, while delayed gastric emptying and reduced appetite can reduce weight [1,4]. Liraglutide is a once-daily human GLP-1 analogue with 97% linear amino-acid sequence homology to human GLP-1 [5] and half-life of 13 h after subcutaneous administration that produces 24-h blood glucose control [6]. Liraglutide monotherapy for 14 weeks reduced glycated haemoglobin (HbA1c) by 1.7% and fasting plasma glucose (FPG) by 3.4 mmol/l without causing hypoglycaemia, along with weight loss (∼3 kg) compared with placebo [7]. Improvements in pancreatic B-cell function [7–9] and blood pressure [7], along with decreased glucagon secretion [7,10], also occurred. As part of the phase 3 programme [the Liraglutide Effect and Action in Diabetes (LEAD) programme] with liraglutide in > 4000 subjects with T2D as monotherapy or in combination therapy, this 26-week trial examined liraglutide plus glimepiride compared with either placebo or rosiglitazone added to glimepiride on glycaemic control and body weight.\n\nBODY.SUBJECTS AND METHODS.STUDY PARTICIPANTS:\nInclusion criteria: T2D treated with oral glucose-lowering agents (OGLAs) for ≥ 3 months; 18–80 years of age; HbA1c 7.0–11.0% (previous OGLA monotherapy) or 7.0–10.0% (previous OGLA combination therapy); body mass index (BMI) ≤ 45.0 kg/m2. Exclusion criteria: used insulin within 3 months, impaired liver or renal function, uncontrolled hypertension (≥ 180/100 mmHg), cancer or used any drugs apart from OGLAs likely to affect glucose concentrations. Subjects provided written informed consent. 
The study was conducted in accordance with good clinical practice guidelines and approved by independent ethics committees.\n\nBODY.SUBJECTS AND METHODS.STUDY DESIGN:\nThe study was a 26-week, double-blind, double-dummy, randomized, active-control, five-armed parallel (116 sites in 21 countries, primarily Europe and Asia) trial enrolling 1041 subjects (1–37 subjects per centre), all receiving glimepiride (2–4 mg/day) in combination with (Fig. 1): FIGURE 1Overview of trial design and treatment arms. one of three liraglutide doses [0.6, 1.2 or 1.8 mg, injected subcutaneously (Novo Nordisk, Bagsvaerd, Denmark) and rosiglitazone placebo];liraglutide placebo and rosiglitazone placebo;liraglutide placebo and rosiglitazone 4 mg/day (rosiglitazone; AvandiaTM; GlaxoSmithKline, London, UK). The doses of rosiglitazone and glimepiride used were determined by the highest doses approved in all participating counties. After discontinuing previous OGLAs except glimepiride, separate 2-week titration and maintenance periods with glimepiride (open-label) preceded randomization (Fig. 1). Subjects were stratified according to previous treatment (monotherapy or combination therapy). After randomization, 2-week treatment titration and 24-week treatment (maintenance) phases (Fig. 1) were completed. Liraglutide was up-titrated weekly in 0.6-mg increments until allocated doses were reached. Glimepiride could be adjusted between 2 and 4 mg/day in case of hypoglycaemia or other adverse events (AEs), while other drug doses were fixed. Liraglutide (active and placebo) was supplied in 3-ml pre-filled pens with 31G needles (Novo Nordisk). Subjects were encouraged to inject liraglutide into the upper arm, thigh or abdomen at the same time each day. Rosiglitazone and glimepiride were taken in the morning or with the first meal.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.EFFICACY:\nThe primary endpoint was change from baseline HbA1c after 26 weeks of treatment. 
Secondary endpoints included: percentages of subjects reaching HbA1c (< 7.0%, ≤ 6.5%), FPG (5.0 to ≤ 7.2 mmol/l) and postprandial plasma glucose (PPG; 10.0 mmol/l) targets [11–13]; changes in body weight, FPG, mean PPG, indices of pancreatic B-cell function [pro-insulin : insulin ratio and homeostasis model assessment (HOMA)-B], HOMA-insulin resistance (HOMA-IR) and blood pressure (BP). HbA1c was measured centrally (MDS Pharma Services, King of Prussia, PA, USA) by high performance liquid chromatography while plasma glucose (PG) was self-measured using MediSense® glucose meters (Abbott Diagnostics Inc., Abbott Park, IL, USA). Insulin and C-peptide were measured by chemiluminescence, proinsulin by ELISA, while glucagon was measured in aprotinin-treated plasma by radioimmunoassay. The proinsulin : insulin ratio was calculated from fasting insulin and fasting proinsulin. HOMA-B and HOMA-IR were both calculated from FPG and fasting insulin. Samples measured centrally were collected and transported according to detailed procedures in the MDS Pharma Services manual. Samples stored at ambient temperature were shipped by courier to the central laboratory on the same day as collection, while frozen samples were shipped every 3 weeks.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.SAFETY:\nSafety variables included hypoglycaemic episodes based on PG levels (< 3.1 mmol/l), liraglutide antibodies including cross-reacting and neutralizing antibodies, tolerability (gastrointestinal complaints) and pulse. AEs, vital signs, electrocardiogram (ECG), biochemical and haematology measures including calcitonin were also monitored. Self-treated hypoglycaemic episodes were classified as minor, while those requiring third-party assistance were considered major. 
Serum antibodies against liraglutide were measured by radioimmunoprecipitation assay.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSES:\nAll efficacy and safety analyses were based on intent-to-treat criteria, defined as subjects who were exposed to ≥ 1 dose of trial product(s). Efficacy endpoints were analysed by ancova with treatment, country and previous glucose-lowering treatment as fixed effects and baseline values as covariates. Missing data were imputed by last observation carried forward (LOCF). Sample size calculations were based on predicted HbA1c and body weight after trial completion. As the three liraglutide + glimepiride groups were to be compared with both rosiglitazone + glimepiride and glimepiride monotherapy, two calculations were performed. These sample size calculations assumed a standard deviation of 1.2% of HbA1c, the non-inferiority/superiority margin vs. active control was set to 0.4% and the difference to detect (superiority vs. placebo) was set to 0.5%. For body weight, a coefficient of variation of 3% (based on phase 2a trials for liraglutide) and a difference to detect of 3% were assumed. A combined power (calculated as the product of the marginal powers for HbA1c and body weight) of at least 85% was required. These calculations indicated that at least 168 and 81 patients completing the study would be needed for the combination and glimepiride monotherapy groups, respectively. Assuming a drop-out rate of 25%, targets for randomization were 228 in each of the combination therapy groups and 114 in the placebo group (total n = 1026). To protect against Type 1 errors, HbA1c was analysed using hierarchical testing for descending doses of liraglutide. First, superiority of liraglutide 1.8 mg to placebo was tested and, only if superior to placebo, non-inferiority to rosiglitazone was tested. 
If non-inferiority was obtained, superiority to rosiglitazone for liraglutide 1.8 mg was tested and superiority to placebo for liraglutide 1.2 mg was tested. If superiority was confirmed, non-inferiority to rosiglitazone would be tested and so on (i.e. testing sequence was stopped when hypotheses could not be rejected). Superiority was concluded when upper limits of two-sided 95% confidence intervals (CIs) for treatment differences were below 0%; non-inferiority was concluded if these values were < 0.4%; for secondary endpoints, Type 1 errors were controlled by estimating simultaneous CIs using Dunnett's method. Proportions of subjects achieving HbA1c (HbA1c < 7.0%, and ≤ 6.5%) and FPG (5.0 ≤ FPG ≤ 7.2 mmol/l) targets [13] were compared between treatments using logistic regression with allocated treatment and baseline values as covariates. Chi-square analyses assessed differences in treatments for percentages of subjects achieving no, one, two or three PPG values < 10 mmol/l [13]. Hypoglycaemic episodes were analysed under the assumption that number per subject were negatively binomially distributed using a generalized linear model, including treatment and country as fixed effects. Other safety data were compared by descriptive statistics. Values for descriptive statistics are expressed as means ± sd, while ancova results are expressed as least square means ± SEM or with 95% CI unless otherwise noted. Significance levels were set to 5% for two-sided tests and 2.5% for one-sided tests.\n\nBODY.RESULTS.DISPOSITION AND DEMOGRAPHICS:\nThe treatment groups were well balanced (Table 1). Of 1712 subjects screened, 1041 were randomized and 1040 were exposed to trial drugs; 147 subjects (14.1%) withdrew (Fig. 2). Withdrawals were higher with placebo (27%) and rosiglitazone treatment (16%) compared with liraglutide 0.6 mg (11%), liraglutide 1.2 mg (14%) and liraglutide 1.8 mg (9%) treatment. Thirty-eight subjects (3.7%) withdrew as a result of AEs (Fig. 2). 
Table 1 Demographic characteristics of study participants Liraglutide 0.6 mg ( n = 233) Liraglutide 1.2 mg ( n = 228) Liraglutide 1.8 mg ( n = 234) Placebo ( n = 114) Rosiglitazone ( n = 232) Male : female (%) 54 : 46 45 : 55 53 : 47 47 : 53 47 : 53 Age (years) 55.7 ± 9.9 57.7 ± 9.0 55.6 ± 10.0 54.7 ± 10.0 56.0 ± 9.8 Duration of diabetes (years) 6.5 (4.0,10.2) 6.7 (4.0,10.7) 6.5 (3.7,10.5) 6.5 (4.5,10.6) 6.6 (4.3,10.7) Previous on mono : combi (%) 30 : 70 31 : 69 27 : 73 32 : 68 32 : 68 FPG (mmol/l) 10.0 ± 2.4 9.8 ± 2.7 9.7 ± 2.4 9.5 ± 2.0 9.9 ± 2.5 HbA 1c (%) 8.4 ± 1.0 8.5 ± 1.1 8.5 ± 0.9 8.4 ± 1.0 8.4 ± 1.0 Diabetic retinopathy (%) 17.2 14.9 12.0 13.2 16.4 Hypertension (%) 69.1 68.0 69.7 64.9 66.8 BMI (kg/m 2 ) 30.0 ± 5.0 29.8 ± 5.1 30.0 ± 5.1 30.3 ± 5.4 29.4 ± 4.8 Weight (kg) 82.6 ± 17.7 80.0 ± 17.1 83.0 ± 18.1 81.9 ± 17.1 80.6 ± 17.0 Systolic blood pressure (mmHg) 131 ± 16 133 ± 15 132 ± 16 131 ± 15.3 133 ± 15 Data are mean ± sd and percentages, except for duration of diabetes, where data are median, 25th and 75th percentile. BMI, body mass index; FPG, fasting plasma glucose; HbA 1c , glycated haemoglobin; mono : combi, previous treatment with either monotherapy or combination therapy; sd , standard deviation. FIGURE 2Flow of patients through the study.\n\nBODY.RESULTS.EFFICACY.HBA:\nHbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). 
All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). FIGURE 3Mean glycated haemoglobin (HbA1c) by treatment and week (intent-to-treat population with last observation carried forward): (a) overall population; (b) previously on monotherapy; or (c) previously on combination therapy; (d) mean changes in HbA1c from baseline after 26 weeks of treatment. Keys: (a–c) liraglutide 0.6 mg: grey dotted line with squares; liraglutide 1.2 mg: black solid line with triangles; liraglutide 1.8 mg: black dotted line with squares; rosiglitazone: grey solid line with circles; placebo: black solid line with circles. (d) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. ****P < 0.0001 compared with placebo; ††††P < 0.0001 compared with rosiglitazone. HbA1c decreases were greater for subjects who entered from monotherapy compared with combination therapy (Fig. 3d). However, because the increase with placebo was higher for individuals entering on combination therapy (0.7 vs. 0.23%), the differences between treatment groups in favour of liraglutide were similar irrespective of whether subjects were treated previously with monotherapy or combination therapy. Neither age, gender nor BMI affected these trends.\n\nBODY.RESULTS.EFFICACY.PERCENTAGE REACHING AN HBA:\nThe percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). 
The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). FIGURE 4Subjects achieving specified glycated haemoglobin (HbA1c) levels: (a) percentage reaching HbA1c < 7.0% (American Diabetes Association/European Association for the Study of Diabetes target); (b) percentage reaching HbA1c < 6.5% (International Diabetes Federation/American Association of Clinical Endocrinologists targets); (c) cumulative distribution of HbA1c at 26 weeks for the intent-to-treat (ITT) population; and (d) for the ITT last observation carried forward (LOCF) population. Keys: (a, b) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. (c, d) liraglutide 0.6 mg: pale grey solid line; liraglutide 1.2 mg: grey solid line, liraglutide 1.8 mg: black solid line; rosiglitazone: dotted black line; placebo: dotted grey line; baseline visit: long dashed black line. ****P < 0.0001 or **P < 0.01 compared with placebo; ††††P < 0.0001 or †††P = 0.0005 compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.FASTING PLASMA GLUCOSE:\nBy week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. 
FIGURE 5Mean changes from baseline in fasting plasma glucose after 26 weeks of treatment. ****P < 0.0001 compared with placebo; ††P < 0.01 compared with rosiglitazone. The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).\n\nBODY.RESULTS.EFFICACY.POSTPRANDIAL PLASMA GLUCOSE:\nPPG was reduced similarly after each meal. The greatest reductions in mean PPG values from baseline (average of values obtained 90 min after breakfast, lunch and evening meal) occurred with liraglutide 1.2 mg (2.5 mmol/l) and liraglutide 1.8 mg (2.7 mmol/l). By comparison, the reduction from baseline in mean PPG values was 1.8 mmol/l for rosiglitazone and liraglutide 0.6 mg and 0.4 mmol/l for placebo. Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.PPG MEASUREMENTS < 10.0 MMOL/L:\nThe percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.\n\nBODY.RESULTS.BODY WEIGHT:\nMean weight at baseline was 81.6 kg. Mean reductions in weight from baseline to end of treatment were 0.2 kg with liraglutide 1.8 mg and 0.1 kg with placebo treatment, while increases occurred with either liraglutide 0.6 mg (0.7 kg), liraglutide 1.2 mg (0.3 kg) or rosiglitazone (2.1 kg) (Fig. 6). 
Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. Gender appeared to have no influence on the results, as indicated when added as a fixed effect in the ancova model. FIGURE 6Mean changes in body weight from baseline after 26 weeks of treatment. *P < 0.05 compared with placebo; ††††P < 0.0001 compared with rosiglitazone.\n\nBODY.RESULTS.INDICES OF PANCREATIC B-CELL FUNCTION AND INSULIN RESISTANCE:\nReductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051). There were no significant differences between treatments for HOMA-IR. Table 2 Selected indices of pancreatic B-cell function Variable Treatment Baseline Week 26 (LOCF) Least square difference from placebo (95% CI) Least square difference from rosiglitazone (95% CI) Proinsulin : insulin ratio Liraglutide 0.6 mg 0.42 ± 0.22 0.38 ± 0.24 −0.05 (−0.11; 0.00) −0.02 (−0.06; 0.03) Liraglutide 1.2 mg 0.45 ± 0.31 0.33 ± 0.20 −0.10 (−0.16; −0.05) † −0.07 (−0.11; −0.02) * Liraglutide 1.8 mg 0.48 ± 0.33 0.36 ± 0.20 −0.09 (−0.15; −0.03) * −0.05 (−0.10; −0.01) * Placebo 0.44 ± 0.27 0.46 ± 0.29 Rosiglitazone 0.45 ± 0.29 0.40 ± 0.20 HOMA-B (%) Liraglutide 0.6 mg 51 ± 43.3 70 ± 88.6 15 (−19.10; 49.0) 11 (−16.7; 39.0) Liraglutide 1.2 mg 71 ± 254.3 99 ± 184.3 43 (8.10; 76.9) * 39 (10.3; 67.0) * Liraglutide 1.8 mg 56 ± 84.6 91 ± 108.2 34 (−0.23; 68.5) 30 (2.00; 58.6) * Placebo 56 ± 103.3 52 ± 107.3 Rosiglitazone 46 ± 36.2 59 ± 63.3 * P ≤ 0.05; † P < 0.0001. 
CI, confidence interval; HOMA, homeostasis model assessment; LOCF, last observation carried forward.

BODY.RESULTS.BLOOD PRESSURE AND PULSE:
Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).

BODY.RESULTS.SAFETY:
The most common treatment-emergent AEs considered by investigators to be possibly or probably related to liraglutide were gastrointestinal (diarrhoea, nausea, dyspepsia and constipation) and nervous system disorders (headache and dizziness), particularly during the first 4 weeks. Nausea was highest with liraglutide 1.2 mg (10.5%) and lowest with placebo (1.8%). Vomiting (4.4%) and diarrhoea (7.9%) were also higher with liraglutide 1.2 mg. Withdrawals because of nausea ranged from 0.9 to 2.2%, vomiting from 0.4 to 0.9% and diarrhoea from 0 to 1.3%. Nausea was more common with liraglutide compared with placebo and rosiglitazone, particularly during the first 4 weeks (Fig. 7). The frequency of nausea was lower in the liraglutide 0.6 mg treatment group than with the higher doses of liraglutide. Generally, the occurrence of nausea dissipated from 4 to 26 weeks of treatment in all groups using liraglutide (Fig. 7).

FIGURE 7. Percentage of subjects experiencing nausea over the course of the study.
Key: liraglutide 0.6 mg with glimepiride: black line with filled circles; liraglutide 1.2 mg with glimepiride: black line with filled triangles; liraglutide 1.8 mg with glimepiride: grey line with hollow circles; glimepiride alone: grey line with filled squares; rosiglitazone with glimepiride: grey line with hollow triangles.

The incidence of serious AEs ranged between 3 and 5%: placebo (3%), rosiglitazone (3%), liraglutide 0.6 mg (3%), liraglutide 1.2 mg (4%) and liraglutide 1.8 mg (5%). Most treatment-emergent serious AEs were judged by investigators to be unlikely to be related to trial products. No deaths were reported during the trial. One subject developed chronic pancreatitis whilst taking liraglutide 0.6 mg; the person had no reported previous history of pancreatitis. The subject continued on liraglutide therapy and completed the trial. At screening, five patients had been previously diagnosed with pancreatitis. As pancreatitis was not an exclusion criterion, these patients were randomized as follows: one to liraglutide 0.6 mg, one to liraglutide 1.2 mg, two to liraglutide 1.8 mg and one to rosiglitazone + glimepiride. All five patients completed the trial without reporting pancreatitis as an adverse event.

Hypoglycaemia was infrequent with all treatments. One major hypoglycaemic episode (self-measured blood glucose = 3.0 mmol/l) occurred 9 days after treatment started in a subject receiving liraglutide 1.8 mg in combination with glimepiride. Although medical assistance was not needed, the subject required third-party assistance. The investigator judged the episode as likely to be related to glimepiride and reduced the dose from 4 to 3 mg after the incident. Minor hypoglycaemia occurred in < 10% of subjects for any treatment. The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e.
glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values. Antibodies to liraglutide were found in 9–13% of subjects treated with liraglutide. No significant effects of these antibodies on HbA1c were found in pooled analyses of four trials including the current study. There were no clinically relevant changes in ophthalmoscopy, biochemistry, urinalysis, haematology or ECG assessments. No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.

BODY.DISCUSSION:
Treatment with liraglutide plus glimepiride was superior to glimepiride monotherapy at all doses of liraglutide and superior to rosiglitazone plus glimepiride for the two higher liraglutide doses for improving HbA1c. Similar findings for reductions in FPG and PPG highlight improved 24-h glucose control with once-daily liraglutide, with substantially more subjects reaching glycaemic targets, particularly with liraglutide 1.8 mg. Improvements in pancreatic B-cell function were larger with liraglutide 1.2 and 1.8 mg compared with rosiglitazone. Liraglutide was well tolerated and occurrence of gastrointestinal AEs was low overall, particularly after week 4. Although rates of hypoglycaemia were low in all treatment groups (< 10%), minor hypoglycaemic events occurred more often in patients treated with glimepiride plus liraglutide 1.2 or 1.8 mg than with glimepiride alone.
It should be noted, however, that patients treated with liraglutide 1.2 or 1.8 mg achieved a lower HbA1c than those receiving glimepiride monotherapy. At lower HbA1c levels, sulphonylureas are known to elicit hypoglycaemia more readily than at higher levels. In clinical practice it may be possible to reduce the dose of sulphonylurea (when used with liraglutide) to minimize the risk of hypoglycaemia and maintain HbA1c improvements. Although effects on weight were modest, liraglutide produced a more favourable weight profile than rosiglitazone, which produced substantial weight gain. In other studies with liraglutide, subjects adding a 1.8-mg dose to metformin lost 2.8 kg [14], while those adding both metformin and glimepiride lost 1.8 kg compared with placebo [15] (both over 26 weeks) and those on liraglutide monotherapy (1.8 mg) lost 2.45 kg over 52 weeks [16]. In our study, because sulphonylureas usually cause weight gain, inclusion or optimization of glimepiride, but not metformin, may have mitigated the weight benefits typically associated with liraglutide. Lack of weight effects could also be secondary to lower baseline body weight, withdrawal of previous metformin treatment or defensive snacking to minimize the risk of hypoglycaemia. It might have been expected that the greater weight gain with rosiglitazone compared with liraglutide 1.8 mg would be associated with a concurrent increase in insulin resistance with rosiglitazone. The absence of this effect could reflect the insulin-sensitizing nature of rosiglitazone. Improvements in pancreatic B-cell function associated with liraglutide are consistent with other studies [7–9]. Study strengths include the inclusion of both placebo and active (rosiglitazone) comparators and the fact that OGLAs were optimized (not maximized) before randomization to minimize the risk of hypoglycaemia. Limitations of the study include the short duration of the trial and restrictions on glimepiride and rosiglitazone in some countries that precluded maximal dosing.
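The B-cell function and insulin-resistance indices reported above (HOMA-B, HOMA-IR) are derived from fasting glucose and insulin. The paper does not spell out the formulas, so the sketch below uses the standard homeostasis-model-assessment equations as an assumption; the input values are purely illustrative, not subject data from this trial.

```python
def homa_b(fpg_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-B (%), an index of pancreatic B-cell function.

    Standard HOMA formula (assumed, not stated in the paper):
    20 * fasting insulin / (fasting glucose - 3.5).
    """
    return 20.0 * insulin_uU_ml / (fpg_mmol_l - 3.5)


def homa_ir(fpg_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-IR, an index of insulin resistance.

    Standard HOMA formula (assumed): fasting glucose * fasting insulin / 22.5.
    """
    return fpg_mmol_l * insulin_uU_ml / 22.5


# Illustrative values only: FPG 9.3 mmol/l, fasting insulin 15 uU/ml.
print(round(homa_b(9.3, 15.0), 1))   # B-cell function, %
print(round(homa_ir(9.3, 15.0), 2))  # insulin resistance index
```

Both indices depend only on a single fasting sample, which is why the paper can report them at baseline and at week 26 (LOCF) without extra testing visits.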
The impact of using other GLP-1-based treatments [such as exenatide, or the dipeptidyl peptidase-4 (DPP-4) inhibitor, sitagliptin] with sulphonylureas in subjects with T2D has been studied. In a 30-week American trial where twice-daily exenatide was added to sulphonylureas, HbA1c was reduced by 0.46% from baseline with 5 μg and 0.86% with 10 μg [17], compared with 1.1% with liraglutide 1.8 or 1.2 mg. This reduction in HbA1c with liraglutide is consistent with other LEAD trials investigating liraglutide as monotherapy or in combination with various OGLA drugs; in these trials, HbA1c was reduced by 1–1.5% [14,16,18–20]. Reductions in FPG with exenatide were 0.3 and 0.6 mmol/l from baseline with 5 μg and 10 μg, respectively, compared with 1.4 mmol/l with liraglutide 1.8 mg; weight loss of 1.6 kg occurred with exenatide 10 μg compared with 0.2 kg for liraglutide 1.8 mg [17]. Differences in weight effects may be a result of the lower baseline weight in this trial (82 kg) compared with the exenatide trial (96 kg) and discontinuation of previous metformin therapy, unlike the exenatide trial where exenatide was added to previous sulphonylurea monotherapy [17]. Other large-scale trials with liraglutide in combination with sulphonylureas have demonstrated weight loss of 2–3 kg [18,20]. Withdrawals from exenatide trials ranged from 24 to 30%, compared with 9–14% with liraglutide in this study. Nausea with exenatide ranged from 39% with 5 μg to 51% with 10 μg [17], compared with 10.5% for liraglutide. Furthermore, 41% were positive for anti-exenatide antibodies, compared with 9–13% with anti-liraglutide antibodies. With sitagliptin 100 mg once daily for 24 weeks, HbA1c decreased by 0.3% from baseline in subjects receiving glimepiride, with 11% achieving an HbA1c < 7.0% [21]. Reductions in FPG and PPG from baseline were 0.05 and 1.4 mmol/l, respectively, while weight increased by 0.8 kg and the prevalence of nausea was < 1%.
Although head-to-head trials are required to test true differences between these agents, the marked effects of liraglutide on FPG may be a result of the consistent blood levels of liraglutide maintained over 24 h, compared with exenatide, which must be administered 60 min before breakfast and dinner and has a half-life of 1.5–3.6 h [22]. In a recent 26-week head-to-head trial comparing liraglutide with exenatide, liraglutide produced a 0.3% greater decrease in HbA1c (P < 0.0001) [20]. Because DPP-4 inhibitors inhibit the degradation of GLP-1, the efficacy of sitagliptin depends on levels of endogenous GLP-1, which are physiologically low compared with the much higher pharmacological levels of liraglutide. Pharmacological levels may be needed to induce satiety, weight loss and possibly larger HbA1c reductions. Liraglutide is an effective and well-tolerated once-daily human GLP-1 analogue that improves overall glycaemic control and indices of pancreatic B-cell function with minimal weight gain and risk of hypoglycaemia when used in combination with a sulphonylurea for T2D.

BODY.COMPETING INTERESTS:
The study was funded by Novo Nordisk, the manufacturer of liraglutide. In collaboration with the investigators, Novo Nordisk was responsible for the study design, protocol, statistical analysis plans, oversight, analysis and reporting of the results. Data were recorded at the clinical centres and maintained by the sponsor. The LEAD-1 SU study group had full access to the data. Final responsibility for the decision to submit the manuscript for publication rested with the authors.
MM has received lecture fees from Novo Nordisk, Servier, MSD; JS has received honoraria, grants and lecture fees from Novo Nordisk; MB, WMWB and NAK have no conflicts to declare; JS has received lecture fees from Novo Nordisk; MZ is employed by, and holds stock in, Novo Nordisk; TLT is employed by Novo Nordisk; SC is a member of the international advisory board on liraglutide for Novo Nordisk and has received lecture fees from Novo Nordisk.", 'PMCID': 2871176, 'Prompts': {'PromptID': [150, 113, 140, 106, 142, 149, 148, 152, 154, 125, 121, 124, 107, 105, 133, 103, 126, 118, 132, 122, 141, 151, 112, 153, 102, 129, 104, 116, 136, 123, 135, 139, 101, 99, 144, 145, 147, 117, 143, 111, 137, 114, 108, 128, 134, 115, 127, 131, 109, 146, 110, 100, 138, 119, 130], 'PMCID': [2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176, 2871176], 'Outcome': ['Incidence of minor hypoglycaemia', 'Patients reaching HbA1c goals less than 7.0% and equal or less than 6.5%', 'HOMA-IR', 'HbA1c level at 26 weeks', 'Reductions in systolic blood pressure', 'Pulse variations', 'Pulse variations', 'Incidence of minor hypoglycaemia', 'Changes in calcitonin at week 26', 'Postprandial plasma glucose', 'ADA fasting plasma glucose goals between 5.0 mmol/l and less than 7.2 mmol/l', 'Postprandial plasma glucose', 'HbA1c level at 26 weeks', 'HbA1c level at 26 weeks', 'Proinsulin : insulin ratio', 'Postprandial plasma glucose', 'ADA postprandial plasma glucose goals less than 10.0 mmol/l', 'ADA fasting plasma glucose goals between 5.0 mmol/l and less than 7.2 mmol/l', 'Proinsulin : 
insulin ratio', 'ADA fasting plasma glucose goals between 5.0 mmol/l and less than 7.2 mmol/l', 'Reductions in systolic blood pressure', 'Incidence of minor hypoglycaemia', 'Patients reaching HbA1c goals less than 7.0% and equal or less than 6.5%', 'Changes in calcitonin at week 26', 'Fasting plasma glucose at week 26', 'ADA postprandial plasma glucose goals less than 10.0 mmol/l', 'Postprandial plasma glucose', 'Fasting plasma glucose at week 26', 'HOMA-B', 'Postprandial plasma glucose', 'HOMA-B', 'HOMA-IR', 'Fasting plasma glucose at week 26', 'HbA1c level at 26 weeks', 'Reductions in systolic blood pressure', 'Decreases in diastolic blood pressure', 'Pulse variations', 'Fasting plasma glucose at week 26', 'Reductions in systolic blood pressure', 'Patients reaching HbA1c goals less than 7.0% and equal or less than 6.5%', 'HOMA-B', 'Patients reaching HbA1c goals less than 7.0% ', 'HbA1c level at 26 weeks', 'ADA postprandial plasma glucose goals less than 10.0 mmol/l', 'Proinsulin : insulin ratio', 'Fasting plasma glucose at week 26', 'ADA postprandial plasma glucose goals less than 10.0 mmol/l', 'Proinsulin : insulin ratio', 'HbA1c level at 26 weeks', 'Decreases in diastolic blood pressure', 'Patients reaching HbA1c goals less than 7.0% and equal or less than 6.5%', 'HbA1c level at 26 weeks', 'HOMA-B', 'ADA fasting plasma glucose goals between 5.0 mmol/l and less than 7.2 mmol/l', 'Weight gain'], 'Intervention': ['Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (0.6 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride ', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (0.6 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.8 mg) 
plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride ', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (0.6 mg) plus glimepiride', 'Liraglutide (0.6 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride ', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride ', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (0.6 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride ', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride ', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride ', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride ', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride ', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (0.6 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride ', 'Liraglutide (1.8 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride ', 'Rosiglitazone plus glimepiride', 'Liraglutide (all doses) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (1.2 mg) plus glimepiride', 'Liraglutide (1.8 mg) plus glimepiride ', 'Liraglutide (1.2 mg) plus glimepiride', 'Rosiglitazone plus glimepiride'], 'Comparator': ['Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride', 'Placebo plus glimepiride ', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride', 'Placebo plus 
glimepiride', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride ', 'Placebo plus glimepiride', 'Placebo plus glimepiride', 'Placebo plus glimepiride', 'Placebo plus glimepiride ', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride ', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride ', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride ', 'Placebo plus glimepiride', 'Placebo plus glimepiride', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride ', 'Placebo plus glimepiride', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride', 'Rosiglitazone plus glimepiride ', 'Placebo plus glimepiride', 'Placebo plus glimepiride ', 'Liraglutide (1.2 mg) plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride', 'Placebo plus glimepiride ', 'Placebo plus glimepiride', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride ', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride', 'Rosiglitazone plus glimepiride', 'Placebo plus glimepiride ', 'Placebo plus glimepiride', 'Liraglutide plus glimepiride'], 'Annotations': [{'UserID': [0, 3, 2], 'PromptID': [150, 150, 150], 'PMCID': [2871176, 2871176, 2871176], 'Valid Label': [True, True, True], 'Valid Reasoning': [True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. 
glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.', 'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone', 'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.'], 'Label Code': [1, 1, 1], 'In Abstract': [True, True, True], 'Evidence Start': [25524, 25964, 25964], 'Evidence End': [26184, 26073, 26184]}, {'UserID': [0, 1, 3, 2], 'PromptID': [113, 113, 113, 113], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003)', 'he estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
', 'The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), ', 'The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). '], 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True], 'Evidence Start': [16120, 16121, 16120, 16120], 'Evidence End': [16353, 16449, 16355, 16449]}, {'UserID': [0, 1, 3, 2], 'PromptID': [140, 140, 140, 140], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'], 'Annotations': ['There were no significant differences between treatments for HOMA-IR.', 'There were no significant differences between treatments for HOMA-IR.', 'There were no significant differences between treatments for HOMA-IR.', 'There were no significant differences between treatments for HOMA-IR.'], 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True], 'Evidence Start': [20943, 20943, 20943, 20943], 'Evidence End': [21012, 21012, 21012, 21012]}, {'UserID': [0, 1, 3, 2], 'PromptID': [106, 106, 106, 106], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'], 'Annotations': ['All liraglutide doses were superior to placebo (P < 0.0001)', 'Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 
mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). ', 'All liraglutide doses were superior to placebo (P < 0.0001),', 'All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001).'], 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True], 'Evidence Start': [14169, 13955, 14169, 14169], 'Evidence End': [14228, 14314, 14229, 14313]}, {'UserID': [0, 1, 3, 2], 'PromptID': [142, 142, 142, 142], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'], 'Annotations': ['Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg)', 'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). ', 'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg)', 'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). 
'], 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True], 'Evidence Start': [22039, 22039, 22039, 22039], 'Evidence End': [22230, 22232, 22230, 22232]}, {'UserID': [0, 1, 3, 2], 'PromptID': [149, 149, 149, 149], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).', 'Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).', 'Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002)', 'Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).'], 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True], 'Evidence Start': [22554, 22554, 22373, 22554], 'Evidence End': [22738, 22738, 22640, 22738]}, {'UserID': [0, 1, 3, 2], 'PromptID': [148, 148, 148, 148], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). 
This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).', 'Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002)', 'Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).', 'Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).'], 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True], 'Evidence Start': [22554, 22554, 22554, 22373], 'Evidence End': [22738, 22640, 22738, 22738]}, {'UserID': [0, 1, 3, 2], 'PromptID': [152, 152, 152, 152], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. 
```python
# … excerpt begins mid-record; the opening fields of this first record were truncated in the source …
  … 'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.',
    'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.',
    'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048),',
    'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.'],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [25524, 25964, 25964, 25964], 'Evidence End': [26184, 26184, 26131, 26184]},

{'UserID': [0, 1, 3, 2], 'PromptID': [154, 154, 154, 154], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'],
 'Annotations': ['No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.',
    'No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.',
    'No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.',
    'No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.'],
 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True],
 'Evidence Start': [26515, 26515, 26515, 26515], 'Evidence End': [26703, 26703, 26703, 26703]},

{'UserID': [0, 1, 3, 2], 'PromptID': [125, 125, 125, 125], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). ',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). ',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). '],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [19128, 1469, 1469, 1469], 'Evidence End': [19377, 1756, 1756, 1756]},

{'UserID': [0, 3], 'PromptID': [121, 121], 'PMCID': [2871176, 2871176],
 'Valid Label': [True, True], 'Valid Reasoning': [True, True],
 'Label': ['significantly increased', 'significantly increased'],
 'Annotations': ['The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).',
    'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). '],
 'Label Code': [1, 1], 'In Abstract': [True, True],
 'Evidence Start': [18230, 18230], 'Evidence End': [18670, 18476]},

{'UserID': [0, 1, 3, 2], 'PromptID': [124, 124, 124, 124], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001)',
    'reatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.',
    'Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) ',
    'Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.'],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [19128, 19129, 19128, 19128], 'Evidence End': [19251, 19377, 19252, 19377]},

{'UserID': [0, 1, 3, 2], 'PromptID': [107, 107, 107, 107], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Liraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride.',
    'After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). ',
    'Liraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. ',
    'After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). '],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [843, 13756, 843, 13756], 'Evidence End': [1081, 13955, 1082, 14426]},

{'UserID': [0, 1, 3, 2], 'PromptID': [105, 105, 105, 105], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Liraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride.',
    'After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). ',
    'All liraglutide doses were superior to placebo (P < 0.0001),',
    'All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001).'],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [843, 13756, 14169, 14169], 'Evidence End': [1081, 13955, 14229, 14313]},

{'UserID': [0, 1, 3, 2], 'PromptID': [133, 133, 133, 133], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02)',
    'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). ',
    'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02)',
    'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). '],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [20566, 20566, 20566, 20566], 'Evidence End': [20726, 20728, 20726, 20728]},

{'UserID': [0, 1, 3, 2], 'PromptID': [103, 103, 103, 103], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l)',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). ',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) ',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). '],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [1469, 1469, 1469, 1469], 'Evidence End': [1691, 1756, 1692, 1756]},

{'UserID': [0, 1, 3, 2], 'PromptID': [126, 126, 126, 126], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone',
    'The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.',
    'The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05)',
    'The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.'],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [19433, 19433, 19433, 19433], 'Evidence End': [19623, 19624, 19601, 19624]},

{'UserID': [0, 1, 3, 2], 'PromptID': [118, 118, 118, 118], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%).',
    'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). ',
    'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%)',
    'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). '],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [18230, 18230, 18230, 18230], 'Evidence End': [18475, 18476, 18474, 18476]},

{'UserID': [0, 1, 2], 'PromptID': [132, 132, 132], 'PMCID': [2871176, 2871176, 2871176],
 'Valid Label': [True, True, True], 'Valid Reasoning': [True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02)',
    'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). ',
    'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). '],
 'Label Code': [-1, -1, -1], 'In Abstract': [True, True, True],
 'Evidence Start': [20566, 20566, 20566], 'Evidence End': [20726, 20728, 20728]},

{'UserID': [0, 1, 1, 2], 'PromptID': [122, 122, 122, 122], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).',
    'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). ',
    'The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).',
    'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).'],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [18230, 18230, 18476, 18230], 'Evidence End': [18670, 18476, 18670, 18670]},

{'UserID': [0, 1, 3, 2], 'PromptID': [141, 141, 141, 141], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'],
 'Annotations': ['Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg)',
    'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). ',
    'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo ',
    'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). '],
 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True],
 'Evidence Start': [22039, 22039, 22039, 22039], 'Evidence End': [22230, 22232, 22199, 22232]},

{'UserID': [0, 1, 3, 2], 'PromptID': [151, 151, 151, 151], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.',
    'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.',
    'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone',
    'Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values.'],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [25524, 25964, 25964, 25964], 'Evidence End': [26184, 26184, 26073, 26184]},

{'UserID': [0, 1, 3, 2], 'PromptID': [112, 112, 112, 112], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003)',
    'At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). ',
    'The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). ',
    'The percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). '],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [16120, 15956, 16120, 15735], 'Evidence End': [16353, 16449, 16449, 16449]},

{'UserID': [0, 1, 3, 2], 'PromptID': [153, 153, 153, 153], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'],
 'Annotations': ['No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.',
    'No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.',
    'No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.',
    'No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.'],
 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True],
 'Evidence Start': [26515, 26515, 26515, 26515], 'Evidence End': [26703, 26703, 26703, 26703]},

{'UserID': [0, 1, 3, 2], 'PromptID': [102, 102, 102, 102], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).',
    'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).',
    'An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. ',
    'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).'],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [1144, 1144, 17914, 1144], 'Evidence End': [1468, 1468, 18061, 1468]},

{'UserID': [0, 1, 3, 2], 'PromptID': [129, 129, 129, 129], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'],
 'Annotations': ['The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.',
    'The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.',
    'The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.',
    'The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.'],
 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True],
 'Evidence Start': [19433, 19433, 19433, 19433], 'Evidence End': [19624, 19624, 19624, 19624]},

{'UserID': [1, 2], 'PromptID': [104, 104], 'PMCID': [2871176, 2871176],
 'Valid Label': [True, True], 'Valid Reasoning': [True, True],
 'Label': ['significantly decreased', 'significantly decreased'],
 'Annotations': ['Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). ',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). '],
 'Label Code': [-1, -1], 'In Abstract': [True, True],
 'Evidence Start': [1469, 1469], 'Evidence End': [1756, 1756]},

{'UserID': [0, 1, 3, 2], 'PromptID': [116, 116, 116, 116], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001)',
    'By week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg.',
    'At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001),',
    'At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone.'],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [17606, 17497, 17606, 17606], 'Evidence End': [17699, 17913, 17700, 17785]},

{'UserID': [0, 1, 3, 2], 'PromptID': [136, 136, 136, 136], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05)',
    'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051).',
    'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05),',
    'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051).'],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [20728, 20728, 20728, 20728], 'Evidence End': [20816, 20942, 20817, 20942]},

{'UserID': [0, 1, 3, 2], 'PromptID': [123, 123, 123, 123], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l)',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). ',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) ',
    'Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). '],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [1469, 1469, 1469, 1469], 'Evidence End': [1691, 1756, 1692, 1756]},

{'UserID': [0, 1, 3, 2], 'PromptID': [135, 135, 135, 135], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05)',
    'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051).',
    'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05),',
    'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051)'],
 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [20728, 20728, 20728, 20728], 'Evidence End': [20816, 20942, 20817, 20941]},

{'UserID': [0, 1, 3, 2], 'PromptID': [139, 139, 139, 139], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'],
 'Annotations': ['There were no significant differences between treatments for HOMA-IR.',
    'There were no significant differences between treatments for HOMA-IR.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTable 2',
    'There were no significant differences between treatments for HOMA-IR.',
    'There were no significant differences between treatments for HOMA-IR.'],
 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True],
 'Evidence Start': [20943, -1, 20943, 20943], 'Evidence End': [21012, -1, 21012, 21012]},

{'UserID': [0, 1, 3, 2], 'PromptID': [101, 101, 101, 101], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l)',
    'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).',
    'At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001)',
    'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).'],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [1144, 1144, 17606, 1144], 'Evidence End': [1396, 1468, 17699, 1468]},

{'UserID': [0, 1, 3, 2], 'PromptID': [99, 99, 99, 99], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'],
 'Annotations': ['Liraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%)',
    'After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). ',
    'Liraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) ',
    'After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001)'],
 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True],
 'Evidence Start': [843, 13756, 843, 13756], 'Evidence End': [1002, 13955, 1003, 14312]},

{'UserID': [0, 1, 3, 2], 'PromptID': [144, 144, 144, 144], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'],
 'Annotations': ['Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg).',
    'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). ',
    'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). ',
    'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). '],
 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True],
 'Evidence Start': [22039, 22039, 22039, 22039], 'Evidence End': [22231, 22232, 22232, 22232]},

{'UserID': [0, 1, 3, 2], 'PromptID': [145, 145, 145, 145], 'PMCID': [2871176, 2871176, 2871176, 2871176],
 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True],
 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'],
 'Annotations': ['Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments.',
    'Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. ',
    'Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. ',
    'Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. '],
 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True],
 'Evidence Start': [22232, 22232, 22232, 22232], 'Evidence End': [22372, 22373, 22373, 22373]},

{'UserID': [0, 1, 2], 'PromptID': [147, 147, 147], 'PMCID': [2871176, 2871176, 2871176],
 'Valid Label': [True, True, True], 'Valid Reasoning': [True, True, True],
 'Label': ['significantly increased', 'significantly increased', 'significantly increased'],
 'Annotations': ['Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).',
    'Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). ',
    'Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). …
# … excerpt ends mid-record; the remainder of this record was truncated in the source …
```
'], 'Label Code': [1, 1, 1], 'In Abstract': [True, True, True], 'Evidence Start': [22554, 22554, 22554], 'Evidence End': [22738, 22642, 22642]}, {'UserID': [0, 1, 3, 2], 'PromptID': [117, 117, 117, 117], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'], 'Annotations': ['Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).', 'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).', 'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).', 'By week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. 
An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. '], 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True], 'Evidence Start': [1144, 1144, 1144, 17497], 'Evidence End': [1468, 1468, 1468, 18061]}, {'UserID': [0, 1, 3, 2], 'PromptID': [143, 143, 143, 143], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'], 'Annotations': ['Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg).', 'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). ', 'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). ', 'Although decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). 
'], 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True], 'Evidence Start': [22039, 22039, 22039, 22039], 'Evidence End': [22231, 22232, 22232, 22232]}, {'UserID': [0, 1, 3, 2], 'PromptID': [111, 111, 111, 111], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001)', ' The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). FIGURE 4', 'At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo ', 'The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
'], 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True], 'Evidence Start': [16120, 16119, 15956, 16120], 'Evidence End': [16315, 16457, 16110, 16449]}, {'UserID': [0, 1, 3, 2], 'PromptID': [137, 137, 137, 137], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051)', 'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051).', 'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01)', 'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051).'], 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True], 'Evidence Start': [20728, 20728, 20728, 20728], 'Evidence End': [20941, 20942, 20902, 20942]}, {'UserID': [0, 1], 'PromptID': [114, 114], 'PMCID': [2871176, 2871176], 'Valid Label': [True, True], 'Valid Reasoning': [True, True], 'Label': ['significantly increased', 'significantly increased'], 'Annotations': ['The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 
4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018).', 'At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). '], 'Label Code': [1, 1], 'In Abstract': [True, True], 'Evidence Start': [16120, 15956], 'Evidence End': [16447, 16449]}, {'UserID': [0, 1, 3, 2], 'PromptID': [108, 108, 108, 108], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'], 'Annotations': ['Liraglutide 0.6 mg was non-inferior to rosiglitazone', 'All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone.', 'Liraglutide 0.6 mg was non-inferior to rosiglitazone', '. All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). 
Liraglutide 0.6 mg was non-inferior to rosiglitazone.'], 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True], 'Evidence Start': [14314, 14169, 14314, 14167], 'Evidence End': [14366, 14367, 14366, 14367]}, {'UserID': [0], 'PromptID': [128], 'PMCID': [2871176], 'Valid Label': [True], 'Valid Reasoning': [True], 'Label': ['significantly increased'], 'Annotations': ['The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone'], 'Label Code': [1], 'In Abstract': [True], 'Evidence Start': [19433], 'Evidence End': [19623]}, {'UserID': [0, 1, 2], 'PromptID': [134, 134, 134], 'PMCID': [2871176, 2871176, 2871176], 'Valid Label': [True, True, True], 'Valid Reasoning': [True, True, True], 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased'], 'Annotations': ['Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02)', 'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). ', 'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). 
HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), '], 'Label Code': [-1, -1, -1], 'In Abstract': [True, True, True], 'Evidence Start': [20566, 20566, 20566], 'Evidence End': [20726, 20728, 20818]}, {'UserID': [0, 1, 3, 2], 'PromptID': [115, 115, 115, 115], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'], 'Annotations': ['Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l)', 'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).', 'At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 
5; P < 0.0001)', 'Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l).'], 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True], 'Evidence Start': [1144, 1144, 17606, 1144], 'Evidence End': [1396, 1468, 17699, 1468]}, {'UserID': [0, 1, 2], 'PromptID': [127, 127, 127], 'PMCID': [2871176, 2871176, 2871176], 'Valid Label': [True, True, True], 'Valid Reasoning': [True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone', 'he percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.', 'The percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.'], 'Label Code': [1, 1, 1], 'In Abstract': [True, True, True], 'Evidence Start': [19433, 19434, 19433], 'Evidence End': [19623, 19624, 19624]}, {'UserID': [0, 1, 3, 2], 'PromptID': [131, 131, 131, 131], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'], 'Annotations': ['Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02)', 
'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). ', 'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). ', 'Reductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02)'], 'Label Code': [-1, -1, -1, -1], 'In Abstract': [True, True, True, True], 'Evidence Start': [20566, 20566, 20566, 20566], 'Evidence End': [20726, 20728, 20728, 20726]}, {'UserID': [0, 1, 1, 3, 2], 'PromptID': [109, 109, 109, 109, 109], 'PMCID': [2871176, 2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True, True], 'Valid Reasoning': [True, True, True, True, True], 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased', 'significantly decreased'], 'Annotations': ['Rosiglitazone also was superior to placebo (P < 0.0001)', 'Rosiglitazone also was superior to placebo (P < 0.0001).', ' The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. 
', 'Rosiglitazone also was superior to placebo (P < 0.0001).', 'Rosiglitazone also was superior to placebo (P < 0.0001).'], 'Label Code': [-1, -1, -1, -1, -1], 'In Abstract': [True, True, True, True, True], 'Evidence Start': [14368, 14368, 13678, 14368, 14368], 'Evidence End': [14423, 14424, 14368, 14424, 14424]}, {'UserID': [0, 1, 3, 2], 'PromptID': [146, 146, 146, 146], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'], 'Annotations': ['Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments.', 'Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. ', 'Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. ', 'Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. 
'], 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True], 'Evidence Start': [22232, 22232, 22232, 22232], 'Evidence End': [22372, 22373, 22373, 22373]}, {'UserID': [0, 1, 3, 2], 'PromptID': [110, 110, 110, 110], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001)', 'The percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). ', 'The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). ', 'The percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). 
At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). '], 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True], 'Evidence Start': [16120, 15735, 16120, 15735], 'Evidence End': [16315, 16449, 16449, 16449]}, {'UserID': [1, 3, 2], 'PromptID': [100, 100, 100], 'PMCID': [2871176, 2871176, 2871176], 'Valid Label': [True, True, True], 'Valid Reasoning': [True, True, True], 'Label': ['significantly decreased', 'significantly decreased', 'significantly decreased'], 'Annotations': ['After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). ', 'After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) ', 'HbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). 
Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). '], 'Label Code': [-1, -1, -1], 'In Abstract': [True, True, True], 'Evidence Start': [13756, 13756, 13487], 'Evidence End': [13955, 13944, 14314]}, {'UserID': [0, 1, 3, 2], 'PromptID': [138, 138, 138, 138], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': [True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['no significant difference', 'no significant difference', 'no significant difference', 'no significant difference'], 'Annotations': ['HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051)', 'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051).', 'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051)', 'HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051).'], 'Label Code': [0, 0, 0, 0], 'In Abstract': [True, True, True, True], 'Evidence Start': [20728, 20728, 20728, 20728], 'Evidence End': [20941, 20942, 20941, 20942]}, {'UserID': [0, 1, 3, 2], 'PromptID': [119, 119, 119, 119], 'PMCID': [2871176, 2871176, 2871176, 2871176], 'Valid Label': 
[True, True, True, True], 'Valid Reasoning': [True, True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%).', 'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). ', 'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001)', 'The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). 
'], 'Label Code': [1, 1, 1, 1], 'In Abstract': [True, True, True, True], 'Evidence Start': [18230, 18230, 18230, 18230], 'Evidence End': [18475, 18476, 18419, 18476]}, {'UserID': [0, 3, 2], 'PromptID': [130, 130, 130], 'PMCID': [2871176, 2871176, 2871176], 'Valid Label': [True, True, True], 'Valid Reasoning': [True, True, True], 'Label': ['significantly increased', 'significantly increased', 'significantly increased'], 'Annotations': ['Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001)', 'Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg)', 'Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. '], 'Label Code': [1, 1, 1], 'In Abstract': [True, True, True], 'Evidence Start': [19950, 1756, 19950], 'Evidence End': [20145, 1979, 20217]}]}}
```

### Data Fields

- `PMCID` (`int`): ID to identify the articles.
- `Text` (`str`): Article text.
- `Prompts` (`dict`): Prompts and annotations with keys:
- 'PromptID': Which prompt the doctor is answering.
- 'PMCID'
- 'Outcome': Represents the fill-in-the-blank input for the prompt of the form "With respect to outcome, characterize the reported difference between intervention and those receiving comparator".
- 'Intervention': Represents the fill-in-the-blank input for the prompt of the form "With respect to outcome, characterize the reported difference between intervention and those receiving comparator".
- 'Comparator': Represents the fill-in-the-blank input for the prompt of the form "With respect to outcome, characterize the reported difference between intervention and those receiving comparator".
- 'Annotations': The annotation files consist of the following headings: UserID, PromptID, PMCID, Valid Label, Valid Reasoning, Label, Annotations, Label Code, In Abstract, Evidence Start, Evidence End.

### Data Splits

| name | train | validation | test |
|------|------:|-----------:|-----:|
| 1.1  |  1931 |        248 |  240 |
| 2.0  |  2690 |        340 |  334 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{lehman2019inferring,
  title={Inferring Which Medical Treatments Work from Reports of Clinical Trials},
  author={Lehman, Eric and DeYoung, Jay and Barzilay, Regina and Wallace, Byron C},
  booktitle={Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL)},
  pages={3705--3717},
  year={2019}
}

@misc{deyoung2020evidence,
  title={Evidence Inference 2.0: More Data, Better Models},
  author={Jay DeYoung and Eric Lehman and Ben Nye and Iain J. Marshall and Byron C. Wallace},
  year={2020},
  eprint={2005.04177},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
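Because each prompt's annotation fields are stored as parallel lists (one slot per annotator, as in the instance shown earlier), aggregating annotator labels takes only a few lines. The following is a minimal sketch with a hypothetical record shaped like the fields described in this card; the values are illustrative, not taken from the corpus.

```python
from collections import Counter

# Hypothetical prompt record mirroring the parallel-list layout documented
# above: each annotation field holds one entry per annotator (UserID).
prompt = {
    "PromptID": 101,
    "PMCID": 2871176,
    "Annotations": {
        "UserID": [0, 1, 3, 2],
        "Valid Label": [True, True, True, True],
        "Label": [
            "significantly decreased",
            "significantly decreased",
            "significantly decreased",
            "significantly decreased",
        ],
    },
}

def majority_label(prompt):
    """Collapse per-annotator labels into one label by majority vote,
    counting only annotations marked as valid."""
    anns = prompt["Annotations"]
    valid = [
        label
        for label, ok in zip(anns["Label"], anns["Valid Label"])
        if ok
    ]
    return Counter(valid).most_common(1)[0][0]

print(majority_label(prompt))  # → significantly decreased
```

The same zip-over-parallel-lists pattern applies to any of the annotation fields (e.g. pairing `Label` with `Evidence Start`/`Evidence End`).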
evidence_infer_treatment
[ "task_categories:text-retrieval", "task_ids:fact-checking-retrieval", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "arxiv:2005.04177", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["fact-checking-retrieval"], "pretty_name": "Evidence Infer Treatment", "dataset_info": [{"config_name": "2.0", "features": [{"name": "Text", "dtype": "string"}, {"name": "PMCID", "dtype": "int32"}, {"name": "Prompts", "sequence": [{"name": "PromptID", "dtype": "int32"}, {"name": "PMCID", "dtype": "int32"}, {"name": "Outcome", "dtype": "string"}, {"name": "Intervention", "dtype": "string"}, {"name": "Comparator", "dtype": "string"}, {"name": "Annotations", "sequence": [{"name": "UserID", "dtype": "int32"}, {"name": "PromptID", "dtype": "int32"}, {"name": "PMCID", "dtype": "int32"}, {"name": "Valid Label", "dtype": "bool"}, {"name": "Valid Reasoning", "dtype": "bool"}, {"name": "Label", "dtype": "string"}, {"name": "Annotations", "dtype": "string"}, {"name": "Label Code", "dtype": "int32"}, {"name": "In Abstract", "dtype": "bool"}, {"name": "Evidence Start", "dtype": "int32"}, {"name": "Evidence End", "dtype": "int32"}]}]}], "splits": [{"name": "train", "num_bytes": 77045294, "num_examples": 2690}, {"name": "test", "num_bytes": 9436674, "num_examples": 334}, {"name": "validation", "num_bytes": 10113982, "num_examples": 340}], "download_size": 163515689, "dataset_size": 96595950}, {"config_name": "1.1", "features": [{"name": "Text", "dtype": "string"}, {"name": "PMCID", "dtype": "int32"}, {"name": "Prompts", "sequence": [{"name": "PromptID", "dtype": "int32"}, {"name": "PMCID", "dtype": "int32"}, {"name": "Outcome", "dtype": "string"}, {"name": "Intervention", "dtype": "string"}, {"name": "Comparator", "dtype": "string"}, {"name": "Annotations", "sequence": [{"name": "UserID", "dtype": "int32"}, {"name": "PromptID", "dtype": "int32"}, {"name": "PMCID", "dtype": "int32"}, {"name": "Valid 
Label", "dtype": "bool"}, {"name": "Valid Reasoning", "dtype": "bool"}, {"name": "Label", "dtype": "string"}, {"name": "Annotations", "dtype": "string"}, {"name": "Label Code", "dtype": "int32"}, {"name": "In Abstract", "dtype": "bool"}, {"name": "Evidence Start", "dtype": "int32"}, {"name": "Evidence End", "dtype": "int32"}]}]}], "splits": [{"name": "train", "num_bytes": 55375971, "num_examples": 1931}, {"name": "test", "num_bytes": 6877338, "num_examples": 240}, {"name": "validation", "num_bytes": 7359847, "num_examples": 248}], "download_size": 114452688, "dataset_size": 69613156}]}
2024-01-18T11:03:29+00:00
[ "2005.04177" ]
[ "en" ]
TAGS #task_categories-text-retrieval #task_ids-fact-checking-retrieval #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #arxiv-2005.04177 #region-us
Dataset Card for Evidence Infer =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: Evidence Inference 2.0: More Data, Better Models * Leaderboard: URL * Point of Contact: ### Dataset Summary Data and code from our NAACL 2019 paper, "Inferring Which Medical Treatments Work from Reports of Clinical Trials". This work concerns inferring the results reported in clinical trials from text. The dataset consists of biomedical articles describing randomized controlled trials (RCTs) that compare multiple treatments. Each of these articles has multiple questions, or 'prompts', associated with it. These prompts ask about the relationship between an intervention and a comparator with respect to an outcome, as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared to placebo on the duration of headaches. For the sake of this task, we assume that a particular article will report that the intervention of interest either significantly increased, significantly decreased, or had no significant effect on the outcome, relative to the comparator. The dataset could be used for automatic extraction of the results of a given RCT, which would enable readers to discover the effectiveness of different treatments without needing to read the paper. We have recently collected additional data for this task (URL), which we will present at BioNLP 2020.
### Supported Tasks and Leaderboards ### Languages * English ('en'). Dataset Structure ----------------- ### Data Instances ### Data Fields * 'PMCID' ('int'): ID to identify the articles. * 'Text' ('str'): Article text. * 'Prompts' ('dict'): Prompts and annotations with keys: + 'PromptID': Which prompt the doctor is answering. + 'PMCID' + 'Outcome': Represent the fill-in-the-blank input for the following prompt formed "With respect to outcome, characterize the reported difference between intervention and those receiving comparator". + 'Intervention': Represent the fill-in-the-blank input for the following prompt formed "With respect to outcome, characterize the reported difference between intervention and those receiving comparator". + 'Comparator': Represent the fill-in-the-blank input for the following prompt formed "With respect to outcome, characterize the reported difference between intervention and those receiving comparator". + 'Annotations': The annotation files consist of the following headings: UserID, PromptID, PMCID, Valid Label, Valid Reasoning, Label, Annotations, Label Code, In Abstract, Start Evidence, End Evidence. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @Narsil for adding this dataset.
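To make the annotation schema above concrete, the sketch below builds a toy instance (values invented for illustration; field names follow the features listed in this card) and uses an annotation's 'Evidence Start' / 'Evidence End' character offsets to recover the evidence span from the article 'Text':

```python
def evidence_span(text, annotation):
    """Slice the article text with the annotation's character offsets."""
    start = annotation["Evidence Start"]
    end = annotation["Evidence End"]
    return text[start:end]

# Hypothetical toy instance; real articles are full-text papers and the
# offsets are produced by the annotators, not hand-picked as here.
instance = {
    "PMCID": 123456,
    "Text": "Background ... Aspirin significantly reduced headache duration versus placebo ...",
    "Prompts": {
        "Outcome": ["headache duration"],
        "Intervention": ["aspirin"],
        "Comparator": ["placebo"],
        "Annotations": [
            {
                "Label": "significantly decreased",
                "Evidence Start": 15,
                "Evidence End": 77,
            }
        ],
    },
}

ann = instance["Prompts"]["Annotations"][0]
print(evidence_span(instance["Text"], ann))
# -> Aspirin significantly reduced headache duration versus placebo
```

The same slicing works on instances loaded with the `datasets` library, since the offsets index into the 'Text' field directly.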
[ "### Dataset Summary\n\n\nData and code from our \"Inferring Which Medical Treatments Work from Reports of Clinical Trials\", NAACL 2019. This work concerns inferring the results reported in clinical trials from text.\n\n\nThe dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple treatments. Each of these articles will have multiple questions, or 'prompts' associated with them. These prompts will ask about the relationship between an intervention and comparator with respect to an outcome, as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared to placebo on the duration of headaches. For the sake of this task, we assume that a particular article will report that the intervention of interest either significantly increased, significantly decreased or had significant effect on the outcome, relative to the comparator.\n\n\nThe dataset could be used for automatic data extraction of the results of a given RCT. 
This would enable readers to discover the effectiveness of different treatments without needing to read the paper.\n\n\nWe have recently collected additional data for this task (URL which we will present at BioNLP 2020.", "### Supported Tasks and Leaderboards", "### Languages\n\n\n* English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'PMCID' ('int'): ID to identify the articles.\n* 'Text' ('str'): Article text.\n* 'Prompts' ('dict'): Prompts and annotations with keys:\n\t+ 'PromptID': Which prompt the doctor is answering.\n\t+ 'PMCID'\n\t+ 'Outcome': Represent the fill-in-the-blank input for the following prompt formed \"With respect to outcome, characterize the reported difference between intervention and those receiving comparator\".\n\t+ 'Intervention': Represent the fill-in-the-blank input for the following prompt formed \"With respect to outcome, characterize the reported difference between intervention and those receiving comparator\".\n\t+ 'Comparator': Represent the fill-in-the-blank input for the following prompt formed \"With respect to outcome, characterize the reported difference between intervention and those receiving comparator\".\n\t+ 'Annotations': The annotation files consist of the following headings: UserID, PromptID, PMCID, Valid Label, Valid Reasoning, Label, Annotations, Label Code, In Abstract, Start Evidence, End Evidence.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### 
Licensing Information", "### Contributions\n\n\nThanks to @Narsil for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-fact-checking-retrieval #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #arxiv-2005.04177 #region-us \n", "### Dataset Summary\n\n\nData and code from our \"Inferring Which Medical Treatments Work from Reports of Clinical Trials\", NAACL 2019. This work concerns inferring the results reported in clinical trials from text.\n\n\nThe dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple treatments. Each of these articles will have multiple questions, or 'prompts' associated with them. These prompts will ask about the relationship between an intervention and comparator with respect to an outcome, as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared to placebo on the duration of headaches. For the sake of this task, we assume that a particular article will report that the intervention of interest either significantly increased, significantly decreased or had significant effect on the outcome, relative to the comparator.\n\n\nThe dataset could be used for automatic data extraction of the results of a given RCT. 
This would enable readers to discover the effectiveness of different treatments without needing to read the paper.\n\n\nWe have recently collected additional data for this task (URL which we will present at BioNLP 2020.", "### Supported Tasks and Leaderboards", "### Languages\n\n\n* English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'PMCID' ('int'): ID to identify the articles.\n* 'Text' ('str'): Article text.\n* 'Prompts' ('dict'): Prompts and annotations with keys:\n\t+ 'PromptID': Which prompt the doctor is answering.\n\t+ 'PMCID'\n\t+ 'Outcome': Represent the fill-in-the-blank input for the following prompt formed \"With respect to outcome, characterize the reported difference between intervention and those receiving comparator\".\n\t+ 'Intervention': Represent the fill-in-the-blank input for the following prompt formed \"With respect to outcome, characterize the reported difference between intervention and those receiving comparator\".\n\t+ 'Comparator': Represent the fill-in-the-blank input for the following prompt formed \"With respect to outcome, characterize the reported difference between intervention and those receiving comparator\".\n\t+ 'Annotations': The annotation files consist of the following headings: UserID, PromptID, PMCID, Valid Label, Valid Reasoning, Label, Annotations, Label Code, In Abstract, Start Evidence, End Evidence.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### 
Licensing Information", "### Contributions\n\n\nThanks to @Narsil for adding this dataset." ]
4ff10804abb3341f8815cacd778181177bba7edd
# Dataset Card for EXAMS ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/mhardalov/exams-qa - **Paper:** [EXAMS: A Multi-Subject High School Examinations Dataset for Cross-Lingual and Multilingual Question Answering](https://arxiv.org/abs/2011.03080) - **Point of Contact:** [hardalov@fmi.uni-sofia.bg](mailto:hardalov@fmi.uni-sofia.bg) ### Dataset Summary EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.
### Supported Tasks and Leaderboards [More Information Needed] ### Languages The languages in the dataset are: - ar - bg - de - es - fr - hr - hu - it - lt - mk - pl - pt - sq - sr - tr - vi ## Dataset Structure ### Data Instances An example of a data instance (with support paragraphs, in Bulgarian) is: ``` {'answerKey': 'C', 'id': '35dd6b52-7e71-11ea-9eb1-54bef70b159e', 'info': {'grade': 12, 'language': 'Bulgarian', 'subject': 'Biology'}, 'question': {'choices': {'label': ['A', 'B', 'C', 'D'], 'para': ['Това води до наследствени изменения между организмите. Мирновременните вождове са наследствени. Черният, сивият и кафявият цвят на оцветяване на тялото се определя от пигмента меланин и възниква в резултат на наследствени изменения. Тези различия, според Монтескьо, не са наследствени. Те са и важни наследствени вещи в клана. Те са били наследствени архонти и управляват демократично. Реликвите са исторически, религиозни, семейни (наследствени) и технически. Общо са направени 800 изменения. Не всички наследствени аномалии на хемоглобина са вредни, т.е. Моногенните наследствени болести, които водят до мигрена, са редки. Няма наследствени владетели. Повечето от тях са наследствени и се предават на потомството. Всичките синове са ерцхерцози на всичките наследствени земи и претенденти. През 1509 г. Фраунбергите са издигнати на наследствени имперски графове. Фамилията Валдбург заради постиженията са номинирани на „наследствени имперски трушсеси“. Фамилията Валдбург заради постиженията са номинирани на „наследствени имперски трушсеси“. Описани са единични наследствени случаи, но по-често липсва фамилна обремененост. Позициите им са наследствени и се предават в рамките на клана. Внесени са изменения в конструкцията на веригите. и са направени изменения в ходовата част. На храма са правени лоши архитектурни изменения. Изменения са предприети и вътре в двореца. Имало двама наследствени вождове. Имало двама наследствени вождове. 
Годишният календар, „компасът“ и биологичния часовник са наследствени и при много бозайници.', 'Постепенно задълбочаващите се функционални изменения довеждат и до структурни изменения. Те се дължат както на растягането на кожата, така и на въздействието на хормоналните изменения върху кожната тъкан. тези изменения се долавят по-ясно. Впоследствие, той претърпява изменения. Ширината остава без изменения. След тяхното издаване се налагат изменения в първоначалния Кодекс, защото не е съобразен с направените в Дигестите изменения. Еволюционният преход се характеризира със следните изменения: Наблюдават се и сезонни изменения в теглото. Приемат се изменения и допълнения към Устава. Тук се размножават и предизвикват възпалителни изменения. Общо са направени 800 изменения. Бронирането не претърпява съществени изменения. При животните се откриват изменения при злокачествената форма. Срещат се и дегенеративни изменения в семенните каналчета. ТАВКР „Баку“ се строи по изменения проект 1143.4. Трансът се съпровожда с определени изменения на мозъчната дейност. На изменения е подложен и Светия Синод. Внесени са изменения в конструкцията на веригите. На храма са правени лоши архитектурни изменения. Оттогава стиховете претърпяват изменения няколко пъти. Настъпват съществени изменения в музикалната култура. По-късно той претърпява леки изменения. Настъпват съществени изменения в музикалната култура. Претърпява сериозни изменения само носовата надстройка. Хоризонталното брониране е оставено без изменения.', 'Модификациите са обратими. Тези реакции са обратими. В началните стадии тези натрупвания са обратими. Всички такива ефекти са временни и обратими. Много от реакциите са обратими и идентични с тези при гликолизата. Ако в обращение има книжни пари, те са обратими в злато при поискване . Общо са направени 800 изменения. Непоследователността е представена от принципа на "симетрия", при който взаимоотношенията са разглеждани като симетрични или обратими. 
Откакто формулите в клетките на електронната таблица не са обратими, тази техника е с ограничена стойност. Ефектът на Пелтие-Зеебек и ефектът Томсън са обратими (ефектът на Пелтие е обратен на ефекта на Зеебек). Плазмолизата протича в три етапа, в зависимост от силата и продължителността на въздействието:\n\nПървите два етапа са обратими. Внесени са изменения в конструкцията на веригите. и са направени изменения в ходовата част. На храма са правени лоши архитектурни изменения. Изменения са предприети и вътре в двореца. Оттогава насетне екипите не са претърпявали съществени изменения. Изменения са направени и в колесника на машината. Тези изменения са обявени през октомври 1878 година. Последните изменения са внесени през януари 2009 година. В процеса на последващото проектиране са внесени някои изменения. Сериозните изменения са в края на Втората световна война. Внесени са изменения в конструкцията на погребите и подемниците. Внесени са изменения в конструкцията на погребите и подемниците. Внесени са изменения в конструкцията на погребите и подемниците. Постепенно задълбочаващите се функционални изменения довеждат и до структурни изменения.', 'Ерозионни процеси от масов характер липсват. Обновлението в редиците на партията приема масов характер. Тя обаче няма масов характер поради спецификата на формата. Движението против десятъка придобива масов характер и в Балчишка околия. Понякога екзекутирането на „обсебените от Сатана“ взимало невероятно масов характер. Укриването на дължими като наряд продукти в селата придобива масов характер. Периодичните миграции са в повечето случаи с масов характер и са свързани със сезонните изменения в природата, а непериодичните са премествания на животни, които настъпват след пожари, замърсяване на средата, висока численост и др. Имат необратим характер. Именно по време на двувековните походи на западните рицари използването на гербовете придобива масов характер. 
След присъединяването на Южен Кавказ към Русия, изселването на азербайджанци от Грузия придобива масов характер. Те имат нормативен характер. Те имат установителен характер. Освобождаването на работна сила обикновено има масов характер, защото обхваща големи контингенти от носителите на труд. Валежите имат подчертано континентален характер. Имат най-често издънков характер. Приливите имат предимно полуденонощен характер. Някои от тях имат мистериален характер. Тези сведения имат случаен, епизодичен характер. Те имат сезонен или годишен характер. Временните обезпечителни мерки имат временен характер. Други имат пожелателен характер (Здравко, Слава). Ловът и събирачеството имат спомагателен характер. Фактически успяват само малко да усилят бронирането на артилерийските погреби, другите изменения носят само частен характер. Някои карикатури имат само развлекателен характер, докато други имат политически нюанси. Поемите на Хезиод имат по-приложен характер.'], 'text': ['дължат се на фенотипни изменения', 'имат масов характер', 'са наследствени', 'са обратими']}, 'stem': 'Мутационите изменения:'}} ``` ### Data Fields A data instance contains the following fields: - `id`: A question ID, unique across the dataset - `question`: the question contains the following: - `stem`: a stemmed representation of the question textual - `choices`: a set of 3 to 5 candidate answers, which each have: - `text`: the text of the answers - `label`: a label in `['A', 'B', 'C', 'D', 'E']` used to match to the `answerKey` - `para`: (optional) a supported paragraph from Wikipedia in the same language as the question and answer - `answerKey`: the key corresponding to the right answer's `label` - `info`: some additional information on the question including: - `grade`: the school grade for the exam this question was taken from - `subject`: a free text description of the academic subject - `language`: the English name of the language for this question ### Data Splits Depending on the configuration, 
the dataset has different splits: - "alignments": a single "full" split - "multilingual" and "multilingual_with_para": "train", "validation" and "test" splits - "crosslingual_test" and "crosslingual_with_para_test": a single "test" split - the rest of the crosslingual configurations: "train" and "validation" splits ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Eχαµs was collected from official state exams prepared by the ministries of education of various countries. These exams are taken by students graduating from high school, and often require knowledge learned through the entire course. The questions cover a large variety of subjects and material based on the country’s education system. They cover major school subjects such as Biology, Chemistry, Geography, History, and Physics, but also highly specialized ones such as Agriculture, Geology, and Informatics, as well as some applied and profiled studies. Some countries allow students to take official examinations in several languages. This dataset provides 9,857 parallel question pairs spread across seven languages coming from Croatia (Croatian, Serbian, Italian, Hungarian), Hungary (Hungarian, German, French, Spanish, Croatian, Serbian, Italian), and North Macedonia (Macedonian, Albanian, Turkish). For all languages in the dataset, the first step in the process of data collection was to download the PDF files per year, per subject, and per language (when parallel languages were available in the same source), convert the PDF files to text, and select those that were well formatted and followed the document structure. Then, Regular Expressions (RegEx) were used to parse the questions, their corresponding choices, and the correct answer choice.
In order to ensure that all our questions are answerable using textual input only, questions that contained visual information were removed, as identified using a curated list of words such as map, table, picture, graph, etc., in the corresponding language. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset, which contains paragraphs from Wikipedia, is licensed under CC-BY-SA 4.0. The code in this repository is licensed according to the [LICENSE file](https://raw.githubusercontent.com/mhardalov/exams-qa/main/LICENSE). ### Citation Information ``` @inproceedings{hardalov-etal-2020-exams, title = "{EXAMS}: A Multi-subject High School Examinations Dataset for Cross-lingual and Multilingual Question Answering", author = "Hardalov, Momchil and Mihaylov, Todor and Zlatkova, Dimitrina and Dinkov, Yoan and Koychev, Ivan and Nakov, Preslav", editor = "Webber, Bonnie and Cohn, Trevor and He, Yulan and Liu, Yang", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.438", doi = "10.18653/v1/2020.emnlp-main.438", pages = "5427--5444", } ``` ### Contributions Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
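To make the field layout concrete, here is a small sketch (plain Python, no downloads needed; the instance reuses the answer choices from the Bulgarian example in this card, with the support paragraphs omitted) showing how `answerKey` resolves to the correct answer text via the parallel `label` / `text` sequences in `question.choices`:

```python
def correct_answer(instance):
    """Return the text of the choice whose label matches answerKey."""
    choices = instance["question"]["choices"]
    idx = choices["label"].index(instance["answerKey"])
    return choices["text"][idx]

# Trimmed version of the Bulgarian example instance shown above
# (the 'para' support paragraphs are left out for brevity).
example = {
    "answerKey": "C",
    "id": "35dd6b52-7e71-11ea-9eb1-54bef70b159e",
    "info": {"grade": 12, "language": "Bulgarian", "subject": "Biology"},
    "question": {
        "stem": "Мутационите изменения:",
        "choices": {
            "label": ["A", "B", "C", "D"],
            "text": [
                "дължат се на фенотипни изменения",
                "имат масов характер",
                "са наследствени",
                "са обратими",
            ],
        },
    },
}

print(correct_answer(example))  # -> са наследствени
```

The same lookup applies unchanged to instances loaded from any of the configurations, since they all share the `question.choices` / `answerKey` schema.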
exams
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:ar", "language:bg", "language:de", "language:es", "language:fr", "language:hr", "language:hu", "language:it", "language:lt", "language:mk", "language:pl", "language:pt", "language:sq", "language:sr", "language:tr", "language:vi", "license:cc-by-sa-4.0", "arxiv:2011.03080", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar", "bg", "de", "es", "fr", "hr", "hu", "it", "lt", "mk", "pl", "pt", "sq", "sr", "tr", "vi"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual", "multilingual"], "size_categories": ["10K<n<100K", "1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "exams", "pretty_name": "EXAMS", "config_names": ["alignments", "crosslingual_bg", "crosslingual_hr", "crosslingual_hu", "crosslingual_it", "crosslingual_mk", "crosslingual_pl", "crosslingual_pt", "crosslingual_sq", "crosslingual_sr", "crosslingual_test", "crosslingual_tr", "crosslingual_vi", "crosslingual_with_para_bg", "crosslingual_with_para_hr", "crosslingual_with_para_hu", "crosslingual_with_para_it", "crosslingual_with_para_mk", "crosslingual_with_para_pl", "crosslingual_with_para_pt", "crosslingual_with_para_sq", "crosslingual_with_para_sr", "crosslingual_with_para_test", "crosslingual_with_para_tr", "crosslingual_with_para_vi", "multilingual", "multilingual_with_para"], "dataset_info": [{"config_name": "alignments", "features": [{"name": "source_id", "dtype": "string"}, {"name": "target_id_list", "sequence": "string"}], "splits": [{"name": "full", "num_bytes": 1265256, "num_examples": 10834}], "download_size": 184096, "dataset_size": 1265256}, {"config_name": "crosslingual_bg", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1077329, "num_examples": 2344}, {"name": 
"validation", "num_bytes": 281771, "num_examples": 593}], "download_size": 514922, "dataset_size": 1359100}, {"config_name": "crosslingual_hr", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 807104, "num_examples": 2341}, {"name": "validation", "num_bytes": 176594, "num_examples": 538}], "download_size": 450090, "dataset_size": 983698}, {"config_name": "crosslingual_hu", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 677535, "num_examples": 1731}, {"name": "validation", "num_bytes": 202012, "num_examples": 536}], "download_size": 401455, "dataset_size": 879547}, {"config_name": "crosslingual_it", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", 
"num_bytes": 399312, "num_examples": 1010}, {"name": "validation", "num_bytes": 93175, "num_examples": 246}], "download_size": 226376, "dataset_size": 492487}, {"config_name": "crosslingual_mk", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 825702, "num_examples": 1665}, {"name": "validation", "num_bytes": 204318, "num_examples": 410}], "download_size": 394548, "dataset_size": 1030020}, {"config_name": "crosslingual_pl", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 573410, "num_examples": 1577}, {"name": "validation", "num_bytes": 141633, "num_examples": 394}], "download_size": 341925, "dataset_size": 715043}, {"config_name": "crosslingual_pt", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", 
"dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 374798, "num_examples": 740}, {"name": "validation", "num_bytes": 87714, "num_examples": 184}], "download_size": 208021, "dataset_size": 462512}, {"config_name": "crosslingual_sq", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 423744, "num_examples": 1194}, {"name": "validation", "num_bytes": 110093, "num_examples": 311}], "download_size": 247052, "dataset_size": 533837}, {"config_name": "crosslingual_sr", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 649560, "num_examples": 1323}, {"name": "validation", "num_bytes": 145724, "num_examples": 314}], "download_size": 327466, "dataset_size": 795284}, {"config_name": "crosslingual_test", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": 
"subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 8402575, "num_examples": 19736}], "download_size": 3438526, "dataset_size": 8402575}, {"config_name": "crosslingual_tr", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 717599, "num_examples": 1571}, {"name": "validation", "num_bytes": 182730, "num_examples": 393}], "download_size": 440914, "dataset_size": 900329}, {"config_name": "crosslingual_vi", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 953167, "num_examples": 1955}, {"name": "validation", "num_bytes": 231976, "num_examples": 488}], "download_size": 462940, "dataset_size": 1185143}, {"config_name": "crosslingual_with_para_bg", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": 
"subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 47066808, "num_examples": 2344}, {"name": "validation", "num_bytes": 11916026, "num_examples": 593}], "download_size": 15794611, "dataset_size": 58982834}, {"config_name": "crosslingual_with_para_hr", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24889604, "num_examples": 2341}, {"name": "validation", "num_bytes": 5695066, "num_examples": 538}], "download_size": 9839452, "dataset_size": 30584670}, {"config_name": "crosslingual_with_para_hu", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 19035663, "num_examples": 1731}, {"name": "validation", "num_bytes": 6043265, "num_examples": 536}], "download_size": 9263625, "dataset_size": 25078928}, {"config_name": "crosslingual_with_para_it", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": 
"answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 16409235, "num_examples": 1010}, {"name": "validation", "num_bytes": 4018329, "num_examples": 246}], "download_size": 6907617, "dataset_size": 20427564}, {"config_name": "crosslingual_with_para_mk", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 38445894, "num_examples": 1665}, {"name": "validation", "num_bytes": 9673574, "num_examples": 410}], "download_size": 12878474, "dataset_size": 48119468}, {"config_name": "crosslingual_with_para_pl", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 16373781, "num_examples": 1577}, {"name": "validation", "num_bytes": 4158832, "num_examples": 394}], "download_size": 6539172, "dataset_size": 20532613}, {"config_name": "crosslingual_with_para_pt", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": 
"string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 12185383, "num_examples": 740}, {"name": "validation", "num_bytes": 3093712, "num_examples": 184}], "download_size": 4956969, "dataset_size": 15279095}, {"config_name": "crosslingual_with_para_sq", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 17341277, "num_examples": 1194}, {"name": "validation", "num_bytes": 4449952, "num_examples": 311}], "download_size": 7112236, "dataset_size": 21791229}, {"config_name": "crosslingual_with_para_sr", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24575845, "num_examples": 1323}, {"name": "validation", "num_bytes": 5772509, "num_examples": 314}], "download_size": 8035415, "dataset_size": 30348354}, {"config_name": "crosslingual_with_para_test", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": 
[{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 207974374, "num_examples": 13510}], "download_size": 62878029, "dataset_size": 207974374}, {"config_name": "crosslingual_with_para_tr", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 18597131, "num_examples": 1571}, {"name": "validation", "num_bytes": 4763097, "num_examples": 393}], "download_size": 7346658, "dataset_size": 23360228}, {"config_name": "crosslingual_with_para_vi", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 40882999, "num_examples": 1955}, {"name": "validation", "num_bytes": 10260374, "num_examples": 488}], "download_size": 13028078, "dataset_size": 51143373}, {"config_name": "multilingual", "features": [{"name": "id", "dtype": "string"}, {"name": 
"question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3381837, "num_examples": 7961}, {"name": "validation", "num_bytes": 1141687, "num_examples": 2672}, {"name": "test", "num_bytes": 5746781, "num_examples": 13510}], "download_size": 4323915, "dataset_size": 10270305}, {"config_name": "multilingual_with_para", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "struct": [{"name": "stem", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "para", "dtype": "string"}]}]}, {"name": "answerKey", "dtype": "string"}, {"name": "info", "struct": [{"name": "grade", "dtype": "int32"}, {"name": "subject", "dtype": "string"}, {"name": "language", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 127294567, "num_examples": 7961}, {"name": "validation", "num_bytes": 42711689, "num_examples": 2672}, {"name": "test", "num_bytes": 207974374, "num_examples": 13510}], "download_size": 112597818, "dataset_size": 377980630}], "configs": [{"config_name": "alignments", "data_files": [{"split": "full", "path": "alignments/full-*"}]}, {"config_name": "crosslingual_bg", "data_files": [{"split": "train", "path": "crosslingual_bg/train-*"}, {"split": "validation", "path": "crosslingual_bg/validation-*"}]}, {"config_name": "crosslingual_hr", "data_files": [{"split": "train", "path": "crosslingual_hr/train-*"}, {"split": "validation", "path": "crosslingual_hr/validation-*"}]}, {"config_name": "crosslingual_hu", "data_files": [{"split": "train", "path": "crosslingual_hu/train-*"}, {"split": 
"validation", "path": "crosslingual_hu/validation-*"}]}, {"config_name": "crosslingual_it", "data_files": [{"split": "train", "path": "crosslingual_it/train-*"}, {"split": "validation", "path": "crosslingual_it/validation-*"}]}, {"config_name": "crosslingual_mk", "data_files": [{"split": "train", "path": "crosslingual_mk/train-*"}, {"split": "validation", "path": "crosslingual_mk/validation-*"}]}, {"config_name": "crosslingual_pl", "data_files": [{"split": "train", "path": "crosslingual_pl/train-*"}, {"split": "validation", "path": "crosslingual_pl/validation-*"}]}, {"config_name": "crosslingual_pt", "data_files": [{"split": "train", "path": "crosslingual_pt/train-*"}, {"split": "validation", "path": "crosslingual_pt/validation-*"}]}, {"config_name": "crosslingual_sq", "data_files": [{"split": "train", "path": "crosslingual_sq/train-*"}, {"split": "validation", "path": "crosslingual_sq/validation-*"}]}, {"config_name": "crosslingual_sr", "data_files": [{"split": "train", "path": "crosslingual_sr/train-*"}, {"split": "validation", "path": "crosslingual_sr/validation-*"}]}, {"config_name": "crosslingual_test", "data_files": [{"split": "test", "path": "crosslingual_test/test-*"}]}, {"config_name": "crosslingual_tr", "data_files": [{"split": "train", "path": "crosslingual_tr/train-*"}, {"split": "validation", "path": "crosslingual_tr/validation-*"}]}, {"config_name": "crosslingual_vi", "data_files": [{"split": "train", "path": "crosslingual_vi/train-*"}, {"split": "validation", "path": "crosslingual_vi/validation-*"}]}, {"config_name": "crosslingual_with_para_bg", "data_files": [{"split": "train", "path": "crosslingual_with_para_bg/train-*"}, {"split": "validation", "path": "crosslingual_with_para_bg/validation-*"}]}, {"config_name": "crosslingual_with_para_hr", "data_files": [{"split": "train", "path": "crosslingual_with_para_hr/train-*"}, {"split": "validation", "path": "crosslingual_with_para_hr/validation-*"}]}, {"config_name": "crosslingual_with_para_hu", 
"data_files": [{"split": "train", "path": "crosslingual_with_para_hu/train-*"}, {"split": "validation", "path": "crosslingual_with_para_hu/validation-*"}]}, {"config_name": "crosslingual_with_para_it", "data_files": [{"split": "train", "path": "crosslingual_with_para_it/train-*"}, {"split": "validation", "path": "crosslingual_with_para_it/validation-*"}]}, {"config_name": "crosslingual_with_para_mk", "data_files": [{"split": "train", "path": "crosslingual_with_para_mk/train-*"}, {"split": "validation", "path": "crosslingual_with_para_mk/validation-*"}]}, {"config_name": "crosslingual_with_para_pl", "data_files": [{"split": "train", "path": "crosslingual_with_para_pl/train-*"}, {"split": "validation", "path": "crosslingual_with_para_pl/validation-*"}]}, {"config_name": "crosslingual_with_para_pt", "data_files": [{"split": "train", "path": "crosslingual_with_para_pt/train-*"}, {"split": "validation", "path": "crosslingual_with_para_pt/validation-*"}]}, {"config_name": "crosslingual_with_para_sq", "data_files": [{"split": "train", "path": "crosslingual_with_para_sq/train-*"}, {"split": "validation", "path": "crosslingual_with_para_sq/validation-*"}]}, {"config_name": "crosslingual_with_para_sr", "data_files": [{"split": "train", "path": "crosslingual_with_para_sr/train-*"}, {"split": "validation", "path": "crosslingual_with_para_sr/validation-*"}]}, {"config_name": "crosslingual_with_para_test", "data_files": [{"split": "test", "path": "crosslingual_with_para_test/test-*"}]}, {"config_name": "crosslingual_with_para_tr", "data_files": [{"split": "train", "path": "crosslingual_with_para_tr/train-*"}, {"split": "validation", "path": "crosslingual_with_para_tr/validation-*"}]}, {"config_name": "crosslingual_with_para_vi", "data_files": [{"split": "train", "path": "crosslingual_with_para_vi/train-*"}, {"split": "validation", "path": "crosslingual_with_para_vi/validation-*"}]}, {"config_name": "multilingual", "data_files": [{"split": "train", "path": 
"multilingual/train-*"}, {"split": "validation", "path": "multilingual/validation-*"}, {"split": "test", "path": "multilingual/test-*"}]}, {"config_name": "multilingual_with_para", "data_files": [{"split": "train", "path": "multilingual_with_para/train-*"}, {"split": "validation", "path": "multilingual_with_para/validation-*"}, {"split": "test", "path": "multilingual_with_para/test-*"}], "default": true}]}
2024-02-06T07:20:12+00:00
[ "2011.03080" ]
[ "ar", "bg", "de", "es", "fr", "hr", "hu", "it", "lt", "mk", "pl", "pt", "sq", "sr", "tr", "vi" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-found #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-Arabic #language-Bulgarian #language-German #language-Spanish #language-French #language-Croatian #language-Hungarian #language-Italian #language-Lithuanian #language-Macedonian #language-Polish #language-Portuguese #language-Albanian #language-Serbian #language-Turkish #language-Vietnamese #license-cc-by-sa-4.0 #arxiv-2011.03080 #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Paper: EXAMS: A Multi-Subject High School Examinations Dataset for Cross-Lingual and Multilingual Question Answering - Point of Contact: hardalov@@URL ### Dataset Summary EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. 
### Supported Tasks and Leaderboards ### Languages The languages in the dataset are: - ar - bg - de - es - fr - hr - hu - it - lt - mk - pl - pt - sq - sr - tr - vi ## Dataset Structure ### Data Instances An example of a data instance (with support paragraphs, in Bulgarian) is: ### Data Fields A data instance contains the following fields: - 'id': A question ID, unique across the dataset - 'question': the question contains the following: - 'stem': the textual stem of the question - 'choices': a set of 3 to 5 candidate answers, each of which has: - 'text': the text of the answer - 'label': a label in '['A', 'B', 'C', 'D', 'E']' used to match to the 'answerKey' - 'para': (optional) a supporting paragraph from Wikipedia in the same language as the question and answer - 'answerKey': the key corresponding to the right answer's 'label' - 'info': some additional information on the question, including: - 'grade': the school grade of the exam this question was taken from - 'subject': a free-text description of the academic subject - 'language': the English name of the language of this question ### Data Splits Depending on the configuration, the dataset has different splits: - "alignments": a single "full" split - "multilingual" and "multilingual_with_para": "train", "validation" and "test" splits - "crosslingual_test" and "crosslingual_with_para_test": a single "test" split - the remaining crosslingual configurations: "train" and "validation" splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization EXAMS was collected from official state exams prepared by the ministries of education of various countries. These exams are taken by students graduating from high school, and often require knowledge learned through the entire course. The questions cover a large variety of subjects and material based on the country’s education system.
They cover major school subjects such as Biology, Chemistry, Geography, History, and Physics, but also highly specialized ones such as Agriculture, Geology, and Informatics, as well as some applied and profiled studies. Some countries allow students to take official examinations in several languages. This dataset provides 9,857 parallel question pairs spread across seven languages, coming from Croatia (Croatian, Serbian, Italian, Hungarian), Hungary (Hungarian, German, French, Spanish, Croatian, Serbian, Italian), and North Macedonia (Macedonian, Albanian, Turkish). For all languages in the dataset, the first step in the data collection process was to download the PDF files per year, per subject, and per language (when parallel languages were available in the same source), convert the PDF files to text, and select those that were well formatted and followed the document structure. Then, regular expressions (RegEx) were used to parse the questions, their corresponding choices, and the correct answer choice. To ensure that all questions are answerable using textual input only, questions that contained visual information were removed; these were identified using a curated list of words such as map, table, picture, and graph in the corresponding language. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information The dataset, which contains paragraphs from Wikipedia, is licensed under CC-BY-SA 4.0. The code in this repository is licensed according to the LICENSE file. ### Contributions Thanks to @yjernite for adding this dataset.
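The "Data Instances" section above leaves its example blank. The following sketch is purely illustrative: the field names follow the schema described under "Data Fields", but every concrete value (ID, question, choices, grade) is invented here for demonstration. It also shows how the 'answerKey' maps back to a choice via its 'label':

```python
# Illustrative instance only: field layout from the card's "Data Fields"
# section; the concrete values below are hypothetical, not from the dataset.
instance = {
    "id": "1-bg-biology-0001",  # hypothetical question ID
    "question": {
        "stem": "Which organelle carries out photosynthesis?",
        "choices": {
            "text": ["Mitochondrion", "Chloroplast", "Ribosome", "Nucleus"],
            "label": ["A", "B", "C", "D"],
            "para": ["", "", "", ""],  # optional support paragraphs
        },
    },
    "answerKey": "B",
    "info": {"grade": 12, "subject": "Biology", "language": "Bulgarian"},
}

def correct_answer(example):
    """Return the text of the choice whose label matches answerKey."""
    choices = example["question"]["choices"]
    idx = choices["label"].index(example["answerKey"])
    return choices["text"][idx]

print(correct_answer(instance))  # Chloroplast
```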
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: EXAMS: A Multi-Subject High School Examinations Dataset for Cross-Lingual and Multilingual Question Answering\n- Point of Contact: hardalov@@URL", "### Dataset Summary\n\nEXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe languages in the dataset are:\n- ar\n- bg\n- de\n- es\n- fr\n- hr\n- hu\n- it\n- lt\n- mk\n- pl\n- pt\n- sq\n- sr\n- tr\n- vi", "## Dataset Structure", "### Data Instances\n\nAn example of a data instance (with support paragraphs, in Bulgarian) is:", "### Data Fields\n\nA data instance contains the following fields:\n- 'id': A question ID, unique across the dataset\n- 'question': the question contains the following:\n - 'stem': a stemmed representation of the question textual\n - 'choices': a set of 3 to 5 candidate answers, which each have:\n - 'text': the text of the answers\n - 'label': a label in '['A', 'B', 'C', 'D', 'E']' used to match to the 'answerKey'\n - 'para': (optional) a supported paragraph from Wikipedia in the same language as the question and answer\n- 'answerKey': the key corresponding to 
the right answer's 'label'\n- 'info': some additional information on the question including:\n - 'grade': the school grade for the exam this question was taken from\n - 'subject': a free text description of the academic subject\n - 'language': the English name of the language for this question", "### Data Splits\n\nDepending on the configuration, the dataset have different splits:\n- \"alignments\": a single \"full\" split\n- \"multilingual\" and \"multilingual_with_para\": \"train\", \"validation\" and \"test\" splits\n- \"crosslingual_test\" and \"crosslingual_with_para_test\": a single \"test\" split\n- the rest of crosslingual configurations: \"train\" and \"validation\" splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nEχαµs was collected from official state exams prepared by the ministries of education of various countries. These exams are taken by students graduating from high school, and often require knowledge learned through the entire course.\n\nThe questions cover a large variety of subjects and material based on the country’s education system. They cover major school subjects such as Biology, Chemistry, Geography, History, and Physics, but we also highly specialized ones such as Agriculture, Geology, Informatics, as well as some applied and profiled studies.\n\nSome countries allow students to take official examinations in several languages. 
This dataset provides 9,857 parallel question pairs spread across seven languages coming from Croatia (Croatian, Serbian, Italian, Hungarian), Hungary (Hungarian, German, French, Spanish, Croatian, Serbian, Italian), and North Macedonia (Macedonian, Albanian, Turkish).\n\nFor all languages in the dataset, the first step in the process of data collection was to download the PDF files per year, per subject, and per language (when parallel languages were available in the same source), convert the PDF files to text, and select those that were well formatted and followed the document structure.\n\nThen, Regular Expressions (RegEx) were used to parse the questions, their corresponding choices and the correct answer choice. In order to ensure that all our questions are answerable using textual input only, questions that contained visual information were removed, as selected by using curated list of words such as map, table, picture, graph, etc., in the corresponding language.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe dataset, which contains paragraphs from Wikipedia, is licensed under CC-BY-SA 4.0. The code in this repository is licensed according the LICENSE file.", "### Contributions\n\nThanks to @yjernite for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-found #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-Arabic #language-Bulgarian #language-German #language-Spanish #language-French #language-Croatian #language-Hungarian #language-Italian #language-Lithuanian #language-Macedonian #language-Polish #language-Portuguese #language-Albanian #language-Serbian #language-Turkish #language-Vietnamese #license-cc-by-sa-4.0 #arxiv-2011.03080 #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: EXAMS: A Multi-Subject High School Examinations Dataset for Cross-Lingual and Multilingual Question Answering\n- Point of Contact: hardalov@@URL", "### Dataset Summary\n\nEXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. 
It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe languages in the dataset are:\n- ar\n- bg\n- de\n- es\n- fr\n- hr\n- hu\n- it\n- lt\n- mk\n- pl\n- pt\n- sq\n- sr\n- tr\n- vi", "## Dataset Structure", "### Data Instances\n\nAn example of a data instance (with support paragraphs, in Bulgarian) is:", "### Data Fields\n\nA data instance contains the following fields:\n- 'id': A question ID, unique across the dataset\n- 'question': the question contains the following:\n - 'stem': a stemmed representation of the question textual\n - 'choices': a set of 3 to 5 candidate answers, which each have:\n - 'text': the text of the answers\n - 'label': a label in '['A', 'B', 'C', 'D', 'E']' used to match to the 'answerKey'\n - 'para': (optional) a supported paragraph from Wikipedia in the same language as the question and answer\n- 'answerKey': the key corresponding to the right answer's 'label'\n- 'info': some additional information on the question including:\n - 'grade': the school grade for the exam this question was taken from\n - 'subject': a free text description of the academic subject\n - 'language': the English name of the language for this question", "### Data Splits\n\nDepending on the configuration, the dataset have different splits:\n- \"alignments\": a single \"full\" split\n- \"multilingual\" and \"multilingual_with_para\": \"train\", \"validation\" and \"test\" splits\n- \"crosslingual_test\" and \"crosslingual_with_para_test\": a single \"test\" split\n- the rest of crosslingual configurations: \"train\" and \"validation\" splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nEχαµs was collected from official state exams prepared by the ministries of education of various countries. 
These exams are taken by students graduating from high school, and often require knowledge learned through the entire course.\n\nThe questions cover a large variety of subjects and material based on the country’s education system. They cover major school subjects such as Biology, Chemistry, Geography, History, and Physics, but we also highly specialized ones such as Agriculture, Geology, Informatics, as well as some applied and profiled studies.\n\nSome countries allow students to take official examinations in several languages. This dataset provides 9,857 parallel question pairs spread across seven languages coming from Croatia (Croatian, Serbian, Italian, Hungarian), Hungary (Hungarian, German, French, Spanish, Croatian, Serbian, Italian), and North Macedonia (Macedonian, Albanian, Turkish).\n\nFor all languages in the dataset, the first step in the process of data collection was to download the PDF files per year, per subject, and per language (when parallel languages were available in the same source), convert the PDF files to text, and select those that were well formatted and followed the document structure.\n\nThen, Regular Expressions (RegEx) were used to parse the questions, their corresponding choices and the correct answer choice. In order to ensure that all our questions are answerable using textual input only, questions that contained visual information were removed, as selected by using curated list of words such as map, table, picture, graph, etc., in the corresponding language.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe dataset, which contains paragraphs from Wikipedia, is licensed under CC-BY-SA 4.0. 
The code in this repository is licensed according to the LICENSE file.", "### Contributions\n\nThanks to @yjernite for adding this dataset." ]
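The visual-content filter described in the data collection notes above can be sketched in a few lines of Python. This is a minimal illustration only: the word list and function name here are assumptions for the sketch, not the curated per-language lists actually used by the dataset authors.

```python
import re

# Illustrative stand-in for the curated per-language word lists the card
# describes; the real lists cover words like map, table, picture, graph
# in each exam language.
VISUAL_WORDS = {"map", "table", "picture", "graph", "figure", "diagram"}

def is_text_only(question: str) -> bool:
    """Heuristically flag questions that seem answerable from text alone."""
    tokens = re.findall(r"[a-z]+", question.lower())
    return not any(tok in VISUAL_WORDS for tok in tokens)

questions = [
    "Which process converts glucose into energy?",
    "Based on the map below, which country borders Hungary?",
]
# Keep only text-only questions; the map-based question is dropped.
kept = [q for q in questions if is_text_only(q)]
```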
d7b706837ae29db6a86957e4e05e54e58ee83051
# Dataset Card for FACTCK BR ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/jghm-f/FACTCK.BR - **Repository:** https://github.com/jghm-f/FACTCK.BR - **Paper:** https://dl.acm.org/doi/10.1145/3323503.3361698 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A dataset to study Fake News in Portuguese, presenting supposedly false news items along with their respective fact checks and classifications. The data is collected from ClaimReview, a structured data schema used by fact check agencies to share their results in search engines, enabling data collection in real time. The FACTCK.BR dataset contains 1309 claims with their corresponding labels. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
factckbr
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "pretty_name": "FACTCK BR", "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "rating", "dtype": "float32"}, {"name": "best_rating", "dtype": "float32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "falso", "1": "distorcido", "2": "impreciso", "3": "exagerado", "4": "insustent\u00e1vel", "5": "verdadeiro", "6": "outros", "7": "subestimado", "8": "imposs\u00edvel provar", "9": "discut\u00edvel", "10": "sem contexto", "11": "de olho", "12": "verdadeiro, mas", "13": "ainda \u00e9 cedo para dizer"}}}}], "splits": [{"name": "train", "num_bytes": 750646, "num_examples": 1313}], "download_size": 721314, "dataset_size": 750646}}
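The 14-way `label` feature defined in the metadata above maps integers to Portuguese veracity ratings. A minimal decoding sketch follows; the list simply restates the `class_label` order from the metadata, and the function name is an illustrative assumption:

```python
# Portuguese veracity labels, in the integer order given by the dataset's
# class_label metadata (0 = "falso", ..., 13 = "ainda é cedo para dizer").
FACTCK_LABELS = [
    "falso", "distorcido", "impreciso", "exagerado", "insustentável",
    "verdadeiro", "outros", "subestimado", "impossível provar",
    "discutível", "sem contexto", "de olho", "verdadeiro, mas",
    "ainda é cedo para dizer",
]

def decode_label(label_id: int) -> str:
    """Turn an encoded integer label back into its human-readable name."""
    return FACTCK_LABELS[label_id]
```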
2024-01-18T11:03:30+00:00
[]
[ "pt" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-mit #region-us
# Dataset Card for FACTCK BR ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary A dataset to study Fake News in Portuguese, presenting supposedly false news items along with their respective fact checks and classifications. The data is collected from ClaimReview, a structured data schema used by fact check agencies to share their results in search engines, enabling data collection in real time. The FACTCK.BR dataset contains 1309 claims with their corresponding labels. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @hugoabonizio for adding this dataset.
[ "# Dataset Card for FACTCK BR", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nA dataset to study Fake News in Portuguese, presenting supposedly false news items along with their respective fact checks and classifications.\nThe data is collected from ClaimReview, a structured data schema used by fact check agencies to share their results in search engines, enabling data collection in real time.\nThe FACTCK.BR dataset contains 1309 claims with their corresponding labels.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @hugoabonizio for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-mit #region-us \n", "# Dataset Card for FACTCK BR", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nA dataset to study Fake News in Portuguese, presenting supposedly false news items along with their respective fact checks and classifications.\nThe data is collected from ClaimReview, a structured data schema used by fact check agencies to share their results in search engines, enabling data collection in real time.\nThe FACTCK.BR dataset contains 1309 claims with their corresponding labels.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @hugoabonizio for adding this dataset." ]
791806a682c05a9a673167679d42e1b455994f66
# Dataset Card for Fake News English ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://dl.acm.org/doi/10.1145/3201064.3201100 - **Repository:** https://github.com/jgolbeck/fakenews/ - **Paper:** https://doi.org/10.1145/3201064.3201100 - **Leaderboard:** - **Point of Contact:** Jennifer Golbeck (http://www.jengolbeck.com) ### Dataset Summary This dataset contains URLs of news articles classified as either fake or satire. The articles classified as fake also have the URL of a rebutting article. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances ``` { "article_number": 102, "url_of_article": "https://newslo.com/roger-stone-blames-obama-possibility-trump-alzheimers-attacks-president-caused-severe-stress/", "fake_or_satire": 1, # Fake "url_of_rebutting_article": "https://www.snopes.com/fact-check/donald-trumps-intelligence-quotient/" } ``` ### Data Fields - article_number: An integer used as an index for each row - url_of_article: A string which contains the URL of an article to be assessed and classified as either Fake or Satire - fake_or_satire: A class label for the above variable which can take two values: Fake (1) and Satire (0) - url_of_rebutting_article: A string which contains the URL of the article used to refute the article in question (present in url_of_article) ### Data Splits This dataset is not split; only the train split is available. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Golbeck, Jennifer Everett, Jennine Falak, Waleed Gieringer, Carl Graney, Jack Hoffman, Kelly Huth, Lindsay Ma, Zhenya Jha, Mayanka Khan, Misbah Kori, Varsha Mauriello, Matthew Lewis, Elo Mirano, George IV, William Mussenden, Sean Nelson, Tammie Mcwillie, Sean Pant, Akshat Cheakalos, Paul ### Licensing Information [More Information Needed] ### Citation Information @inproceedings{inproceedings, author = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul}, year = {2018}, month = {05}, pages = {17-21}, title = {Fake News vs Satire: A Dataset and Analysis}, doi = {10.1145/3201064.3201100} } ### Contributions Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
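Given the fields described in this card, collecting the rebuttal URLs for articles labelled Fake is straightforward. A minimal sketch follows; the records below are illustrative stand-ins shaped like the card's data instances, not actual rows from the corpus:

```python
# Illustrative records in the card's field layout; only articles
# labelled Fake (1) carry a rebutting URL.
records = [
    {"article_number": 102, "url_of_article": "https://example.com/a",
     "fake_or_satire": 1,  # Fake
     "url_of_rebutting_article": "https://example.com/rebuttal"},
    {"article_number": 103, "url_of_article": "https://example.com/b",
     "fake_or_satire": 0,  # Satire
     "url_of_rebutting_article": ""},
]

# Map each fake article's index to the URL of its rebutting article.
rebuttals = {r["article_number"]: r["url_of_rebutting_article"]
             for r in records if r["fake_or_satire"] == 1}
```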
fake_news_english
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Fake News English", "dataset_info": {"features": [{"name": "article_number", "dtype": "int32"}, {"name": "url_of_article", "dtype": "string"}, {"name": "fake_or_satire", "dtype": {"class_label": {"names": {"0": "Satire", "1": "Fake"}}}}, {"name": "url_of_rebutting_article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78078, "num_examples": 492}], "download_size": 3002233, "dataset_size": 78078}}
2024-01-18T11:03:32+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-unknown #region-us
# Dataset Card for Fake News English ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: Jennifer Golbeck (URL) ### Dataset Summary This dataset contains URLs of news articles classified as either fake or satire. The articles classified as fake also have the URL of a rebutting article. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - article_number: An integer used as an index for each row - url_of_article: A string which contains the URL of an article to be assessed and classified as either Fake or Satire - fake_or_satire: A class label for the above variable which can take two values: Fake (1) and Satire (0) - url_of_rebutting_article: A string which contains the URL of the article used to refute the article in question (present in url_of_article) ### Data Splits This dataset is not split; only the train split is available. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Golbeck, Jennifer Everett, Jennine Falak, Waleed Gieringer, Carl Graney, Jack Hoffman, Kelly Huth, Lindsay Ma, Zhenya Jha, Mayanka Khan, Misbah Kori, Varsha Mauriello, Matthew Lewis, Elo Mirano, George IV, William Mussenden, Sean Nelson, Tammie Mcwillie, Sean Pant, Akshat Cheakalos, Paul ### Licensing Information @inproceedings{inproceedings, author = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul}, year = {2018}, month = {05}, pages = {17-21}, title = {Fake News vs Satire: A Dataset and Analysis}, doi = {10.1145/3201064.3201100} } ### Contributions Thanks to @MisbahKhan789, @lhoestq for adding this dataset.
[ "# Dataset Card for Fake News English", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Jennifer Golbeck (URL)", "### Dataset Summary\nThis dataset contains URLs of news articles classified as either fake or satire. The articles classified as fake also have the URL of a rebutting article.", "### Supported Tasks and Leaderboards", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n- article_number: An integer used as an index for each row\n- url_of_article: A string which contains the URL of an article to be assessed and classified as either Fake or Satire\n- fake_or_satire: A class label for the above variable which can take two values: Fake (1) and Satire (0)\n- url_of_rebutting_article: A string which contains the URL of the article used to refute the article in question (present in url_of_article)", "### Data Splits\nThis dataset is not split; only the train split is available.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\nGolbeck, Jennifer\nEverett, Jennine \nFalak, Waleed\nGieringer, Carl\nGraney, Jack \nHoffman, Kelly \nHuth, Lindsay \nMa, Zhenya \nJha, Mayanka \nKhan, Misbah \nKori, Varsha \nMauriello, Matthew \nLewis, Elo \nMirano, George \nIV, William \nMussenden, Sean \nNelson, Tammie \nMcwillie, Sean \nPant, Akshat \nCheakalos, Paul", "### Licensing Information\n\n\n\n@inproceedings{inproceedings,\nauthor = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul},\nyear = {2018},\nmonth = {05},\npages = {17-21},\ntitle = {Fake News vs Satire: A Dataset and Analysis},\ndoi = {10.1145/3201064.3201100}\n}", "### Contributions\n\nThanks to @MisbahKhan789, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-unknown #region-us \n", "# Dataset Card for Fake News English", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Jennifer Golbeck (URL)", "### Dataset Summary\nThis dataset contains URLs of news articles classified as either fake or satire. 
The articles classified as fake also have the URL of a rebutting article.", "### Supported Tasks and Leaderboards", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n- article_number: An integer used as an index for each row\n- url_of_article: A string which contains the URL of an article to be assessed and classified as either Fake or Satire\n- fake_or_satire: A class label for the above variable which can take two values: Fake (1) and Satire (0)\n- url_of_rebutting_article: A string which contains the URL of the article used to refute the article in question (present in url_of_article)", "### Data Splits\nThis dataset is not split; only the train split is available.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\nGolbeck, Jennifer\nEverett, Jennine \nFalak, Waleed\nGieringer, Carl\nGraney, Jack \nHoffman, Kelly \nHuth, Lindsay \nMa, Zhenya \nJha, Mayanka \nKhan, Misbah \nKori, Varsha \nMauriello, Matthew \nLewis, Elo \nMirano, George \nIV, William \nMussenden, Sean \nNelson, Tammie \nMcwillie, Sean \nPant, Akshat \nCheakalos, Paul", "### Licensing Information\n\n\n\n@inproceedings{inproceedings,\nauthor = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul},\nyear = {2018},\nmonth = {05},\npages = {17-21},\ntitle = {Fake News vs Satire: A Dataset and Analysis},\ndoi = {10.1145/3201064.3201100}\n}", "### Contributions\n\nThanks to @MisbahKhan789, @lhoestq for adding this dataset." ]
d622d04b3bee15be391dfec4f55fd6980738fe6b
# Dataset Card for Fake News Filipino ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Fake News Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Repository:** [Fake News Filipino repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Paper:** [LREC 2020 paper](http://www.lrec-conf.org/proceedings/lrec2020/index.html) - **Leaderboard:** - **Point of Contact:** [Jan Christian Cruz](mailto:[email protected]) ### Dataset Summary Low-Resource Fake News Detection Corpora in Filipino. The first of its kind. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular. 
## Dataset Structure ### Data Instances Sample data: ``` { "label": "0", "article": "Sa 8-pahinang desisyon, pinaboran ng Sandiganbayan First Division ang petition for Writ of Preliminary Attachment/Garnishment na inihain ng prosekusyon laban sa mambabatas." } ``` ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation Fake news articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real news articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera. ### Curation Rationale We remedy the lack of a proper, curated benchmark dataset for fake news detection in Filipino by constructing and producing what we call “Fake News Filipino.” ### Source Data #### Initial Data Collection and Normalization We construct the dataset by scraping our source websites, encoding all characters into UTF-8. Preprocessing was light to keep information intact: we retain capitalization and punctuation, and do not correct any misspelled words. #### Who are the source language producers? Jan Christian Blaise Cruz, Julianne Agatha Tan, and Charibeth Cheng ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [Jan Christian Cruz](mailto:[email protected]), Julianne Agatha Tan, and Charibeth Cheng ### Licensing Information [More Information Needed] ### Citation Information @inproceedings{cruz2020localization, title={Localization of Fake News Detection via Multitask Transfer Learning}, author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth}, booktitle={Proceedings of The 12th Language Resources and Evaluation Conference}, pages={2596--2604}, year={2020} } ### Contributions Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
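The card states the corpus is half real and half fake; a quick class-balance check over instances shaped like the sample above can be sketched as follows. The two rows here are illustrative stand-ins, not actual corpus entries:

```python
from collections import Counter

# Illustrative rows in the card's {"label", "article"} shape; label
# values follow the card's 0/1 class encoding.
samples = [
    {"label": 0, "article": "Halimbawang balita sa Filipino."},
    {"label": 1, "article": "Isa pang halimbawang balita."},
]

# Count instances per label and check the two classes are balanced.
counts = Counter(row["label"] for row in samples)
is_balanced = counts[0] == counts[1]
```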
fake_news_filipino
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:tl", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["tl"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "paperswithcode_id": "fake-news-filipino-dataset", "pretty_name": "Fake News Filipino", "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3623685, "num_examples": 3206}], "download_size": 1313458, "dataset_size": 3623685}}
2024-01-18T11:03:33+00:00
[]
[ "tl" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Tagalog #license-unknown #region-us
# Dataset Card for Fake News Filipino ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Fake News Filipino homepage - Repository: Fake News Filipino repository - Paper: LREC 2020 paper - Leaderboard: - Point of Contact: Jan Christian Cruz ### Dataset Summary Low-Resource Fake News Detection Corpora in Filipino. The first of its kind. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake. ### Supported Tasks and Leaderboards ### Languages The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular. ## Dataset Structure ### Data Instances Sample data: ### Data Fields ### Data Splits ## Dataset Creation Fake news articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real news articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera. ### Curation Rationale We remedy the lack of a proper, curated benchmark dataset for fake news detection in Filipino by constructing and producing what we call “Fake News Filipino.” ### Source Data #### Initial Data Collection and Normalization We construct the dataset by scraping our source websites, encoding all characters into UTF-8. 
Preprocessing was light to keep information intact: we retain capitalization and punctuation, and do not correct any misspelled words. #### Who are the source language producers? Jan Christian Blaise Cruz, Julianne Agatha Tan, and Charibeth Cheng ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Jan Christian Cruz, Julianne Agatha Tan, and Charibeth Cheng ### Licensing Information @inproceedings{cruz2020localization, title={Localization of Fake News Detection via Multitask Transfer Learning}, author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth}, booktitle={Proceedings of The 12th Language Resources and Evaluation Conference}, pages={2596--2604}, year={2020} } ### Contributions Thanks to @anaerobeth for adding this dataset.
[ "# Dataset Card for Fake News Filipino", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Fake News Filipino homepage\n- Repository: Fake News Filipino repository\n- Paper: LREC 2020 paper\n- Leaderboard:\n- Point of Contact: Jan Christian Cruz", "### Dataset Summary\n\nLow-Resource Fake News Detection Corpora in Filipino. The first of its kind. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.", "## Dataset Structure", "### Data Instances\n\nSample data:", "### Data Fields", "### Data Splits", "## Dataset Creation\n\nFake news articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real news articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera.", "### Curation Rationale\n\nWe remedy the lack of a proper, curated benchmark dataset for fake news detection in Filipino by constructing and producing what we call “Fake News Filipino.”", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe construct the dataset by scraping our source websites, encoding all characters into UTF-8. 
Preprocessing was light to keep information intact: we retain capitalization and punctuation, and do not correct any misspelled words.", "#### Who are the source language producers?\n\nJan Christian Blaise Cruz, Julianne Agatha Tan, and Charibeth Cheng", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nJan Christian Cruz, Julianne Agatha Tan, and Charibeth Cheng", "### Licensing Information\n\n\n\n\n\n @inproceedings{cruz2020localization,\n title={Localization of Fake News Detection via Multitask Transfer Learning},\n author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},\n booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},\n pages={2596--2604},\n year={2020}\n }", "### Contributions\n\nThanks to @anaerobeth for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Tagalog #license-unknown #region-us \n", "# Dataset Card for Fake News Filipino", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Fake News Filipino homepage\n- Repository: Fake News Filipino repository\n- Paper: LREC 2020 paper\n- Leaderboard:\n- Point of Contact: Jan Christian Cruz", "### Dataset Summary\n\nLow-Resource Fake News Detection Corpora in Filipino. The first of its kind. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.", "## Dataset Structure", "### Data Instances\n\nSample data:", "### Data Fields", "### Data Splits", "## Dataset Creation\n\nFake news articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). 
Real news articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera.", "### Curation Rationale\n\nWe remedy the lack of a proper, curated benchmark dataset for fake news detection in Filipino by constructing and producing what we call “Fake News Filipino.”", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe construct the dataset by scraping our source websites, encoding all characters into UTF-8. Preprocessing was light to keep information intact: we retain capitalization and punctuation, and do not correct any misspelled words.", "#### Who are the source language producers?\n\nJan Christian Blaise Cruz, Julianne Agatha Tan, and Charibeth Cheng", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nJan Christian Cruz, Julianne Agatha Tan, and Charibeth Cheng", "### Licensing Information\n\n\n\n\n\n @inproceedings{cruz2020localization,\n title={Localization of Fake News Detection via Multitask Transfer Learning},\n author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},\n booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},\n pages={2596--2604},\n year={2020}\n }", "### Contributions\n\nThanks to @anaerobeth for adding this dataset." ]
8aa5ce8faee4ed4dec6a98ed326fba6f2768fd2b
# Dataset Card for FarsiNews ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** []() - **Repository:** [link](https://github.com/sci2lab/Farsi-datasets) - **Paper:** []() - **Leaderboard:** []() - **Point of Contact:** []() ### Dataset Summary https://github.com/sci2lab/Farsi-datasets Contains Farsi (Persian) datasets for Machine Learning tasks, particularly NLP. These datasets have been extracted from the RSS feed of two Farsi news agency websites: - Hamshahri - RadioFarda ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure [More Information Needed] ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information https://github.com/sci2lab/Farsi-datasets ### Contributions Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
farsi_news
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fa", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fa"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "FarsiNews", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "tags", "sequence": "string"}], "splits": [{"name": "hamshahri", "num_bytes": 1267659, "num_examples": 2203}, {"name": "radiofarda", "num_bytes": 265272, "num_examples": 284}], "download_size": 1648337, "dataset_size": 1532931}}
2024-01-18T11:03:34+00:00
[]
[ "fa" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Persian #license-unknown #region-us
# Dataset Card for FarsiNews ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: []() - Repository: link - Paper: []() - Leaderboard: []() - Point of Contact: []() ### Dataset Summary URL Contains Farsi (Persian) datasets for Machine Learning tasks, particularly NLP. These datasets have been extracted from the RSS feed of two Farsi news agency websites: - Hamshahri - RadioFarda ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information URL ### Contributions Thanks to @Narsil for adding this dataset.
[ "# Dataset Card for FarsiNews", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: []()\n- Repository: link\n- Paper: []()\n- Leaderboard: []()\n- Point of Contact: []()", "### Dataset Summary\n\nURL\nContains Farsi (Persian) datasets for Machine Learning tasks, particularly NLP.\nThese datasets have been extracted from the RSS feed of two Farsi news agency websites:\n\n- Hamshahri\n- RadioFarda", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nURL", "### Contributions\n\nThanks to @Narsil for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Persian #license-unknown #region-us \n", "# Dataset Card for FarsiNews", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: []()\n- Repository: link\n- Paper: []()\n- Leaderboard: []()\n- Point of Contact: []()", "### Dataset Summary\n\nURL\nContains Farsi (Persian) datasets for Machine Learning tasks, particularly NLP.\nThese datasets have been extracted from the RSS feed of two Farsi news agency websites:\n\n- Hamshahri\n- RadioFarda", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nURL", "### Contributions\n\nThanks to @Narsil for adding this dataset." ]
8bbdd6c75ac5dede8443382cce26a0dcd58ea532
# Dataset Card for FashionMNIST ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist) - **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist) - **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits. ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given image of Zalando's article into one of 10 classes. 
The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-fashion-mnist). ### Languages [More Information Needed] ## Dataset Structure ### Data Instances A data point comprises an image and its label. ``` { 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x27601169DD8>, 'label': 9 } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `label`: an integer between 0 and 9 representing the classes with the following mapping: | Label | Description | | --- | --- | | 0 | T-shirt/top | | 1 | Trouser | | 2 | Pullover | | 3 | Dress | | 4 | Coat | | 5 | Sandal | | 6 | Shirt | | 7 | Sneaker | | 8 | Bag | | 9 | Ankle boot | ### Data Splits The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images. ## Dataset Creation ### Curation Rationale **From the arXiv paper:** The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others." Here are some good reasons: - MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read "Most pairs of MNIST digits can be distinguished pretty well by just one pixel." - MNIST is overused. 
In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST. - MNIST cannot represent modern CV tasks, as noted in this April 2017 Twitter thread by deep learning expert/Keras author François Chollet. ### Source Data #### Initial Data Collection and Normalization **From the arXiv paper:** Fashion-MNIST is based on the assortment on Zalando’s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and is stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny. We use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline: 1. Converting the input to a PNG image. 2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space. 3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over. 4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines. 5. Extending the shortest edge to 28 and putting the image at the center of the canvas. 6. Negating the intensities of the image. 7. Converting the image to 8-bit grayscale pixels. #### Who are the source language producers? 
**From the arXiv paper:** Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. ### Annotations #### Annotation process **From the arXiv paper:** For the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Zalando is Europe’s largest online fashion platform. Each product contains only one silhouette code. #### Who are the annotators? **From the arXiv paper:** The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Han Xiao and Kashif Rasul and Roland Vollgraf ### Licensing Information MIT Licence ### Citation Information ``` @article{DBLP:journals/corr/abs-1708-07747, author = {Han Xiao and Kashif Rasul and Roland Vollgraf}, title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms}, journal = {CoRR}, volume = {abs/1708.07747}, year = {2017}, url = {http://arxiv.org/abs/1708.07747}, archivePrefix = {arXiv}, eprint = {1708.07747}, timestamp = {Mon, 13 Aug 2018 16:47:27 +0200}, biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
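The 10-class label table in the card's Data Fields section can be sketched as a small lookup; a minimal illustration (not part of the original card), with class names copied from that table:

```python
# Integer-label -> class-name mapping, copied from the card's Data Fields table.
FASHION_MNIST_CLASSES = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]

def label_name(label: int) -> str:
    """Return the human-readable class for an integer label in 0..9."""
    if not 0 <= label <= 9:
        raise ValueError(f"label must be in 0..9, got {label}")
    return FASHION_MNIST_CLASSES[label]

print(label_name(9))  # -> Ankle boot
```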
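The seven-step thumbnail conversion pipeline quoted from the arXiv paper above can be approximated in code. This is a rough sketch of my own (not Zalando's published implementation) using Pillow; the 5% trimming tolerance is simplified to an exact-difference bounding box, and `UnsharpMask` stands in for the Gaussian sharpening operator:

```python
# Approximate reconstruction of the card's 7-step thumbnail -> 28x28 pipeline.
from PIL import Image, ImageChops, ImageFilter, ImageOps

def to_fashion_mnist(thumbnail: Image.Image) -> Image.Image:
    img = thumbnail.convert("RGB")                       # 1. normalize input (PNG step)
    corner = img.getpixel((0, 0))
    bg = Image.new("RGB", img.size, corner)
    bbox = ImageChops.difference(img, bg).getbbox()      # 2. trim edges near the corner color
    if bbox:
        img = img.crop(bbox)
    img.thumbnail((28, 28), Image.NEAREST)               # 3. longest edge -> 28 via subsampling
    img = img.filter(ImageFilter.UnsharpMask(radius=1))  # 4. Gaussian-based sharpening
    canvas = Image.new("RGB", (28, 28), corner)          # 5. pad shortest edge to 28, centered
    canvas.paste(img, ((28 - img.width) // 2, (28 - img.height) // 2))
    return ImageOps.invert(canvas).convert("L")          # 6. negate, 7. 8-bit grayscale
```

Feeding it a 51 × 73 thumbnail, as described in the paper, yields a 28 × 28 single-channel image.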
fashion_mnist
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "arxiv:1708.07747", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "fashion-mnist", "pretty_name": "FashionMNIST", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "T - shirt / top", "1": "Trouser", "2": "Pullover", "3": "Dress", "4": "Coat", "5": "Sandal", "6": "Shirt", "7": "Sneaker", "8": "Bag", "9": "Ankle boot"}}}}], "config_name": "fashion_mnist", "splits": [{"name": "train", "num_bytes": 31296655, "num_examples": 60000}, {"name": "test", "num_bytes": 5233818, "num_examples": 10000}], "download_size": 30878645, "dataset_size": 36530473}}
2024-01-18T11:03:36+00:00
[ "1708.07747" ]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-1708.07747 #region-us
Dataset Card for FashionMNIST ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: GitHub * Repository: GitHub * Paper: arXiv * Leaderboard: * Point of Contact: ### Dataset Summary Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits. ### Supported Tasks and Leaderboards * 'image-classification': The goal of this task is to classify a given image of Zalando's article into one of 10 classes. The leaderboard is available here. ### Languages Dataset Structure ----------------- ### Data Instances A data point comprises an image and its label. ### Data Fields * 'image': A 'PIL.Image.Image' object containing the 28x28 image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. 
* 'label': an integer between 0 and 9 representing the classes with the following mapping: ### Data Splits The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images. Dataset Creation ---------------- ### Curation Rationale From the arXiv paper: The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others." Here are some good reasons: * MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read "Most pairs of MNIST digits can be distinguished pretty well by just one pixel." * MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST. * MNIST cannot represent modern CV tasks, as noted in this April 2017 Twitter thread by deep learning expert/Keras author François Chollet. ### Source Data #### Initial Data Collection and Normalization From the arXiv paper: Fashion-MNIST is based on the assortment on Zalando’s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and is stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny. 
We use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline: 1. Converting the input to a PNG image. 2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space. 3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over. 4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines. 5. Extending the shortest edge to 28 and putting the image at the center of the canvas. 6. Negating the intensities of the image. 7. Converting the image to 8-bit grayscale pixels. #### Who are the source language producers? From the arXiv paper: Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. ### Annotations #### Annotation process From the arXiv paper: For the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Zalando is Europe’s largest online fashion platform. Each product contains only one silhouette code. #### Who are the annotators? From the arXiv paper: The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Han Xiao and Kashif Rasul and Roland Vollgraf ### Licensing Information MIT Licence ### Contributions Thanks to @gchhablani for adding this dataset.
[ "### Dataset Summary\n\n\nFashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image of Zalando's article into one of 10 classes. The leaderboard is available here.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point comprises an image and its label.", "### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing the 28x28 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'label': an integer between 0 and 9 representing the classes with the following mapping:", "### Data Splits\n\n\nThe data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nFrom the arXiv paper:\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. 
\"Well, if it does work on MNIST, it may still fail on others.\"\n\n\nHere are some good reasons:\n\n\n* MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n* MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n* MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author François Chollet.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nFrom the arXiv paper:\nFashion-MNIST is based on the assortment on Zalando’s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\n\n\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline:\n\n\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. 
Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.", "#### Who are the source language producers?\n\n\nFrom the arXiv paper:\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.", "### Annotations", "#### Annotation process\n\n\nFrom the arXiv paper:\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe’s largest online fashion platform. Each product contains only one silhouette code.", "#### Who are the annotators?\n\n\nFrom the arXiv paper:\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nHan Xiao and Kashif Rasul and Roland Vollgraf", "### Licensing Information\n\n\nMIT Licence", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-1708.07747 #region-us \n", "### Dataset Summary\n\n\nFashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image of Zalando's article into one of 10 classes. The leaderboard is available here.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point comprises an image and its label.", "### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing the 28x28 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'label': an integer between 0 and 9 representing the classes with the following mapping:", "### Data Splits\n\n\nThe data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nFrom the arXiv paper:\nThe original MNIST dataset contains a lot of handwritten digits. 
Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\n\n\nHere are some good reasons:\n\n\n* MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n* MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n* MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author François Chollet.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nFrom the arXiv paper:\nFashion-MNIST is based on the assortment on Zalando’s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\n\n\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline:\n\n\n1. 
Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.", "#### Who are the source language producers?\n\n\nFrom the arXiv paper:\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.", "### Annotations", "#### Annotation process\n\n\nFrom the arXiv paper:\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe’s largest online fashion platform. Each product contains only one silhouette code.", "#### Who are the annotators?\n\n\nFrom the arXiv paper:\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nHan Xiao and Kashif Rasul and Roland Vollgraf", "### Licensing Information\n\n\nMIT Licence", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
2a74f2909caf2b8656343aeb8203e50bf84dcb56
# Dataset Card for "fever" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://fever.ai/](https://fever.ai/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text to structured knowledge. 
There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction. - FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. - FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to 1000 instances with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only novel claims (i.e. not contained in the original FEVER dataset) were considered as valid entries to the shared task. The submissions were then manually evaluated for Correctness (grammatical, appropriately labeled, and meeting the FEVER annotation guidelines' requirements). ### Supported Tasks and Leaderboards The task is verification of textual claims against textual sources. When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in verification systems it is retrieved from a large set of documents in order to form the evidence. ### Languages The dataset is in English. 
## Dataset Structure ### Data Instances #### v1.0 - **Size of downloaded dataset files:** 44.86 MB - **Size of the generated dataset:** 40.05 MB - **Total amount of disk used:** 84.89 MB An example of 'train' looks as follows. ``` {'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.', 'evidence_wiki_url': 'Nikolaj_Coster-Waldau', 'label': 'SUPPORTS', 'id': 75397, 'evidence_id': 104971, 'evidence_sentence_id': 7, 'evidence_annotation_id': 92206} ``` #### v2.0 - **Size of downloaded dataset files:** 0.39 MB - **Size of the generated dataset:** 0.30 MB - **Total amount of disk used:** 0.70 MB An example of 'validation' looks as follows. ``` {'claim': "There is a convicted statutory rapist called Chinatown's writer.", 'evidence_wiki_url': '', 'label': 'NOT ENOUGH INFO', 'id': 500000, 'evidence_id': -1, 'evidence_sentence_id': -1, 'evidence_annotation_id': 269158} ``` #### wiki_pages - **Size of downloaded dataset files:** 1.71 GB - **Size of the generated dataset:** 7.25 GB - **Total amount of disk used:** 8.97 GB An example of 'wikipedia_pages' looks as follows. ``` {'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ', 'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t', 'id': '1928_in_association_football'} ``` ### Data Fields The data fields are the same among all splits. #### v1.0 - `id`: an `int32` feature. - `label`: a `string` feature. - `claim`: a `string` feature. - `evidence_annotation_id`: an `int32` feature. - `evidence_id`: an `int32` feature. - `evidence_wiki_url`: a `string` feature. - `evidence_sentence_id`: an `int32` feature. #### v2.0 - `id`: an `int32` feature. - `label`: a `string` feature. - `claim`: a `string` feature. - `evidence_annotation_id`: an `int32` feature. - `evidence_id`: an `int32` feature. - `evidence_wiki_url`: a `string` feature. - `evidence_sentence_id`: an `int32` feature. 
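The `lines` value in the wiki_pages example above packs one sentence per newline, each prefixed with a tab-separated sentence id. A small helper (our own sketch, not part of the dataset tooling) can split it so that `evidence_sentence_id` values from v1.0 can be looked up:

```python
def parse_lines(lines_field):
    """Split a wiki_pages `lines` value into {sentence_id: sentence}.

    Each entry is "<id>\t<sentence>"; empty trailing sentences are kept
    so evidence_sentence_id lookups stay valid.
    """
    sentences = {}
    for entry in lines_field.split("\n"):
        sid, _, text = entry.partition("\t")
        if sid.isdigit():
            sentences[int(sid)] = text
    return sentences

# The `lines` string from the wikipedia_pages example above.
page_lines = ("0\tThe following are the football -LRB- soccer -RRB- "
              "events of the year 1928 throughout the world .\n1\t")
parsed = parse_lines(page_lines)
print(parsed[0][:13])  # The following
```

Note that `partition` splits only on the first tab, so any extra tab-separated material after the sentence text would be kept verbatim.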
#### wiki_pages - `id`: a `string` feature. - `text`: a `string` feature. - `lines`: a `string` feature. ### Data Splits #### v1.0 | | train | unlabelled_dev | labelled_dev | paper_dev | unlabelled_test | paper_test | |------|-------:|---------------:|-------------:|----------:|----------------:|-----------:| | v1.0 | 311431 | 19998 | 37566 | 18999 | 19998 | 18567 | #### v2.0 | | validation | |------|-----------:| | v2.0 | 2384 | #### wiki_pages | | wikipedia_pages | |------------|----------------:| | wiki_pages | 5416537 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information FEVER license: ``` These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms. 
``` ### Citation Information If you use "FEVER Dataset", please cite: ```bibtex @inproceedings{Thorne18Fever, author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit}, title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}}, booktitle = {NAACL-HLT}, year = {2018} } ``` If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite: ```bibtex @inproceedings{Thorne19FEVER2, author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit}, title = {The {FEVER2.0} Shared Task}, booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}}, year = {2019} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
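The v1.0 example earlier in the card suggests that each row carries a single evidence annotation, so one claim id can appear on several rows (which would also explain why the train split has more rows than there are claims). A sketch under that assumption — the function name and the toy input rows below are ours, modeled on the example record — regroups such rows into one record per claim:

```python
from collections import defaultdict

def group_by_claim(rows):
    """Collect per-evidence rows into one record per claim id (sketch)."""
    claims = defaultdict(lambda: {"claim": None, "label": None, "evidence": []})
    for row in rows:
        rec = claims[row["id"]]
        rec["claim"] = row["claim"]
        rec["label"] = row["label"]
        rec["evidence"].append(
            (row["evidence_wiki_url"], row["evidence_sentence_id"])
        )
    return dict(claims)

# Toy rows shaped like the v1.0 example record (not real dataset rows).
rows = [
    {"id": 75397, "label": "SUPPORTS",
     "claim": "Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.",
     "evidence_wiki_url": "Nikolaj_Coster-Waldau", "evidence_sentence_id": 7},
    {"id": 75397, "label": "SUPPORTS",
     "claim": "Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.",
     "evidence_wiki_url": "Nikolaj_Coster-Waldau", "evidence_sentence_id": 8},
]
grouped = group_by_claim(rows)
print(len(grouped), len(grouped[75397]["evidence"]))  # 1 2
```

Each evidence tuple pairs a Wikipedia page id with a sentence index into that page's `lines` field from the wiki_pages config.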
fever
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|wikipedia", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "knowledge-verification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|wikipedia"], "task_categories": ["text-classification"], "task_ids": [], "paperswithcode_id": "fever", "pretty_name": "FEVER", "tags": ["knowledge-verification"], "dataset_info": [{"config_name": "v1.0", "features": [{"name": "id", "dtype": "int32"}, {"name": "label", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "evidence_annotation_id", "dtype": "int32"}, {"name": "evidence_id", "dtype": "int32"}, {"name": "evidence_wiki_url", "dtype": "string"}, {"name": "evidence_sentence_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 29591412, "num_examples": 311431}, {"name": "labelled_dev", "num_bytes": 3643157, "num_examples": 37566}, {"name": "unlabelled_dev", "num_bytes": 1548965, "num_examples": 19998}, {"name": "unlabelled_test", "num_bytes": 1617002, "num_examples": 19998}, {"name": "paper_dev", "num_bytes": 1821489, "num_examples": 18999}, {"name": "paper_test", "num_bytes": 1821668, "num_examples": 18567}], "download_size": 44853972, "dataset_size": 40043693}, {"config_name": "v2.0", "features": [{"name": "id", "dtype": "int32"}, {"name": "label", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "evidence_annotation_id", "dtype": "int32"}, {"name": "evidence_id", "dtype": "int32"}, {"name": "evidence_wiki_url", "dtype": "string"}, {"name": "evidence_sentence_id", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 306243, "num_examples": 2384}], "download_size": 392466, "dataset_size": 306243}, {"config_name": "wiki_pages", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "lines", "dtype": "string"}], "splits": [{"name": "wikipedia_pages", "num_bytes": 7254115038, "num_examples": 5416537}], "download_size": 1713485474, 
"dataset_size": 7254115038}]}
2024-01-18T11:03:38+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|wikipedia #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #knowledge-verification #region-us
Dataset Card for "fever" ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: ### Dataset Summary With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction. * FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. * FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of participants in the Breaker phase of the 2019 shared task. 
Participants (Breakers) were tasked with generating adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to 1000 instances with equal number of instances for each of the three classes (Supported, Refuted NotEnoughInfo). Only novel claims (i.e. not contained in the original FEVER dataset) were considered as valid entries to the shared task. The submissions were then manually evaluated for Correctness (grammatical, appropriately labeled and meet the FEVER annotation guidelines requirements). ### Supported Tasks and Leaderboards The task is verification of textual claims against textual sources. When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the passage to verify each claim is given, and in recent years it typically consists a single sentence, while in verification systems it is retrieved from a large set of documents in order to form the evidence. ### Languages The dataset is in English. Dataset Structure ----------------- ### Data Instances #### v1.0 * Size of downloaded dataset files: 44.86 MB * Size of the generated dataset: 40.05 MB * Total amount of disk used: 84.89 MB An example of 'train' looks as follows. #### v2.0 * Size of downloaded dataset files: 0.39 MB * Size of the generated dataset: 0.30 MB * Total amount of disk used: 0.70 MB An example of 'validation' looks as follows. #### wiki\_pages * Size of downloaded dataset files: 1.71 GB * Size of the generated dataset: 7.25 GB * Total amount of disk used: 8.97 GB An example of 'wikipedia\_pages' looks as follows. ### Data Fields The data fields are the same among all splits. #### v1.0 * 'id': a 'int32' feature. * 'label': a 'string' feature. * 'claim': a 'string' feature. * 'evidence\_annotation\_id': a 'int32' feature. * 'evidence\_id': a 'int32' feature. * 'evidence\_wiki\_url': a 'string' feature. * 'evidence\_sentence\_id': a 'int32' feature. #### v2.0 * 'id': a 'int32' feature. 
* 'label': a 'string' feature. * 'claim': a 'string' feature. * 'evidence\_annotation\_id': a 'int32' feature. * 'evidence\_id': a 'int32' feature. * 'evidence\_wiki\_url': a 'string' feature. * 'evidence\_sentence\_id': a 'int32' feature. #### wiki\_pages * 'id': a 'string' feature. * 'text': a 'string' feature. * 'lines': a 'string' feature. ### Data Splits #### v1.0 #### v2.0 #### wiki\_pages Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information FEVER license: If you use "FEVER Dataset", please cite: If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite: ### Contributions Thanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun, @albertvillanova for adding this dataset.
[ "### Dataset Summary\n\n\nWith billions of individual pages on the web providing information on almost every conceivable topic, we should have\nthe ability to collect facts that answer almost every conceivable question. However, only a small fraction of this\ninformation is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to\ntransform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot\nof recent research and media coverage: false information coming from unreliable sources.\n\n\nThe FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.\n\n\n* FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences\nextracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims\nare classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the\nsentence(s) forming the necessary evidence for their judgment.\n* FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of\nparticipants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating\nadversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to\n1000 instances with equal number of instances for each of the three classes (Supported, Refuted NotEnoughInfo). Only\nnovel claims (i.e. 
not contained in the original FEVER dataset) were considered as valid entries to the shared task.\nThe submissions were then manually evaluated for Correctness (grammatical, appropriately labeled and meet the FEVER\nannotation guidelines requirements).", "### Supported Tasks and Leaderboards\n\n\nThe task is verification of textual claims against textual sources.\n\n\nWhen compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the\npassage to verify each claim is given, and in recent years it typically consists a single sentence, while in\nverification systems it is retrieved from a large set of documents in order to form the evidence.", "### Languages\n\n\nThe dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### v1.0\n\n\n* Size of downloaded dataset files: 44.86 MB\n* Size of the generated dataset: 40.05 MB\n* Total amount of disk used: 84.89 MB\n\n\nAn example of 'train' looks as follows.", "#### v2.0\n\n\n* Size of downloaded dataset files: 0.39 MB\n* Size of the generated dataset: 0.30 MB\n* Total amount of disk used: 0.70 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_pages\n\n\n* Size of downloaded dataset files: 1.71 GB\n* Size of the generated dataset: 7.25 GB\n* Total amount of disk used: 8.97 GB\n\n\nAn example of 'wikipedia\\_pages' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### v1.0\n\n\n* 'id': a 'int32' feature.\n* 'label': a 'string' feature.\n* 'claim': a 'string' feature.\n* 'evidence\\_annotation\\_id': a 'int32' feature.\n* 'evidence\\_id': a 'int32' feature.\n* 'evidence\\_wiki\\_url': a 'string' feature.\n* 'evidence\\_sentence\\_id': a 'int32' feature.", "#### v2.0\n\n\n* 'id': a 'int32' feature.\n* 'label': a 'string' feature.\n* 'claim': a 'string' feature.\n* 'evidence\\_annotation\\_id': a 'int32' feature.\n* 'evidence\\_id': a 'int32' feature.\n* 'evidence\\_wiki\\_url': a 
'string' feature.\n* 'evidence\\_sentence\\_id': a 'int32' feature.", "#### wiki\\_pages\n\n\n* 'id': a 'string' feature.\n* 'text': a 'string' feature.\n* 'lines': a 'string' feature.", "### Data Splits", "#### v1.0", "#### v2.0", "#### wiki\\_pages\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nFEVER license:\n\n\nIf you use \"FEVER Dataset\", please cite:\n\n\nIf you use \"FEVER 2.0 Adversarial Attacks Dataset\", please cite:", "### Contributions\n\n\nThanks to @thomwolf, @lhoestq,\n@mariamabarham, @lewtun,\n@albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|wikipedia #language-English #license-cc-by-sa-3.0 #license-gpl-3.0 #knowledge-verification #region-us \n", "### Dataset Summary\n\n\nWith billions of individual pages on the web providing information on almost every conceivable topic, we should have\nthe ability to collect facts that answer almost every conceivable question. However, only a small fraction of this\ninformation is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to\ntransform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot\nof recent research and media coverage: false information coming from unreliable sources.\n\n\nThe FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.\n\n\n* FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences\nextracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims\nare classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the\nsentence(s) forming the necessary evidence for their judgment.\n* FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of\nparticipants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating\nadversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to\n1000 instances with equal number of instances for each of the three classes (Supported, Refuted NotEnoughInfo). Only\nnovel claims (i.e. 
not contained in the original FEVER dataset) were considered as valid entries to the shared task.\nThe submissions were then manually evaluated for Correctness (grammatical, appropriately labeled and meet the FEVER\nannotation guidelines requirements).", "### Supported Tasks and Leaderboards\n\n\nThe task is verification of textual claims against textual sources.\n\n\nWhen compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the\npassage to verify each claim is given, and in recent years it typically consists a single sentence, while in\nverification systems it is retrieved from a large set of documents in order to form the evidence.", "### Languages\n\n\nThe dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### v1.0\n\n\n* Size of downloaded dataset files: 44.86 MB\n* Size of the generated dataset: 40.05 MB\n* Total amount of disk used: 84.89 MB\n\n\nAn example of 'train' looks as follows.", "#### v2.0\n\n\n* Size of downloaded dataset files: 0.39 MB\n* Size of the generated dataset: 0.30 MB\n* Total amount of disk used: 0.70 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_pages\n\n\n* Size of downloaded dataset files: 1.71 GB\n* Size of the generated dataset: 7.25 GB\n* Total amount of disk used: 8.97 GB\n\n\nAn example of 'wikipedia\\_pages' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### v1.0\n\n\n* 'id': a 'int32' feature.\n* 'label': a 'string' feature.\n* 'claim': a 'string' feature.\n* 'evidence\\_annotation\\_id': a 'int32' feature.\n* 'evidence\\_id': a 'int32' feature.\n* 'evidence\\_wiki\\_url': a 'string' feature.\n* 'evidence\\_sentence\\_id': a 'int32' feature.", "#### v2.0\n\n\n* 'id': a 'int32' feature.\n* 'label': a 'string' feature.\n* 'claim': a 'string' feature.\n* 'evidence\\_annotation\\_id': a 'int32' feature.\n* 'evidence\\_id': a 'int32' feature.\n* 'evidence\\_wiki\\_url': a 
'string' feature.\n* 'evidence\\_sentence\\_id': a 'int32' feature.", "#### wiki\\_pages\n\n\n* 'id': a 'string' feature.\n* 'text': a 'string' feature.\n* 'lines': a 'string' feature.", "### Data Splits", "#### v1.0", "#### v2.0", "#### wiki\\_pages\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nFEVER license:\n\n\nIf you use \"FEVER Dataset\", please cite:\n\n\nIf you use \"FEVER 2.0 Adversarial Attacks Dataset\", please cite:", "### Contributions\n\n\nThanks to @thomwolf, @lhoestq,\n@mariamabarham, @lewtun,\n@albertvillanova for adding this dataset." ]
12d1071d9cb5b7526c81a2784be3716acf8c6c00
# Dataset Card for few_rel

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [GitHub Page](https://thunlp.github.io/)
- **Repository:** [GitHub](https://github.com/thunlp/FewRel)
- **Paper:** [FewRel](https://arxiv.org/abs/1810.10147), [FewRel 2.0](https://arxiv.org/abs/1910.07124)
- **Leaderboard:** [GitHub Leaderboard](https://thunlp.github.io/fewrel.html)
- **Point of Contact:** [Needs More Information]

### Dataset Summary

FewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances across different domains.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

The dataset contains English text, as used by writers on Wikipedia, and crowdsourced English annotations.
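As a concrete illustration of how the `head`/`tail` token-index fields described under Data Fields below line up with `tokens`, here is a minimal sketch. The instance is the one shown under Data Instances; the helper function is illustrative, not part of the dataset tooling.

```python
# Sample instance copied from the Data Instances section below.
instance = {
    "tokens": ["Merpati", "flight", "106", "departed", "Jakarta", "(", "CGK", ")",
               "on", "a", "domestic", "flight", "to", "Tanjung", "Pandan",
               "(", "TJQ", ")", "."],
    "head": {"text": "tjq", "type": "Q1331049", "indices": [[16]]},
    "tail": {"text": "tanjung pandan", "type": "Q3056359", "indices": [[13, 14]]},
}

def mention_texts(entity, tokens):
    # Each inner list of `indices` is one mention span; join its tokens.
    return [" ".join(tokens[i] for i in span) for span in entity["indices"]]

print(mention_texts(instance["head"], instance["tokens"]))  # ['TJQ']
print(mention_texts(instance["tail"], instance["tokens"]))  # ['Tanjung Pandan']
```

Note that the `text` field stores a lowercased form of the mention, while the `indices` recover the surface form from `tokens`.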
## Dataset Structure ### Data Instances An instance from `train_wiki` split: ``` {'head': {'indices': [[16]], 'text': 'tjq', 'type': 'Q1331049'}, 'names': ['place served by transport hub', 'territorial entity or entities served by this transport hub (airport, train station, etc.)'], 'relation': 'P931', 'tail': {'indices': [[13, 14]], 'text': 'tanjung pandan', 'type': 'Q3056359'}, 'tokens': ['Merpati', 'flight', '106', 'departed', 'Jakarta', '(', 'CGK', ')', 'on', 'a', 'domestic', 'flight', 'to', 'Tanjung', 'Pandan', '(', 'TJQ', ')', '.']} ``` ### Data Fields For `default`: - `relation`: a `string` feature containing PID of the relation. - `tokens`: a `list` of `string` features containing tokens for the text. - `head`: a dictionary containing: - `text`: a `string` feature representing the head entity. - `type`: a `string` feature representing the type of the head entity. - `indices`: a `list` containing `list` of token indices. - `tail`: a dictionary containing: - `text`: a `string` feature representing the tail entity. - `type`: a `string` feature representing the type of the tail entity. - `indices`: a `list` containing `list` of token indices. - `names`: a `list` of `string` features containing relation names. For `pubmed_unsupervised` split, this is set to a `list` with an empty `string`. For `val_semeval` and `val_pubmed` split, this is set to a `list` with the `string` from the `relation` field. ### Data Splits `train_wiki`: 44800 `val_nyt`: 2500 `val_pubmed`: 1000 `val_semeval`: 8851 `val_wiki`: 11200 `pubmed_unsupervised`: 2500 ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators For FewRel: Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong For FewRel 2.0: Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie ### Licensing Information ``` MIT License Copyright (c) 2018 THUNLP Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
``` ### Citation Information ``` @inproceedings{han-etal-2018-fewrel, title = "{F}ew{R}el: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation", author = "Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1514", doi = "10.18653/v1/D18-1514", pages = "4803--4809" } ``` ``` @inproceedings{gao-etal-2019-fewrel, title = "{F}ew{R}el 2.0: Towards More Challenging Few-Shot Relation Classification", author = "Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1649", doi = "10.18653/v1/D19-1649", pages = "6251--6256" } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
few_rel
[ "task_categories:other", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:n<1K", "source_datasets:original", "language:en", "license:mit", "relation-extraction", "arxiv:1810.10147", "arxiv:1910.07124", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "n<1K"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "fewrel", "pretty_name": "Few-Shot Relation Classification Dataset", "config_names": ["default", "pid2name"], "tags": ["relation-extraction"], "dataset_info": [{"config_name": "default", "features": [{"name": "relation", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "head", "struct": [{"name": "text", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "indices", "sequence": {"sequence": "int64"}}]}, {"name": "tail", "struct": [{"name": "text", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "indices", "sequence": {"sequence": "int64"}}]}, {"name": "names", "sequence": "string"}], "splits": [{"name": "train_wiki", "num_bytes": 19923155, "num_examples": 44800}, {"name": "val_nyt", "num_bytes": 1385642, "num_examples": 2500}, {"name": "val_pubmed", "num_bytes": 488502, "num_examples": 1000}, {"name": "val_semeval", "num_bytes": 2646249, "num_examples": 8851}, {"name": "val_wiki", "num_bytes": 5147348, "num_examples": 11200}, {"name": "pubmed_unsupervised", "num_bytes": 1117703, "num_examples": 2500}], "download_size": 22674323, "dataset_size": 30708599}, {"config_name": "pid2name", "features": [{"name": "relation", "dtype": "string"}, {"name": "names", "sequence": "string"}], "splits": [{"name": "pid2name", "num_bytes": 81607, "num_examples": 744}], "download_size": 22674323, "dataset_size": 81607}]}
2024-01-18T11:03:39+00:00
[ "1810.10147", "1910.07124" ]
[ "en" ]
TAGS #task_categories-other #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-n<1K #source_datasets-original #language-English #license-mit #relation-extraction #arxiv-1810.10147 #arxiv-1910.07124 #region-us
# Dataset Card for few_rel ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: GitHub Page - Repository: GitHub - Paper: FewRel, FewRel 2.0 - Leaderboard: GitHub Leaderboard - Point of Contact: ### Dataset Summary FewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances cross different domains. ### Supported Tasks and Leaderboards ### Languages The dataset contaings English text, as used by writers on Wikipedia, and crowdsourced English annotations. ## Dataset Structure ### Data Instances An instance from 'train_wiki' split: ### Data Fields For 'default': - 'relation': a 'string' feature containing PID of the relation. - 'tokens': a 'list' of 'string' features containing tokens for the text. - 'head': a dictionary containing: - 'text': a 'string' feature representing the head entity. - 'type': a 'string' feature representing the type of the head entity. - 'indices': a 'list' containing 'list' of token indices. - 'tail': a dictionary containing: - 'text': a 'string' feature representing the tail entity. - 'type': a 'string' feature representing the type of the tail entity. - 'indices': a 'list' containing 'list' of token indices. - 'names': a 'list' of 'string' features containing relation names. For 'pubmed_unsupervised' split, this is set to a 'list' with an empty 'string'. For 'val_semeval' and 'val_pubmed' split, this is set to a 'list' with the 'string' from the 'relation' field. 
### Data Splits 'train_wiki': 44800 'val_nyt': 2500 'val_pubmed': 1000 'val_semeval': 8851 'val_wiki': 11200 'pubmed_unsupervised': 2500 ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators For FewRel: Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong For FewRel 2.0: Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie ### Licensing Information ### Contributions Thanks to @gchhablani for adding this dataset.
[ "# Dataset Card for few_rel", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: GitHub Page\n- Repository: GitHub\n- Paper: FewRel, FewRel 2.0\n- Leaderboard: GitHub Leaderboard\n- Point of Contact:", "### Dataset Summary\n\nFewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances cross different domains.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset contaings English text, as used by writers on Wikipedia, and crowdsourced English annotations.", "## Dataset Structure", "### Data Instances\n\nAn instance from 'train_wiki' split:", "### Data Fields\n\nFor 'default':\n\n- 'relation': a 'string' feature containing PID of the relation.\n- 'tokens': a 'list' of 'string' features containing tokens for the text.\n- 'head': a dictionary containing:\n - 'text': a 'string' feature representing the head entity.\n - 'type': a 'string' feature representing the type of the head entity.\n - 'indices': a 'list' containing 'list' of token indices.\n\n- 'tail': a dictionary containing:\n - 'text': a 'string' feature representing the tail entity.\n - 'type': a 'string' feature representing the type of the tail entity.\n - 'indices': a 'list' containing 'list' of token indices.\n- 'names': a 'list' of 'string' features containing relation names. For 'pubmed_unsupervised' split, this is set to a 'list' with an empty 'string'. 
For 'val_semeval' and 'val_pubmed' split, this is set to a 'list' with the 'string' from the 'relation' field.", "### Data Splits\n\n'train_wiki': 44800\n'val_nyt': 2500\n'val_pubmed': 1000\n'val_semeval': 8851\n'val_wiki': 11200\n'pubmed_unsupervised': 2500", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nFor FewRel:\n\nHan, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong\n\nFor FewRel 2.0:\n\nGao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie", "### Licensing Information", "### Contributions\n\nThanks to @gchhablani for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-n<1K #source_datasets-original #language-English #license-mit #relation-extraction #arxiv-1810.10147 #arxiv-1910.07124 #region-us \n", "# Dataset Card for few_rel", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: GitHub Page\n- Repository: GitHub\n- Paper: FewRel, FewRel 2.0\n- Leaderboard: GitHub Leaderboard\n- Point of Contact:", "### Dataset Summary\n\nFewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances cross different domains.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset contaings English text, as used by writers on Wikipedia, and crowdsourced English annotations.", "## Dataset Structure", "### Data Instances\n\nAn instance from 'train_wiki' split:", "### Data Fields\n\nFor 'default':\n\n- 'relation': a 'string' feature containing PID of the relation.\n- 'tokens': a 'list' of 'string' features containing tokens for the text.\n- 'head': a dictionary containing:\n - 'text': a 'string' feature representing the head entity.\n - 'type': a 'string' feature representing the type of the head entity.\n - 'indices': a 'list' containing 'list' of token indices.\n\n- 'tail': a dictionary containing:\n - 'text': a 'string' 
feature representing the tail entity.\n - 'type': a 'string' feature representing the type of the tail entity.\n - 'indices': a 'list' containing 'list' of token indices.\n- 'names': a 'list' of 'string' features containing relation names. For 'pubmed_unsupervised' split, this is set to a 'list' with an empty 'string'. For 'val_semeval' and 'val_pubmed' split, this is set to a 'list' with the 'string' from the 'relation' field.", "### Data Splits\n\n'train_wiki': 44800\n'val_nyt': 2500\n'val_pubmed': 1000\n'val_semeval': 8851\n'val_wiki': 11200\n'pubmed_unsupervised': 2500", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nFor FewRel:\n\nHan, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong\n\nFor FewRel 2.0:\n\nGao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie", "### Licensing Information", "### Contributions\n\nThanks to @gchhablani for adding this dataset." ]
1484d06fe7af23030c7c977b12556108d1f67039
# Dataset Card for financial_phrasebank

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news) [ResearchGate](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1307.5336)
- **Leaderboard:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news/code) [PapersWithCode](https://paperswithcode.com/sota/sentiment-analysis-on-financial-phrasebank)
- **Point of Contact:** [Pekka Malo](mailto:[email protected]) [Ankur Sinha](mailto:[email protected])

### Dataset Summary

Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English-language financial news categorised by sentiment. The dataset is divided by the agreement rate of 5-8 annotators.
### Supported Tasks and Leaderboards

Sentiment Classification

### Languages

English

## Dataset Structure

### Data Instances

```
{
  "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
  "label": "negative"
}
```

### Data Fields

- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive', 'negative' or 'neutral'

### Data Splits

There's no train/validation/test split. However, the dataset is available in four possible configurations depending on the percentage of annotator agreement:

- `sentences_50agree`: number of instances with >=50% annotator agreement: 4846
- `sentences_66agree`: number of instances with >=66% annotator agreement: 4217
- `sentences_75agree`: number of instances with >=75% annotator agreement: 3453
- `sentences_allagree`: number of instances with 100% annotator agreement: 2264

## Dataset Creation

### Curation Rationale

The key arguments for the low utilization of statistical techniques in financial sentiment analysis have been the difficulty of implementation for practical applications and the lack of high quality training data for building such models. Especially in the case of finance and economic texts, annotated collections are a scarce resource and many are reserved for proprietary use only. To resolve the missing training data problem, we present a collection of ∼ 5000 sentences to establish human-annotated standards for benchmarking alternative modeling techniques.

The objective of the phrase level annotation task was to classify each example sentence into a positive, negative or neutral category by considering only the information explicitly available in the given sentence. Since the study is focused only on financial and economic domains, the annotators were asked to consider the sentences from the viewpoint of an investor only; i.e.
whether the news may have positive, negative or neutral influence on the stock price. As a result, sentences which have a sentiment that is not relevant from an economic or financial perspective are considered neutral. ### Source Data #### Initial Data Collection and Normalization The corpus used in this paper is made out of English news on all listed companies in OMX Helsinki. The news has been downloaded from the LexisNexis database using an automated web scraper. Out of this news database, a random subset of 10,000 articles was selected to obtain good coverage across small and large companies, companies in different industries, as well as different news sources. Following the approach taken by Maks and Vossen (2010), we excluded all sentences which did not contain any of the lexicon entities. This reduced the overall sample to 53,400 sentences, where each has at least one or more recognized lexicon entity. The sentences were then classified according to the types of entity sequences detected. Finally, a random sample of ∼5000 sentences was chosen to represent the overall news database. #### Who are the source language producers? The source data was written by various financial journalists. ### Annotations #### Annotation process This release of the financial phrase bank covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge on financial markets. Given the large number of overlapping annotations (5 to 8 annotations per sentence), there are several ways to define a majority vote based gold standard. To provide an objective comparison, we have formed 4 alternative reference datasets based on the strength of majority agreement: #### Who are the annotators? Three of the annotators were researchers and the remaining 13 annotators were master's students at Aalto University School of Business with majors primarily in finance, accounting, and economics. 
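The agreement-based reference sets described above can be sketched as a simple majority-vote bucketing. This is a hypothetical illustration of the idea only; the exact thresholding and tie-breaking used to build the released configurations may differ.

```python
from collections import Counter

# Agreement cutoffs matching the four configuration names in this card.
AGREEMENT_CONFIGS = [
    (0.50, "sentences_50agree"),
    (0.66, "sentences_66agree"),
    (0.75, "sentences_75agree"),
    (1.00, "sentences_allagree"),
]

def majority_buckets(annotations):
    # Majority label and its share among the 5-8 overlapping annotations,
    # plus the configurations the sentence would qualify for.
    label, votes = Counter(annotations).most_common(1)[0]
    rate = votes / len(annotations)
    configs = [name for cutoff, name in AGREEMENT_CONFIGS if rate >= cutoff]
    return label, rate, configs

label, rate, configs = majority_buckets(["positive"] * 6 + ["neutral"] * 2)
print(label, rate, configs)
# positive 0.75 ['sentences_50agree', 'sentences_66agree', 'sentences_75agree']
```

A sentence with unanimous annotations would additionally fall into `sentences_allagree`, which is why the subsets are nested and shrink as the agreement threshold rises.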
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases All annotators were from the same institution and so interannotator agreement should be understood with this taken into account. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/. If you are interested in commercial use of the data, please contact the following authors for an appropriate license: - [Pekka Malo](mailto:[email protected]) - [Ankur Sinha](mailto:[email protected]) ### Citation Information ``` @article{Malo2014GoodDO, title={Good debt or bad debt: Detecting semantic orientations in economic texts}, author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. Takala}, journal={Journal of the Association for Information Science and Technology}, year={2014}, volume={65} } ``` ### Contributions Thanks to [@frankier](https://github.com/frankier) for adding this dataset.
financial_phrasebank
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-3.0", "finance", "arxiv:1307.5336", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "pretty_name": "FinancialPhrasebank", "dataset_info": [{"config_name": "sentences_allagree", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 303371, "num_examples": 2264}], "download_size": 681890, "dataset_size": 303371}, {"config_name": "sentences_75agree", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 472703, "num_examples": 3453}], "download_size": 681890, "dataset_size": 472703}, {"config_name": "sentences_66agree", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 587152, "num_examples": 4217}], "download_size": 681890, "dataset_size": 587152}, {"config_name": "sentences_50agree", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 679240, "num_examples": 4846}], "download_size": 681890, "dataset_size": 679240}], "tags": ["finance"]}
2024-01-18T11:03:40+00:00
[ "1307.5336" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-3.0 #finance #arxiv-1307.5336 #region-us
# Dataset Card for financial_phrasebank ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Kaggle ResearchGate - Repository: - Paper: Arxiv - Leaderboard: Kaggle PapersWithCode = - Point of Contact: Pekka Malo Ankur Sinha ### Dataset Summary Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English language financial news categorised by sentiment. The dataset is divided by agreement rate of 5-8 annotators. ### Supported Tasks and Leaderboards Sentiment Classification ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - sentence: a tokenized line from the dataset - label: a label corresponding to the class as a string: 'positive', 'negative' or 'neutral' ### Data Splits There's no train/validation/test split. However the dataset is available in four possible configurations depending on the percentage of agreement of annotators: 'sentences_50agree'; Number of instances with >=50% annotator agreement: 4846 'sentences_66agree': Number of instances with >=66% annotator agreement: 4217 'sentences_75agree': Number of instances with >=75% annotator agreement: 3453 'sentences_allagree': Number of instances with 100% annotator agreement: 2264 ## Dataset Creation ### Curation Rationale The key arguments for the low utilization of statistical techniques in financial sentiment analysis have been the difficulty of implementation for practical applications and the lack of high quality training data for building such models. 
Especially in the case of finance and economic texts, annotated collections are a scarce resource and many are reserved for proprietary use only. To resolve the missing training data problem, we present a collection of ∼ 5000 sentences to establish human-annotated standards for benchmarking alternative modeling techniques. The objective of the phrase level annotation task was to classify each example sentence into a positive, negative or neutral category by considering only the information explicitly available in the given sentence. Since the study is focused only on financial and economic domains, the annotators were asked to consider the sentences from the view point of an investor only; i.e. whether the news may have positive, negative or neutral influence on the stock price. As a result, sentences which have a sentiment that is not relevant from an economic or financial perspective are considered neutral. ### Source Data #### Initial Data Collection and Normalization The corpus used in this paper is made out of English news on all listed companies in OMX Helsinki. The news has been downloaded from the LexisNexis database using an automated web scraper. Out of this news database, a random subset of 10,000 articles was selected to obtain good coverage across small and large companies, companies in different industries, as well as different news sources. Following the approach taken by Maks and Vossen (2010), we excluded all sentences which did not contain any of the lexicon entities. This reduced the overall sample to 53,400 sentences, where each has at least one or more recognized lexicon entity. The sentences were then classified according to the types of entity sequences detected. Finally, a random sample of ∼5000 sentences was chosen to represent the overall news database. #### Who are the source language producers? The source data was written by various financial journalists. 
### Annotations #### Annotation process This release of the financial phrase bank covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge on financial markets. Given the large number of overlapping annotations (5 to 8 annotations per sentence), there are several ways to define a majority vote based gold standard. To provide an objective comparison, we have formed 4 alternative reference datasets based on the strength of majority agreement: #### Who are the annotators? Three of the annotators were researchers and the remaining 13 annotators were master's students at Aalto University School of Business with majors primarily in finance, accounting, and economics. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases All annotators were from the same institution and so interannotator agreement should be understood with this taken into account. ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit URL If you are interested in commercial use of the data, please contact the following authors for an appropriate license: - Pekka Malo - Ankur Sinha ### Contributions Thanks to @frankier for adding this dataset.
[ "# Dataset Card for financial_phrasebank", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Kaggle ResearchGate\n- Repository:\n- Paper: Arxiv\n- Leaderboard: Kaggle PapersWithCode =\n- Point of Contact: Pekka Malo Ankur Sinha", "### Dataset Summary\n\nPolar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English language financial news categorised by sentiment. The dataset is divided by agreement rate of 5-8 annotators.", "### Supported Tasks and Leaderboards\n\nSentiment Classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: a tokenized line from the dataset\n- label: a label corresponding to the class as a string: 'positive', 'negative' or 'neutral'", "### Data Splits\nThere's no train/validation/test split.\n\nHowever the dataset is available in four possible configurations depending on the percentage of agreement of annotators:\n\n'sentences_50agree'; Number of instances with >=50% annotator agreement: 4846 \n'sentences_66agree': Number of instances with >=66% annotator agreement: 4217\n'sentences_75agree': Number of instances with >=75% annotator agreement: 3453\n'sentences_allagree': Number of instances with 100% annotator agreement: 2264", "## Dataset Creation", "### Curation Rationale\n\nThe key arguments for the low utilization of statistical techniques in\nfinancial sentiment analysis have been the difficulty of 
implementation for\npractical applications and the lack of high quality training data for building\nsuch models. Especially in the case of finance and economic texts, annotated\ncollections are a scarce resource and many are reserved for proprietary use\nonly. To resolve the missing training data problem, we present a collection of\n∼ 5000 sentences to establish human-annotated standards for benchmarking\nalternative modeling techniques. \n\nThe objective of the phrase level annotation task was to classify each example\nsentence into a positive, negative or neutral category by considering only the\ninformation explicitly available in the given sentence. Since the study is\nfocused only on financial and economic domains, the annotators were asked to\nconsider the sentences from the view point of an investor only; i.e. whether\nthe news may have positive, negative or neutral influence on the stock price.\nAs a result, sentences which have a sentiment that is not relevant from an\neconomic or financial perspective are considered neutral.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe corpus used in this paper is made out of English news on all listed\ncompanies in OMX Helsinki. The news has been downloaded from the LexisNexis\ndatabase using an automated web scraper. Out of this news database, a random\nsubset of 10,000 articles was selected to obtain good coverage across small and\nlarge companies, companies in different industries, as well as different news\nsources. Following the approach taken by Maks and Vossen (2010), we excluded\nall sentences which did not contain any of the lexicon entities. This reduced\nthe overall sample to 53,400 sentences, where each has at least one or more\nrecognized lexicon entity. The sentences were then classified according to the\ntypes of entity sequences detected. 
Finally, a random sample of ∼5000 sentences\nwas chosen to represent the overall news database.", "#### Who are the source language producers?\n\nThe source data was written by various financial journalists.", "### Annotations", "#### Annotation process\n\nThis release of the financial phrase bank covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets.\n\nGiven the large number of overlapping annotations (5 to 8 annotations per\nsentence), there are several ways to define a majority vote based gold\nstandard. To provide an objective comparison, we have formed 4 alternative\nreference datasets based on the strength of majority agreement:", "#### Who are the annotators?\n\nThree of the annotators were researchers and the remaining 13 annotators were\nmaster's students at Aalto University School of Business with majors primarily\nin finance, accounting, and economics.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nAll annotators were from the same institution and so interannotator agreement\nshould be understood with this taken into account.", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit URL\n\nIf you are interested in commercial use of the data, please contact the following authors for an appropriate license:\n- Pekka Malo\n- Ankur Sinha", "### Contributions\n\nThanks to @frankier for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-3.0 #finance #arxiv-1307.5336 #region-us \n", "# Dataset Card for financial_phrasebank", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Kaggle ResearchGate\n- Repository:\n- Paper: Arxiv\n- Leaderboard: Kaggle PapersWithCode =\n- Point of Contact: Pekka Malo Ankur Sinha", "### Dataset Summary\n\nPolar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English language financial news categorised by sentiment. 
The dataset is divided by agreement rate of 5-8 annotators.", "### Supported Tasks and Leaderboards\n\nSentiment Classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: a tokenized line from the dataset\n- label: a label corresponding to the class as a string: 'positive', 'negative' or 'neutral'", "### Data Splits\nThere's no train/validation/test split.\n\nHowever the dataset is available in four possible configurations depending on the percentage of agreement of annotators:\n\n'sentences_50agree'; Number of instances with >=50% annotator agreement: 4846 \n'sentences_66agree': Number of instances with >=66% annotator agreement: 4217\n'sentences_75agree': Number of instances with >=75% annotator agreement: 3453\n'sentences_allagree': Number of instances with 100% annotator agreement: 2264", "## Dataset Creation", "### Curation Rationale\n\nThe key arguments for the low utilization of statistical techniques in\nfinancial sentiment analysis have been the difficulty of implementation for\npractical applications and the lack of high quality training data for building\nsuch models. Especially in the case of finance and economic texts, annotated\ncollections are a scarce resource and many are reserved for proprietary use\nonly. To resolve the missing training data problem, we present a collection of\n∼ 5000 sentences to establish human-annotated standards for benchmarking\nalternative modeling techniques. \n\nThe objective of the phrase level annotation task was to classify each example\nsentence into a positive, negative or neutral category by considering only the\ninformation explicitly available in the given sentence. Since the study is\nfocused only on financial and economic domains, the annotators were asked to\nconsider the sentences from the view point of an investor only; i.e. 
whether\nthe news may have positive, negative or neutral influence on the stock price.\nAs a result, sentences which have a sentiment that is not relevant from an\neconomic or financial perspective are considered neutral.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe corpus used in this paper is made out of English news on all listed\ncompanies in OMX Helsinki. The news has been downloaded from the LexisNexis\ndatabase using an automated web scraper. Out of this news database, a random\nsubset of 10,000 articles was selected to obtain good coverage across small and\nlarge companies, companies in different industries, as well as different news\nsources. Following the approach taken by Maks and Vossen (2010), we excluded\nall sentences which did not contain any of the lexicon entities. This reduced\nthe overall sample to 53,400 sentences, where each has at least one or more\nrecognized lexicon entity. The sentences were then classified according to the\ntypes of entity sequences detected. Finally, a random sample of ∼5000 sentences\nwas chosen to represent the overall news database.", "#### Who are the source language producers?\n\nThe source data was written by various financial journalists.", "### Annotations", "#### Annotation process\n\nThis release of the financial phrase bank covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets.\n\nGiven the large number of overlapping annotations (5 to 8 annotations per\nsentence), there are several ways to define a majority vote based gold\nstandard. 
To provide an objective comparison, we have formed 4 alternative\nreference datasets based on the strength of majority agreement:", "#### Who are the annotators?\n\nThree of the annotators were researchers and the remaining 13 annotators were\nmaster's students at Aalto University School of Business with majors primarily\nin finance, accounting, and economics.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nAll annotators were from the same institution and so interannotator agreement\nshould be understood with this taken into account.", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit URL\n\nIf you are interested in commercial use of the data, please contact the following authors for an appropriate license:\n- Pekka Malo\n- Ankur Sinha", "### Contributions\n\nThanks to @frankier for adding this dataset." ]
79786c111fb131eed688572b5e384773d0f7ae91
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/mpsilfve/finer-data) - **Repository:** [Github](https://github.com/mpsilfve/finer-data) - **Paper:** [Arxiv](https://arxiv.org/abs/1908.04212) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields Each row consists of the following fields: * `id`: The sentence id * `tokens`: An ordered list of tokens from the full text * `ner_tags`: Named entity recognition tags for each token * `nested_ner_tags`: Nested named entity recognition tags for each token Note that by design, the length of `tokens`, `ner_tags`, and `nested_ner_tags` will always be identical. 
`ner_tags` and `nested_ner_tags` correspond to the list below: ``` [ "O", "B-DATE", "B-EVENT", "B-LOC", "B-ORG", "B-PER", "B-PRO", "I-DATE", "I-EVENT", "I-LOC", "I-ORG", "I-PER", "I-PRO" ] ``` IOB2 labeling scheme is used. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@stefan-it](https://github.com/stefan-it) for adding this dataset.
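Given the aligned `tokens` and `ner_tags` fields described above, the IOB2 tags can be decoded into entity spans. A minimal sketch (the `iob2_spans` helper is a hypothetical name, not part of the dataset):

```python
def iob2_spans(tags):
    # Decode an IOB2 tag sequence (e.g. ["B-ORG", "I-ORG", "O", ...])
    # into (entity_type, start, end) spans; `end` is exclusive.
    spans = []
    start, ent_type = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and ent_type != tag[2:]):
            # A "B-" tag, or an "I-" tag that does not continue the open
            # entity, starts a new span (and closes any open one).
            if ent_type is not None:
                spans.append((ent_type, start, i))
            start, ent_type = i, tag[2:]
        elif tag == "O" and ent_type is not None:
            spans.append((ent_type, start, i))
            start, ent_type = None, None
    if ent_type is not None:
        spans.append((ent_type, start, len(tags)))
    return spans
```

The spans index into the parallel `tokens` list, e.g. `[(t, tokens[s:e]) for t, s, e in iob2_spans(ner_tags)]`; the same decoding applies to `nested_ner_tags`.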
finer
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fi", "license:mit", "arxiv:1908.04212", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["fi"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "finer", "pretty_name": "Finnish News Corpus for Named Entity Recognition", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-DATE", "2": "B-EVENT", "3": "B-LOC", "4": "B-ORG", "5": "B-PER", "6": "B-PRO", "7": "I-DATE", "8": "I-EVENT", "9": "I-LOC", "10": "I-ORG", "11": "I-PER", "12": "I-PRO"}}}}, {"name": "nested_ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-DATE", "2": "B-EVENT", "3": "B-LOC", "4": "B-ORG", "5": "B-PER", "6": "B-PRO", "7": "I-DATE", "8": "I-EVENT", "9": "I-LOC", "10": "I-ORG", "11": "I-PER", "12": "I-PRO"}}}}], "config_name": "finer", "splits": [{"name": "train", "num_bytes": 5159550, "num_examples": 13497}, {"name": "validation", "num_bytes": 387494, "num_examples": 986}, {"name": "test", "num_bytes": 1327354, "num_examples": 3512}, {"name": "test_wikipedia", "num_bytes": 1404397, "num_examples": 3360}], "download_size": 3733127, "dataset_size": 8278795}}
2024-01-18T11:03:41+00:00
[ "1908.04212" ]
[ "fi" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Finnish #license-mit #arxiv-1908.04212 #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Github - Repository: Github - Paper: Arxiv - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields Each row consists of the following fields: * 'id': The sentence id * 'tokens': An ordered list of tokens from the full text * 'ner_tags': Named entity recognition tags for each token * 'nested_ner_tags': Nested named entity recognition tags for each token Note that by design, the length of 'tokens', 'ner_tags', and 'nested_ner_tags' will always be identical. 'ner_tags' and 'nested_ner_tags' correspond to the list below: IOB2 labeling scheme is used. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @stefan-it for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nEach row consists of the following fields:\n\n* 'id': The sentence id\n* 'tokens': An ordered list of tokens from the full text\n* 'ner_tags': Named entity recognition tags for each token\n* 'nested_ner_tags': Nested named entity recognition tags for each token\n\nNote that by design, the length of 'tokens', 'ner_tags', and 'nested_ner_tags' will always be identical.\n\n'ner_tags' and 'nested_ner_tags' correspond to the list below:\n\n\n\nIOB2 labeling scheme is used.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @stefan-it for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Finnish #license-mit #arxiv-1908.04212 #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nEach row consists of the following fields:\n\n* 'id': The sentence id\n* 'tokens': An ordered list of tokens from the full text\n* 'ner_tags': Named entity recognition tags for each token\n* 'nested_ner_tags': Nested named entity recognition tags for each token\n\nNote that by design, the length of 'tokens', 'ner_tags', and 'nested_ner_tags' will always be identical.\n\n'ner_tags' and 'nested_ner_tags' correspond to the list below:\n\n\n\nIOB2 labeling scheme is used.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### 
Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @stefan-it for adding this dataset." ]
0647750849a0a0b9c4c64394a66ee82e1f45a31f
# Dataset Card for "flores" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/facebookresearch/flores/](https://github.com/facebookresearch/flores/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.08 MB - **Size of the generated dataset:** 3.87 MB - **Total amount of disk used:** 6.95 MB ### Dataset Summary Evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. 
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### neen - **Size of downloaded dataset files:** 1.54 MB - **Size of the generated dataset:** 1.86 MB - **Total amount of disk used:** 3.40 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"en\": \"This is the wrong translation!\", \"ne\": \"यस वाहेक आगम पूजा, तारा पूजा, व्रत आदि पनि घरभित्र र वाहिर दुवै स्थानमा गरेको पा..." } ``` #### sien - **Size of downloaded dataset files:** 1.54 MB - **Size of the generated dataset:** 2.01 MB - **Total amount of disk used:** 3.57 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"en\": \"This is the wrong translation!\", \"si\": \"එවැනි ආවරණයක් ලබාදීමට රක්ෂණ සපයන්නෙකු කැමති වුවත් ඒ සාමාන් යයෙන් බොහෝ රටවල පොදු ..." } ``` ### Data Fields The data fields are the same among all splits. #### neen - `translation`: a multilingual `string` variable, with possible languages including `ne`, `en`. #### sien - `translation`: a multilingual `string` variable, with possible languages including `si`, `en`. ### Data Splits |name|validation|test| |----|---------:|---:| |neen| 2560|2836| |sien| 2899|2767| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @misc{guzmn2019new, title={Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English}, author={Francisco Guzman and Peng-Jen Chen and Myle Ott and Juan Pino and Guillaume Lample and Philipp Koehn and Vishrav Chaudhary and Marc'Aurelio Ranzato}, year={2019}, eprint={1902.01382}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), 
[@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
flores
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia", "source_datasets:extended|opus_gnome", "source_datasets:extended|opus_ubuntu", "source_datasets:extended|open_subtitles", "source_datasets:extended|paracrawl", "source_datasets:extended|bible_para", "source_datasets:extended|kde4", "source_datasets:extended|other-global-voices", "source_datasets:extended|other-common-crawl", "language:en", "language:ne", "language:si", "license:cc-by-4.0", "arxiv:1902.01382", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "ne", "si"], "license": ["cc-by-4.0"], "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|wikipedia", "extended|opus_gnome", "extended|opus_ubuntu", "extended|open_subtitles", "extended|paracrawl", "extended|bible_para", "extended|kde4", "extended|other-global-voices", "extended|other-common-crawl"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "flores", "pretty_name": "Flores", "config_names": ["neen", "sien"], "dataset_info": [{"config_name": "neen", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ne", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 849380, "num_examples": 2560}, {"name": "test", "num_bytes": 999063, "num_examples": 2836}], "download_size": 1542781, "dataset_size": 1848443}, {"config_name": "sien", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["si", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 1031158, "num_examples": 2899}, {"name": "test", "num_bytes": 983563, "num_examples": 2767}], "download_size": 1542781, "dataset_size": 2014721}]}
2024-01-18T11:03:43+00:00
[ "1902.01382" ]
[ "en", "ne", "si" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-1K<n<10K #source_datasets-extended|wikipedia #source_datasets-extended|opus_gnome #source_datasets-extended|opus_ubuntu #source_datasets-extended|open_subtitles #source_datasets-extended|paracrawl #source_datasets-extended|bible_para #source_datasets-extended|kde4 #source_datasets-extended|other-global-voices #source_datasets-extended|other-common-crawl #language-English #language-Nepali (macrolanguage) #language-Sinhala #license-cc-by-4.0 #arxiv-1902.01382 #region-us
Dataset Card for "flores" ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 3.08 MB * Size of the generated dataset: 3.87 MB * Total amount of disk used: 6.95 MB ### Dataset Summary Evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### neen * Size of downloaded dataset files: 1.54 MB * Size of the generated dataset: 1.86 MB * Total amount of disk used: 3.40 MB An example of 'validation' looks as follows. #### sien * Size of downloaded dataset files: 1.54 MB * Size of the generated dataset: 2.01 MB * Total amount of disk used: 3.57 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### neen * 'translation': a multilingual 'string' variable, with possible languages including 'ne', 'en'. #### sien * 'translation': a multilingual 'string' variable, with possible languages including 'si', 'en'. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nEvaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### neen\n\n\n* Size of downloaded dataset files: 1.54 MB\n* Size of the generated dataset: 1.86 MB\n* Total amount of disk used: 3.40 MB\n\n\nAn example of 'validation' looks as follows.", "#### sien\n\n\n* Size of downloaded dataset files: 1.54 MB\n* Size of the generated dataset: 2.01 MB\n* Total amount of disk used: 3.57 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### neen\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'ne', 'en'.", "#### sien\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'si', 'en'.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-1K<n<10K #source_datasets-extended|wikipedia #source_datasets-extended|opus_gnome #source_datasets-extended|opus_ubuntu #source_datasets-extended|open_subtitles #source_datasets-extended|paracrawl #source_datasets-extended|bible_para #source_datasets-extended|kde4 #source_datasets-extended|other-global-voices #source_datasets-extended|other-common-crawl #language-English #language-Nepali (macrolanguage) #language-Sinhala #license-cc-by-4.0 #arxiv-1902.01382 #region-us \n", "### Dataset Summary\n\n\nEvaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### neen\n\n\n* Size of downloaded dataset files: 1.54 MB\n* Size of the generated dataset: 1.86 MB\n* Total amount of disk used: 3.40 MB\n\n\nAn example of 'validation' looks as follows.", "#### sien\n\n\n* Size of downloaded dataset files: 1.54 MB\n* Size of the generated dataset: 2.01 MB\n* Total amount of disk used: 3.57 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### neen\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'ne', 'en'.", "#### sien\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'si', 'en'.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", 
"### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
0c54244659ca454ded60ec0ba3f4ff22027c3e68
# Dataset Card for FLUE ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepage](https://github.com/getalp/Flaubert/tree/master/flue) - **Repository:** [github](https://github.com/getalp/Flaubert/tree/master/flue) - **Paper:** [paper](https://arxiv.org/abs/1912.05372) - **Leaderboard:** [leaderboard](https://github.com/getalp/Flaubert/tree/master/flue/leaderboard) - **Point of Contact:** [Hang Le](mailto:[email protected]) ### Dataset Summary FLUE is an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. The tasks and data are obtained from existing works; please refer to our FlauBERT paper for a complete list of references. 
### Supported Tasks and Leaderboards The supported tasks are: Text Classification, Paraphrasing, Natural Language Inference, Constituency Parsing, Dependency Parsing, Verb Sense Disambiguation and Noun Sense Disambiguation ### Languages The datasets are all in French. ## Dataset Structure ### Text Classification (CLS) This is a binary classification task. It consists in classifying Amazon reviews for three product categories: books, DVD, and music. Each sample contains a review text and the associated rating from 1 to 5 stars. Reviews rated above 3 are labeled as positive, and those rated less than 3 are labeled as negative. #### Data Instances An instance looks like: ``` { 'idx': 1, 'label': 0, 'text': 'Bilan plus que mitigé pour cet album fourre-tout qui mêle quelques bonnes idées (les parodies d\'oeuvres d\'art) et des scènetes qui ne font que faire écho paresseusement aux précédents albums. Uderzo n\'a pas pris de risque pour cet album, mais, au vu des précédents, on se dit que c\'est peut-être un moindre mal ... L\'album semble n\'avoir été fait que pour permettre à Uderzo de rappeler avec une insistance suspecte qu\'il est bien l\'un des créateurs d\'Astérix (comme lorsqu\'il se met en scène lui même dans la BD) et de traiter ses critiques d\' "imbéciles" dans une préface un rien aigrie signée "Astérix". Préface dans laquelle Uderzo feint de croire que ce qu\'on lui reproche est d\'avoir fait survivre Asterix à la disparition de Goscinny (reproche naturellement démenti par la fidélité des lecteurs - démonstration imparable !). On aurait tant aimé qu\'Uderzo accepte de s\'entourer d\'un scénariste compétent et respectueux de l\'esprit Goscinnien (cela doit se trouver !) et nous propose des albums plus ambitieux ...' } ``` #### Data Fields The dataset is composed of two fields: - **text**: the field that represents the text to classify. - **label**: the sentiment represented by the text, here **positive** or **negative**. 
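The star-rating rule described above is easy to state in code; a small sketch (the function name is ours, not part of the dataset):

```python
def rating_to_label(stars):
    """Map a 1-5 star Amazon review rating to the CLS binary label.

    Reviews rated above 3 count as positive and those below 3 as
    negative; 3-star reviews fall in neither class.
    """
    if stars > 3:
        return "positive"
    if stars < 3:
        return "negative"
    return None  # neutral 3-star reviews are excluded from the task

print([rating_to_label(s) for s in (1, 2, 3, 4, 5)])
# → ['negative', 'negative', None, 'positive', 'positive']
```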
#### Data Splits The train and test sets are balanced, including around 1k positive and 1k negative reviews for a total of 2k reviews in each dataset. We take the French portion to create the binary text classification task in FLUE and report the accuracy on the test set. ### Paraphrasing (PAWS-X) The task consists in identifying whether the two sentences in a pair are semantically equivalent or not. #### Data Instances An instance looks like: ``` { 'idx': 1, 'label': 0, 'sentence1': "À Paris, en octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, lui demandant un passeport pour retourner en Angleterre en passant par l'Écosse.", 'sentence2': "En octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, à Paris, et lui demanda un passeport pour retourner en Écosse par l'Angleterre." } ``` #### Data Fields The dataset is composed of three fields: - **sentence1**: The first sentence of an example - **sentence2**: The second sentence of an example - **label**: **0** if the two sentences are not paraphrasing each other, **1** otherwise. #### Data Splits The train set includes 49.4k examples, the dev and test sets each comprise nearly 2k examples. We take the related datasets for French to perform the paraphrasing task and report the accuracy on the test set. ### Natural Language Inference (XNLI) The Natural Language Inference (NLI) task, also known as recognizing textual entailment (RTE), is to determine whether a premise entails, contradicts or neither entails nor contradicts a hypothesis. We take the French part of the XNLI corpus to form the development and test sets for the NLI task in FLUE. #### Data Instances An instance looks like: ``` { 'idx': 1, 'label': 2, 'hypo': 'Le produit et la géographie sont ce qui fait travailler la crème de la crème .', 'premise': "L' écrémage conceptuel de la crème a deux dimensions fondamentales : le produit et la géographie ." 
} ``` #### Data Fields The dataset is composed of three fields: - **premise**: Premise sentence. - **hypo**: Hypothesis sentence. - **label**: **contradiction** if the two sentences are contradictory, **entailment** if the premise entails the hypothesis, **neutral** if they neither entail nor contradict each other. #### Data Splits The train set includes 392.7k examples, the dev and test sets comprise 2.5k and 5k examples respectively. We take the related datasets for French to perform the NLI task and report the accuracy on the test set. ### Word Sense Disambiguation for Verbs (WSD-V) The FrenchSemEval (FSE) dataset aims to evaluate the Word Sense Disambiguation for Verbs task for the French language. The data are extracted from Wiktionary. #### Data Instances An instance looks like: ``` { 'idx': 'd000.s001', 'sentence': ['"', 'Ce', 'ne', 'fut', 'pas', 'une', 'révolution', '2.0', ',', 'ce', 'fut', 'une', 'révolution', 'de', 'rue', '.'], 'fine_pos_tags': [27, 26, 18, 13, 18, 0, 6, 22, 27, 26, 13, 0, 6, 4, 6, 27], 'lemmas': ['"', 'ce', 'ne', 'être', 'pas', 'un', 'révolution', '2.0', ',', 'ce', 'être', 'un', 'révolution', 'de', 'rue', '.'], 'pos_tags': [13, 11, 14, 0, 14, 9, 15, 4, 13, 11, 0, 9, 15, 7, 15, 13], 'disambiguate_labels': ['__ws_1_2.0__adj__1'], 'disambiguate_tokens_ids': [7], } ``` #### Data Fields The dataset is composed of six fields: - **sentence**: The sentence to process, split into tokens. - **pos_tags**: The corresponding POS tags for each token. - **lemmas**: The corresponding lemma for each token. - **fine_pos_tags**: Finer (more specific) POS tags for each token. - **disambiguate_tokens_ids**: The ID of the token in the sentence to disambiguate. - **disambiguate_labels**: The label in the form of **sentenceID __ws_sentence-number_token__pos__number-of-time-the-token-appeared-across-all-the-sentences** (e.g. **d000.s404.t000 __ws_2_agir__verb__1**). #### Data Splits The train set includes 269821 examples, the test set includes 3121 examples. 
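Since `disambiguate_tokens_ids` indexes into `sentence`, pairing each target token with its sense label is a short exercise; a sketch built on the instance shown above (the helper name is ours):

```python
# Fields abridged from the WSD-V instance shown above.
example = {
    "sentence": ['"', 'Ce', 'ne', 'fut', 'pas', 'une', 'révolution', '2.0',
                 ',', 'ce', 'fut', 'une', 'révolution', 'de', 'rue', '.'],
    "disambiguate_tokens_ids": [7],
    "disambiguate_labels": ['__ws_1_2.0__adj__1'],
}

def labeled_targets(ex):
    """Pair every token flagged for disambiguation with its sense label."""
    return [(ex["sentence"][i], label)
            for i, label in zip(ex["disambiguate_tokens_ids"],
                                ex["disambiguate_labels"])]

print(labeled_targets(example))  # → [('2.0', '__ws_1_2.0__adj__1')]
```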
## Considerations for Using the Data ### Social Impact of Dataset The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. ## Additional Information ### Licensing Information The licenses are: - The licensing status of the data, especially the news source text, is unknown for CLS - *The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.* for PAWS-X - CC BY-NC 4.0 for XNLI - The licensing status of the data, especially the news source text, is unknown for Verb Sense Disambiguation ### Citation Information ``` @misc{le2019flaubert, title={FlauBERT: Unsupervised Language Model Pre-training for French}, author={Hang Le and Loïc Vial and Jibril Frej and Vincent Segonne and Maximin Coavoux and Benjamin Lecouteux and Alexandre Allauzen and Benoît Crabbé and Laurent Besacier and Didier Schwab}, year={2019}, eprint={1912.05372}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@jplu](https://github.com/jplu) for adding this dataset.
flue
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:semantic-similarity-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fr", "license:unknown", "Word Sense Disambiguation for Verbs", "arxiv:1912.05372", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["crowdsourced"], "language": ["fr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "semantic-similarity-classification", "sentiment-classification"], "pretty_name": "FLUE", "config_names": ["CLS", "PAWS-X", "WSD-V", "XNLI"], "tags": ["Word Sense Disambiguation for Verbs"], "dataset_info": [{"config_name": "CLS", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 3853279, "num_examples": 5997}, {"name": "test", "num_bytes": 3852344, "num_examples": 5999}], "download_size": 314687066, "dataset_size": 7705623}, {"config_name": "PAWS-X", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int32"}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 522013, "num_examples": 1988}, {"name": "test", "num_bytes": 526953, "num_examples": 2000}, {"name": "train", "num_bytes": 13096677, "num_examples": 49399}], "download_size": 30282057, "dataset_size": 14145643}, {"config_name": "XNLI", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypo", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "contradiction", "1": "entailment", "2": "neutral"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 520022, "num_examples": 2490}, {"name": "test", "num_bytes": 1048999, "num_examples": 5010}, {"name": "train", "num_bytes": 87373154, "num_examples": 392702}], "download_size": 483963712, "dataset_size": 88942175}, {"config_name": "WSD-V", "features": [{"name": "sentence", "sequence": "string"}, 
{"name": "pos_tags", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "fine_pos_tags", "sequence": "string"}, {"name": "disambiguate_tokens_ids", "sequence": "int32"}, {"name": "disambiguate_labels", "sequence": "string"}, {"name": "idx", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206869215, "num_examples": 269821}, {"name": "test", "num_bytes": 2722232, "num_examples": 3121}], "download_size": 38303600, "dataset_size": 209591447}]}
2024-01-18T11:03:45+00:00
[ "1912.05372" ]
[ "fr" ]
TAGS #task_categories-text-classification #task_ids-intent-classification #task_ids-semantic-similarity-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-French #license-unknown #Word Sense Disambiguation for Verbs #arxiv-1912.05372 #region-us
# Dataset Card for FLUE ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: homepage - Repository:github - Paper:paper - Leaderboard:leaderboard - Point of Contact:Hang Le ### Dataset Summary FLUE is an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. The tasks and data are obtained from existing works; please refer to our FlauBERT paper for a complete list of references. ### Supported Tasks and Leaderboards The supported tasks are: Text Classification, Paraphrasing, Natural Language Inference, Constituency Parsing, Dependency Parsing, Verb Sense Disambiguation and Noun Sense Disambiguation ### Languages The datasets are all in French. ## Dataset Structure ### Text Classification (CLS) This is a binary classification task. It consists in classifying Amazon reviews for three product categories: books, DVD, and music. Each sample contains a review text and the associated rating from 1 to 5 stars. Reviews rated above 3 are labeled as positive, and those rated less than 3 are labeled as negative. #### Data Instances An instance looks like: #### Data Fields The dataset is composed of two fields: - text: the field that represents the text to classify. - label: the sentiment represented by the text, here positive or negative. 
#### Data Splits The train and test sets are balanced, including around 1k positive and 1k negative reviews for a total of 2k reviews in each dataset. We take the French portion to create the binary text classification task in FLUE and report the accuracy on the test set. ### Paraphrasing (PAWS-X) The task consists in identifying whether the two sentences in a pair are semantically equivalent or not. #### Data Instances An instance looks like: #### Data Fields The dataset is composed of three fields: - sentence1: The first sentence of an example - sentence2: The second sentence of an example - label: 0 if the two sentences are not paraphrasing each other, 1 otherwise. #### Data Splits The train set includes 49.4k examples, the dev and test sets each comprise nearly 2k examples. We take the related datasets for French to perform the paraphrasing task and report the accuracy on the test set. ### Natural Language Inference (XNLI) The Natural Language Inference (NLI) task, also known as recognizing textual entailment (RTE), is to determine whether a premise entails, contradicts or neither entails nor contradicts a hypothesis. We take the French part of the XNLI corpus to form the development and test sets for the NLI task in FLUE. #### Data Instances An instance looks like: #### Data Fields The dataset is composed of three fields: - premise: Premise sentence. - hypo: Hypothesis sentence. - label: contradiction if the two sentences are contradictory, entailment if the premise entails the hypothesis, neutral if they neither entail nor contradict each other. #### Data Splits The train set includes 392.7k examples, the dev and test sets comprise 2.5k and 5k examples respectively. We take the related datasets for French to perform the NLI task and report the accuracy on the test set. ### Word Sense Disambiguation for Verbs (WSD-V) The FrenchSemEval (FSE) dataset aims to evaluate the Word Sense Disambiguation for Verbs task for the French language. The data are extracted from Wiktionary. 
#### Data Instances An instance looks like: #### Data Fields The dataset is composed of six fields: - sentence: The sentence to process, split into tokens. - pos_tags: The corresponding POS tags for each token. - lemmas: The corresponding lemma for each token. - fine_pos_tags: Finer (more specific) POS tags for each token. - disambiguate_tokens_ids: The ID of the token in the sentence to disambiguate. - disambiguate_labels: The label in the form of sentenceID __ws_sentence-number_token__pos__number-of-time-the-token-appeared-across-all-the-sentences (e.g. d000.s404.t000 __ws_2_agir__verb__1). #### Data Splits The train set includes 269821 examples, the test set includes 3121 examples. ## Considerations for Using the Data ### Social Impact of Dataset The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. ## Additional Information ### Licensing Information The licenses are: - The licensing status of the data, especially the news source text, is unknown for CLS - *The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.* for PAWS-X - CC BY-NC 4.0 for XNLI - The licensing status of the data, especially the news source text, is unknown for Verb Sense Disambiguation ### Contributions Thanks to @jplu for adding this dataset.
[ "# Dataset Card for FLUE", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: homepage\n- Repository:github\n- Paper:paper\n- Leaderboard:leaderboard\n- Point of Contact:Hang Le", "### Dataset Summary\n\nFLUE is an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. The tasks and data are obtained from existing works, please refer to our Flaubert paper for a complete list of references.", "### Supported Tasks and Leaderboards\n\nThe supported tasks are: Text Classification, Paraphrasing, Natural Language Inference, Constituency Parsing, Dependency Parsing, Verb Sense Disambiguation and Noun Sense Disambiguation", "### Languages\n\nThe datasets are all in French.", "## Dataset Structure", "### Text Classification (CLS)\n\nThis is a binary classification task. It consists in classifying Amazon reviews for three product categories: books, DVD, and music. Each sample contains a review text and the associated rating from 1 to 5 stars. 
Reviews rated above 3 are labeled as positive, and those rated less than 3 are labeled as negative.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of two fields:\n- text: the field that represents the text to classify.\n- label: the sentiment represented by the text, here positive or negative.", "#### Data Splits\n\nThe train and test sets are balanced, including around 1k positive and 1k negative reviews for a total of 2k reviews in each dataset. We take the French portion to create the binary text classification task in FLUE and report the accuracy on the test set.", "### Paraphrasing (PAWS-X)\n\nThe task consists in identifying whether the two sentences in a pair are semantically equivalent or not.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of three fields:\n- sentence1: The first sentence of an example\n- sentence2: The second sentence of an example\n- label: 0 if the two sentences are not paraphrasing each other, 1 otherwise.", "#### Data Splits\n\nThe train set includes 49.4k examples, the dev and test sets each comprise nearly 2k examples. We take the related datasets for French to perform the paraphrasing task and report the accuracy on the test set.", "### Natural Language Inference (XNLI)\n\nThe Natural Language Inference (NLI) task, also known as recognizing textual entailment (RTE), is to determine whether a premise entails, contradicts or neither entails nor contradicts a hypothesis. 
We take the French part of the XNLI corpus to form the development and test sets for the NLI task in FLUE.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of three fields:\n- premise: Premise sentence.\n- hypo: Hypothesis sentence.\n- label: contradiction if the two sentences are contradictory, entailment if the premise entails the hypothesis, neutral if they neither entail nor contradict each other.", "#### Data Splits\n\nThe train set includes 392.7k examples, the dev and test sets comprise 2.5k and 5k examples respectively. We take the related datasets for French to perform the NLI task and report the accuracy on the test set.", "### Word Sense Disambiguation for Verbs (WSD-V)\n\nThe FrenchSemEval (FSE) dataset aims to evaluate the Word Sense Disambiguation for Verbs task for the French language. The data are extracted from Wiktionary.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of six fields:\n- sentence: The sentence to process, split into tokens.\n- pos_tags: The corresponding POS tags for each token.\n- lemmas: The corresponding lemma for each token.\n- fine_pos_tags: Finer (more specific) POS tags for each token.\n- disambiguate_tokens_ids: The ID of the token in the sentence to disambiguate.\n- disambiguate_labels: The label in the form of sentenceID __ws_sentence-number_token__pos__number-of-time-the-token-appeared-across-all-the-sentences (e.g. 
d000.s404.t000 __ws_2_agir__verb__1).", "#### Data Splits\n\nThe train set includes 269821 examples, the test set includes 3121 examples.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe goal is to enable further reproducible experiments in the future and to share models and progress on the French language.", "## Additional Information", "### Licensing Information\n\nThe licenses are:\n- The licensing status of the data, especially the news source text, is unknown for CLS\n- *The dataset may be freely used for any purpose, although acknowledgement of Google LLC (\"Google\") as the data source would be appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.* for PAWS-X\n- CC BY-NC 4.0 for XNLI\n- The licensing status of the data, especially the news source text, is unknown for Verb Sense Disambiguation", "### Contributions\n\nThanks to @jplu for adding this dataset." ]
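The `disambiguate_labels` format described above packs several fields into one string. A minimal sketch of pulling one apart, assuming the `__`-separated layout shown in the card (the helper name is ours):

```python
# Hypothetical helper (not part of the dataset): split a WSD-V label of the
# form "<sentence-id> __ws_<n>_<lemma>__<pos>__<count>" into its parts.
def parse_wsdv_label(label: str) -> dict:
    sentence_id, tag = label.split(" ", 1)
    # tag looks like "__ws_2_agir__verb__1"; splitting on "__" yields
    # ["", "ws_2_agir", "verb", "1"]
    _, head, pos, count = tag.split("__")
    _, number, lemma = head.split("_", 2)  # head is "ws_<n>_<lemma>"
    return {
        "sentence_id": sentence_id,
        "sense_number": int(number),
        "lemma": lemma,
        "pos": pos,
        "occurrence": int(count),
    }

print(parse_wsdv_label("d000.s404.t000 __ws_2_agir__verb__1"))
```

For the example label from the card, this recovers the lemma `agir`, POS `verb`, sense number 2, and occurrence count 1.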
[ "TAGS\n#task_categories-text-classification #task_ids-intent-classification #task_ids-semantic-similarity-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-French #license-unknown #Word Sense Disambiguation for Verbs #arxiv-1912.05372 #region-us \n", "# Dataset Card for FLUE", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: homepage\n- Repository:github\n- Paper:paper\n- Leaderboard:leaderboard\n- Point of Contact:Hang Le", "### Dataset Summary\n\nFLUE is an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. The tasks and data are obtained from existing works, please refer to our Flaubert paper for a complete list of references.", "### Supported Tasks and Leaderboards\n\nThe supported tasks are: Text Classification, Paraphrasing, Natural Language Inference, Constituency Parsing, Dependency Parsing, Verb Sense Disambiguation and Noun Sense Disambiguation", "### Languages\n\nThe datasets are all in French.", "## Dataset Structure", "### Text Classification (CLS)\n\nThis is a binary classification task. It consists in classifying Amazon reviews for three product categories: books, DVD, and music. 
Each sample contains a review text and the associated rating from 1 to 5 stars. Reviews rated above 3 are labeled as positive, and those rated below 3 are labeled as negative.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of two fields:\n- text: the field that represents the text to classify.\n- label: the sentiment represented by the text, here positive or negative.", "#### Data Splits\n\nThe train and test sets are balanced, including around 1k positive and 1k negative reviews for a total of 2k reviews in each dataset. We take the French portion to create the binary text classification task in FLUE and report the accuracy on the test set.", "### Paraphrasing (PAWS-X)\n\nThe task consists in identifying whether the two sentences in a pair are semantically equivalent or not.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of three fields:\n- sentence1: The first sentence of an example\n- sentence2: The second sentence of an example\n- label: 0 if the two sentences are not paraphrasing each other, 1 otherwise.", "#### Data Splits\n\nThe train set includes 49.4k examples, the dev and test sets each comprise nearly 2k examples. We take the related datasets for French to perform the paraphrasing task and report the accuracy on the test set.", "### Natural Language Inference (XNLI)\n\nThe Natural Language Inference (NLI) task, also known as recognizing textual entailment (RTE), is to determine whether a premise entails, contradicts or neither entails nor contradicts a hypothesis. 
We take the French part of the XNLI corpus to form the development and test sets for the NLI task in FLUE.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of three fields:\n- premise: Premise sentence.\n- hypo: Hypothesis sentence.\n- label: contradiction if the two sentences are contradictory, entailment if the premise entails the hypothesis, neutral if they neither entail nor contradict each other.", "#### Data Splits\n\nThe train set includes 392.7k examples, the dev and test sets comprise 2.5k and 5k examples respectively. We take the related datasets for French to perform the NLI task and report the accuracy on the test set.", "### Word Sense Disambiguation for Verbs (WSD-V)\n\nThe FrenchSemEval (FSE) dataset aims to evaluate the Word Sense Disambiguation for Verbs task for the French language. Extracted from Wiktionary.", "#### Data Instances\n\nAn instance looks like:", "#### Data Fields\n\nThe dataset is composed of six fields:\n- sentence: The sentence to process, split into tokens.\n- pos_tags: The corresponding POS tag for each token.\n- lemmas: The corresponding lemma for each token.\n- fine_pos_tags: Fine-grained (more specific) POS tags for each token.\n- disambiguate_tokens_ids: The ID of the token in the sentence to disambiguate.\n- disambiguate_labels: The label in the form of sentenceID __ws_sentence-number_token__pos__number-of-times-the-token-appeared-across-all-the-sentences (e.g. 
d000.s404.t000 __ws_2_agir__verb__1).", "#### Data Splits\n\nThe train set includes 269821 examples, the test set includes 3121 examples.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe goal is to enable further reproducible experiments in the future and to share models and progress on the French language.", "## Additional Information", "### Licensing Information\n\nThe licenses are:\n- The licensing status of the data, especially the news source text, is unknown for CLS\n- *The dataset may be freely used for any purpose, although acknowledgement of Google LLC (\"Google\") as the data source would be appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.* for PAWS-X\n- CC BY-NC 4.0 for XNLI\n- The licensing status of the data, especially the news source text, is unknown for Verb Sense Disambiguation", "### Contributions\n\nThanks to @jplu for adding this dataset." ]
e06acf2a88084f04bce4d4a525165d68e0a36c38
# Dataset Card for Food-101 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Food-101 Dataset](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/) - **Repository:** - **Paper:** [Paper](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset consists of 101 food categories, with 101'000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels. ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given image of a dish into one of 101 classes. 
The leaderboard is available [here](https://paperswithcode.com/sota/fine-grained-image-classification-on-food-101). ### Languages English ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x276021C5EB8>, 'label': 23 } ``` ### Data Fields The data instances have the following fields: - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `label`: an `int` classification label. <details> <summary>Class Label Mappings</summary> ```json { "apple_pie": 0, "baby_back_ribs": 1, "baklava": 2, "beef_carpaccio": 3, "beef_tartare": 4, "beet_salad": 5, "beignets": 6, "bibimbap": 7, "bread_pudding": 8, "breakfast_burrito": 9, "bruschetta": 10, "caesar_salad": 11, "cannoli": 12, "caprese_salad": 13, "carrot_cake": 14, "ceviche": 15, "cheesecake": 16, "cheese_plate": 17, "chicken_curry": 18, "chicken_quesadilla": 19, "chicken_wings": 20, "chocolate_cake": 21, "chocolate_mousse": 22, "churros": 23, "clam_chowder": 24, "club_sandwich": 25, "crab_cakes": 26, "creme_brulee": 27, "croque_madame": 28, "cup_cakes": 29, "deviled_eggs": 30, "donuts": 31, "dumplings": 32, "edamame": 33, "eggs_benedict": 34, "escargots": 35, "falafel": 36, "filet_mignon": 37, "fish_and_chips": 38, "foie_gras": 39, "french_fries": 40, "french_onion_soup": 41, "french_toast": 42, "fried_calamari": 43, "fried_rice": 44, "frozen_yogurt": 45, "garlic_bread": 46, "gnocchi": 47, "greek_salad": 48, "grilled_cheese_sandwich": 49, "grilled_salmon": 50, "guacamole": 51, "gyoza": 52, "hamburger": 53, "hot_and_sour_soup": 54, "hot_dog": 55, 
"huevos_rancheros": 56, "hummus": 57, "ice_cream": 58, "lasagna": 59, "lobster_bisque": 60, "lobster_roll_sandwich": 61, "macaroni_and_cheese": 62, "macarons": 63, "miso_soup": 64, "mussels": 65, "nachos": 66, "omelette": 67, "onion_rings": 68, "oysters": 69, "pad_thai": 70, "paella": 71, "pancakes": 72, "panna_cotta": 73, "peking_duck": 74, "pho": 75, "pizza": 76, "pork_chop": 77, "poutine": 78, "prime_rib": 79, "pulled_pork_sandwich": 80, "ramen": 81, "ravioli": 82, "red_velvet_cake": 83, "risotto": 84, "samosa": 85, "sashimi": 86, "scallops": 87, "seaweed_salad": 88, "shrimp_and_grits": 89, "spaghetti_bolognese": 90, "spaghetti_carbonara": 91, "spring_rolls": 92, "steak": 93, "strawberry_shortcake": 94, "sushi": 95, "tacos": 96, "takoyaki": 97, "tiramisu": 98, "tuna_tartare": 99, "waffles": 100 } ``` </details> ### Data Splits | |train|validation| |----------|----:|---------:| |# of examples|75750|25250| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information LICENSE AGREEMENT ================= - The Food-101 data set consists of images from Foodspotting [1] which are not property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2]. 
[1] http://www.foodspotting.com/ [2] http://www.foodspotting.com/terms/ ### Citation Information ``` @inproceedings{bossard14, title = {Food-101 -- Mining Discriminative Components with Random Forests}, author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc}, booktitle = {European Conference on Computer Vision}, year = {2014} } ``` ### Contributions Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
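The sample instance shown earlier in the card carries only an integer `label`. A minimal sketch of decoding it back to a class name, using a small hypothetical subset of the 101-entry mapping listed in the card (the dict name is ours):

```python
# A few entries from the Food-101 class-label mapping (the card lists all 101);
# the integer `label` field of each instance indexes into this mapping.
id2label = {0: "apple_pie", 23: "churros", 76: "pizza", 100: "waffles"}

sample = {"label": 23}  # matches the training-set sample shown in the card
print(id2label[sample["label"]])  # churros
```

With the `datasets` library loaded version of this dataset, the same lookup should be available via the `label` feature's `int2str` method rather than a hand-built dict.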
food101
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-foodspotting", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-foodspotting"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "food-101", "pretty_name": "Food-101", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "apple_pie", "1": "baby_back_ribs", "2": "baklava", "3": "beef_carpaccio", "4": "beef_tartare", "5": "beet_salad", "6": "beignets", "7": "bibimbap", "8": "bread_pudding", "9": "breakfast_burrito", "10": "bruschetta", "11": "caesar_salad", "12": "cannoli", "13": "caprese_salad", "14": "carrot_cake", "15": "ceviche", "16": "cheesecake", "17": "cheese_plate", "18": "chicken_curry", "19": "chicken_quesadilla", "20": "chicken_wings", "21": "chocolate_cake", "22": "chocolate_mousse", "23": "churros", "24": "clam_chowder", "25": "club_sandwich", "26": "crab_cakes", "27": "creme_brulee", "28": "croque_madame", "29": "cup_cakes", "30": "deviled_eggs", "31": "donuts", "32": "dumplings", "33": "edamame", "34": "eggs_benedict", "35": "escargots", "36": "falafel", "37": "filet_mignon", "38": "fish_and_chips", "39": "foie_gras", "40": "french_fries", "41": "french_onion_soup", "42": "french_toast", "43": "fried_calamari", "44": "fried_rice", "45": "frozen_yogurt", "46": "garlic_bread", "47": "gnocchi", "48": "greek_salad", "49": "grilled_cheese_sandwich", "50": "grilled_salmon", "51": "guacamole", "52": "gyoza", "53": "hamburger", "54": "hot_and_sour_soup", "55": "hot_dog", "56": "huevos_rancheros", "57": "hummus", "58": "ice_cream", "59": "lasagna", "60": "lobster_bisque", "61": "lobster_roll_sandwich", "62": "macaroni_and_cheese", "63": "macarons", "64": "miso_soup", "65": "mussels", "66": "nachos", "67": "omelette", "68": "onion_rings", "69": "oysters", 
"70": "pad_thai", "71": "paella", "72": "pancakes", "73": "panna_cotta", "74": "peking_duck", "75": "pho", "76": "pizza", "77": "pork_chop", "78": "poutine", "79": "prime_rib", "80": "pulled_pork_sandwich", "81": "ramen", "82": "ravioli", "83": "red_velvet_cake", "84": "risotto", "85": "samosa", "86": "sashimi", "87": "scallops", "88": "seaweed_salad", "89": "shrimp_and_grits", "90": "spaghetti_bolognese", "91": "spaghetti_carbonara", "92": "spring_rolls", "93": "steak", "94": "strawberry_shortcake", "95": "sushi", "96": "tacos", "97": "takoyaki", "98": "tiramisu", "99": "tuna_tartare", "100": "waffles"}}}}], "splits": [{"name": "train", "num_bytes": 3842657187.0, "num_examples": 75750}, {"name": "validation", "num_bytes": 1275182340.5, "num_examples": 25250}], "download_size": 5059972308, "dataset_size": 5117839527.5}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-06T10:08:32+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-foodspotting #language-English #license-unknown #region-us
Dataset Card for Food-101 ========================= Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Food-101 Dataset * Repository: * Paper: Paper * Leaderboard: * Point of Contact: ### Dataset Summary This dataset consists of 101 food categories, with 101'000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels. ### Supported Tasks and Leaderboards * 'image-classification': The goal of this task is to classify a given image of a dish into one of 101 classes. The leaderboard is available here. ### Languages English Dataset Structure ----------------- ### Data Instances A sample from the training set is provided below: ### Data Fields The data instances have the following fields: * 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. * 'label': an 'int' classification label. 
Class Label Mappings ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information LICENSE AGREEMENT ================= * The Food-101 data set consists of images from Foodspotting [1] which are not property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2]. [1] URL [2] URL ### Contributions Thanks to @nateraw for adding this dataset.
[ "### Dataset Summary\n\n\nThis dataset consists of 101 food categories, with 101'000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image of a dish into one of 101 classes. The leaderboard is available here.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below:", "### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'label': an 'int' classification label.\n\n\n\nClass Label Mappings", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nLICENSE AGREEMENT\n=================\n\n\n* The Food-101 data set consists of images from Foodspotting [1] which are not\nproperty of the Federal Institute of Technology Zurich (ETHZ). Any use beyond\nscientific fair use must be negotiated with the respective picture owners\naccording to the Foodspotting terms of use [2].\n\n\n[1] URL\n[2] URL", "### Contributions\n\n\nThanks to @nateraw for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-foodspotting #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nThis dataset consists of 101 food categories, with 101'000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image of a dish into one of 101 classes. The leaderboard is available here.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below:", "### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'label': an 'int' classification label.\n\n\n\nClass Label Mappings", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nLICENSE AGREEMENT\n=================\n\n\n* The Food-101 data set consists of images from Foodspotting [1] which are not\nproperty of the Federal Institute of Technology Zurich (ETHZ). Any use beyond\nscientific fair use must be negotiated with the respective picture owners\naccording to the Foodspotting terms of use [2].\n\n\n[1] URL\n[2] URL", "### Contributions\n\n\nThanks to @nateraw for adding this dataset." ]
cf9a710a0dc5d61c9a6872b7343d27edd5492a33
# Dataset Card for FQuAD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://fquad.illuin.tech/](https://fquad.illuin.tech/) - **Paper:** [FQuAD: French Question Answering Dataset](https://arxiv.org/abs/2002.06071) - **Point of Contact:** [https://www.illuin.tech/contact/](https://www.illuin.tech/contact/) - **Size of downloaded dataset files:** 3.29 MB - **Size of the generated dataset:** 6.94 MB - **Total amount of disk used:** 10.23 MB ### Dataset Summary FQuAD: French Question Answering Dataset We introduce FQuAD, a native French Question Answering Dataset. FQuAD contains 25,000+ question and answer pairs. Finetuning CamemBERT on FQuAD yields an F1 score of 88% and an exact match of 77.9%. Developed to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles. Please note this dataset is licensed for non-commercial purposes and users must agree to the following terms and conditions: 1. 
Use FQuAD only for internal research purposes. 2. Not make any copy except a safety one. 3. Not redistribute it (or part of it) in any way, even for free. 4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence. 5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD. 6. Redistribute to Illuin Technology any improved or enriched version you could make of that corpus. Please download the data manually from: https://fquad.illuin.tech/ ### Supported Tasks and Leaderboards - `closed-domain-qa`, `text-retrieval`: This dataset is intended to be used for `closed-domain-qa`, but can also be used for information retrieval tasks. ### Languages This dataset is exclusively in French, with context data from Wikipedia and questions from French university students (`fr`). ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 3.29 MB - **Size of the generated dataset:** 6.94 MB - **Total amount of disk used:** 10.23 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answers": { "answers_starts": [161, 46, 204], "texts": ["La Vierge aux rochers", "documents contemporains", "objets de spéculations"] }, "context": "\"Les deux tableaux sont certes décrits par des documents contemporains à leur création mais ceux-ci ne le font qu'indirectement ...", "questions": ["Que concerne principalement les documents ?", "Par quoi sont décrit les deux tableaux ?", "Quels types d'objets sont les deux tableaux aux yeux des chercheurs ?"] } ``` ### Data Fields The data fields are the same among all splits. #### default - `context`: a `string` feature. - `questions`: a `list` of `string` features. - `answers`: a dictionary feature containing: - `texts`: a `string` feature. - `answers_starts`: an `int32` feature. ### Data Splits The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. 
The _test_ split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split. Dataset Split | Number of Articles in Split | Number of paragraphs in split | Number of questions in split --------------|------------------------------|--------------------------|------------------------- Train | 117 | 4921 | 20731 Validation | 768 | 51.0% | 3188 Test | 10 | 532 | 2189 ## Dataset Creation ### Curation Rationale The FQuAD dataset was created by Illuin Technology. It was developed to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles. ### Source Data The texts used for the contexts are from the curated list of French High-Quality Wikipedia [articles](https://fr.wikipedia.org/wiki/Cat%C3%A9gorie:Article_de_qualit%C3%A9). ### Annotations Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering. Wikipedia articles were scraped and Illuin used an internally developed tool to help annotators ask questions and indicate the answer spans. Annotators were given paragraph-sized contexts and asked to generate 4 to 5 non-trivial questions about information in the context. ### Personal and Sensitive Information No personal or sensitive information is included in this dataset. This has been manually verified by the dataset curators. ## Considerations for Using the Data Users should consider that this dataset is sampled from Wikipedia data which might not be representative of all QA use cases. ### Social Impact of Dataset The social biases of this dataset have not yet been investigated. ### Discussion of Biases The social biases of this dataset have not yet been investigated, though articles have been selected for their quality and objectivity. ### Other Known Limitations The limitations of the FQuAD dataset have not yet been investigated. 
## Additional Information ### Dataset Curators Illuin Technology: [https://fquad.illuin.tech/](https://fquad.illuin.tech/) ### Licensing Information The FQuAD dataset is licensed under the [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/fr/) license. It allows personal and academic research uses of the dataset, but not commercial uses. So concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact [the authors](https://www.illuin.tech/contact/) to discuss possible partnerships. ### Citation Information ``` @ARTICLE{2020arXiv200206071, author = {d'Hoffschmidt, Martin and Vidal, Maxime and Belblidia, Wacim and Brendlé, Tom}, title = "{FQuAD: French Question Answering Dataset}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language}, year = "2020", month = "Feb", eid = {arXiv:2002.06071}, pages = {arXiv:2002.06071}, archivePrefix = {arXiv}, eprint = {2002.06071}, primaryClass = {cs.CL} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. Thanks to [@ManuelFay](https://github.com/manuelfay) for providing information on the dataset creation process.
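The `answers_starts` offsets described under Data Fields index directly into `context`. A minimal sketch of recovering answer spans from them, using a made-up mini-example in the same field layout (the example text and helper name are ours, not from the dataset):

```python
# Hypothetical mini-example in the FQuAD field layout (context, questions,
# answers with texts / answers_starts); real contexts come from Wikipedia.
example = {
    "context": "Paris est la capitale de la France.",
    "questions": ["Quelle est la capitale de la France ?"],
    "answers": {"texts": ["Paris"], "answers_starts": [0]},
}

def extract_spans(ex: dict) -> list:
    """Recover each answer from the context via its start offset."""
    return [
        ex["context"][start:start + len(text)]
        for text, start in zip(ex["answers"]["texts"],
                               ex["answers"]["answers_starts"])
    ]

# Each extracted span should match the annotated answer text exactly.
assert extract_spans(example) == example["answers"]["texts"]
print(extract_spans(example))  # ['Paris']
```

The same slice-and-compare check is a quick sanity test when preprocessing the real splits, since extractive-QA training pipelines rely on the offsets aligning with the answer strings.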
fquad
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fr", "license:cc-by-nc-sa-3.0", "arxiv:2002.06071", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval"], "task_ids": ["extractive-qa", "closed-domain-qa"], "paperswithcode_id": "fquad", "pretty_name": "FQuAD: French Question Answering Dataset", "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "questions", "sequence": "string"}, {"name": "answers", "sequence": [{"name": "texts", "dtype": "string"}, {"name": "answers_starts", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 5898752, "num_examples": 4921}, {"name": "validation", "num_bytes": 1031456, "num_examples": 768}], "download_size": 0, "dataset_size": 6930208}}
2024-01-18T11:03:47+00:00
[ "2002.06071" ]
[ "fr" ]
TAGS #task_categories-question-answering #task_categories-text-retrieval #task_ids-extractive-qa #task_ids-closed-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-French #license-cc-by-nc-sa-3.0 #arxiv-2002.06071 #region-us
Dataset Card for FQuAD ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Paper: FQuAD: French Question Answering Dataset * Point of Contact: URL * Size of downloaded dataset files: 3.29 MB * Size of the generated dataset: 6.94 MB * Total amount of disk used: 10.23 MB ### Dataset Summary FQuAD: French Question Answering Dataset We introduce FQuAD, a native French Question Answering Dataset. FQuAD contains 25,000+ question and answer pairs. Finetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%. Developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles. Please, note this dataset is licensed for non-commercial purposes and users must agree to the following terms and conditions: 1. Use FQuAD only for internal research purposes. 2. Not make any copy except a safety one. 3. Not redistribute it (or part of it) in any way, even for free. 4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence. 5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD. 6. Redistribute to Illuin Technology any improved or enriched version you could make of that corpus. 
Request manually download of the data from: URL ### Supported Tasks and Leaderboards * 'closed-domain-qa', 'text-retrieval': This dataset is intended to be used for 'closed-domain-qa', but can also be used for information retrieval tasks. ### Languages This dataset is exclusively in French, with context data from Wikipedia and questions from French university students ('fr'). Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 3.29 MB * Size of the generated dataset: 6.94 MB * Total amount of disk used: 10.23 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'context': a 'string' feature. * 'questions': a 'list' of 'string' features. * 'answers': a dictionary feature containing: + 'texts': a 'string' feature. + 'answers\_starts': a 'int32' feature. ### Data Splits The FQuAD dataset has 3 splits: *train*, *validation*, and *test*. The *test* split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split. Dataset Creation ---------------- ### Curation Rationale The FQuAD dataset was created by Illuin technology. It was developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles. ### Source Data The text used for the contexts are from the curated list of French High-Quality Wikipedia articles. ### Annotations Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering. Wikipedia articles were scraped and Illuin used an internally-developped tool to help annotators ask questions and indicate the answer spans. Annotators were given paragraph sized contexts and asked to generate 4/5 non-trivial questions about information in the context. ### Personal and Sensitive Information No personal or sensitive information is included in this dataset. 
This has been manually verified by the dataset curators. Considerations for Using the Data --------------------------------- Users should consider this dataset is sampled from Wikipedia data which might not be representative of all QA use cases. ### Social Impact of Dataset The social biases of this dataset have not yet been investigated. ### Discussion of Biases The social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity. ### Other Known Limitations The limitations of the FQuAD dataset have not yet been investigated. Additional Information ---------------------- ### Dataset Curators Illuin Technology: URL ### Licensing Information The FQuAD dataset is licensed under the CC BY-NC-SA 3.0 license. It allows personal and academic research uses of the dataset, but not commercial uses. So concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact the authors to discuss possible partnerships. ### Contributions Thanks to @thomwolf, @mariamabarham, @patrickvonplaten, @lewtun, @albertvillanova for adding this dataset. Thanks to @ManuelFay for providing information on the dataset creation process.
[ "### Dataset Summary\n\n\nFQuAD: French Question Answering Dataset\nWe introduce FQuAD, a native French Question Answering Dataset.\n\n\nFQuAD contains 25,000+ question and answer pairs.\nFinetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%.\nDevelopped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles.\n\n\nPlease, note this dataset is licensed for non-commercial purposes and users must agree to the following terms and conditions:\n\n\n1. Use FQuAD only for internal research purposes.\n2. Not make any copy except a safety one.\n3. Not redistribute it (or part of it) in any way, even for free.\n4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence.\n5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD.\n6. Redistribute to Illuin Technology any improved or enriched version you could make of that corpus.\n\n\nRequest manually download of the data from: URL", "### Supported Tasks and Leaderboards\n\n\n* 'closed-domain-qa', 'text-retrieval': This dataset is intended to be used for 'closed-domain-qa', but can also be used for information retrieval tasks.", "### Languages\n\n\nThis dataset is exclusively in French, with context data from Wikipedia and questions from French university students ('fr').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 3.29 MB\n* Size of the generated dataset: 6.94 MB\n* Total amount of disk used: 10.23 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'context': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a dictionary feature containing:\n\t+ 'texts': a 'string' feature.\n\t+ 'answers\\_starts': a 'int32' feature.", "### Data Splits\n\n\nThe 
FQuAD dataset has 3 splits: *train*, *validation*, and *test*. The *test* split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe FQuAD dataset was created by Illuin technology. It was developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles.", "### Source Data\n\n\nThe text used for the contexts are from the curated list of French High-Quality Wikipedia articles.", "### Annotations\n\n\nAnnotations (spans and questions) are written by students of the CentraleSupélec school of engineering.\nWikipedia articles were scraped and Illuin used an internally-developped tool to help annotators ask questions and indicate the answer spans.\nAnnotators were given paragraph sized contexts and asked to generate 4/5 non-trivial questions about information in the context.", "### Personal and Sensitive Information\n\n\nNo personal or sensitive information is included in this dataset. 
This has been manually verified by the dataset curators.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nUsers should consider this dataset is sampled from Wikipedia data which might not be representative of all QA use cases.", "### Social Impact of Dataset\n\n\nThe social biases of this dataset have not yet been investigated.", "### Discussion of Biases\n\n\nThe social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity.", "### Other Known Limitations\n\n\nThe limitations of the FQuAD dataset have not yet been investigated.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nIlluin Technology: URL", "### Licensing Information\n\n\nThe FQuAD dataset is licensed under the CC BY-NC-SA 3.0 license.\n\n\nIt allows personal and academic research uses of the dataset, but not commercial uses. So concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact the authors to discuss possible partnerships.", "### Contributions\n\n\nThanks to @thomwolf, @mariamabarham, @patrickvonplaten, @lewtun, @albertvillanova for adding this dataset.\nThanks to @ManuelFay for providing information on the dataset creation process." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-retrieval #task_ids-extractive-qa #task_ids-closed-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-French #license-cc-by-nc-sa-3.0 #arxiv-2002.06071 #region-us \n", "### Dataset Summary\n\n\nFQuAD: French Question Answering Dataset\nWe introduce FQuAD, a native French Question Answering Dataset.\n\n\nFQuAD contains 25,000+ question and answer pairs.\nFinetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%.\nDevelopped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles.\n\n\nPlease, note this dataset is licensed for non-commercial purposes and users must agree to the following terms and conditions:\n\n\n1. Use FQuAD only for internal research purposes.\n2. Not make any copy except a safety one.\n3. Not redistribute it (or part of it) in any way, even for free.\n4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence.\n5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD.\n6. 
Redistribute to Illuin Technology any improved or enriched version you could make of that corpus.\n\n\nRequest manually download of the data from: URL", "### Supported Tasks and Leaderboards\n\n\n* 'closed-domain-qa', 'text-retrieval': This dataset is intended to be used for 'closed-domain-qa', but can also be used for information retrieval tasks.", "### Languages\n\n\nThis dataset is exclusively in French, with context data from Wikipedia and questions from French university students ('fr').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 3.29 MB\n* Size of the generated dataset: 6.94 MB\n* Total amount of disk used: 10.23 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'context': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a dictionary feature containing:\n\t+ 'texts': a 'string' feature.\n\t+ 'answers\\_starts': a 'int32' feature.", "### Data Splits\n\n\nThe FQuAD dataset has 3 splits: *train*, *validation*, and *test*. The *test* split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe FQuAD dataset was created by Illuin technology. It was developped to provide a SQuAD equivalent in the French language. 
Questions are original and based on high quality Wikipedia articles.", "### Source Data\n\n\nThe text used for the contexts are from the curated list of French High-Quality Wikipedia articles.", "### Annotations\n\n\nAnnotations (spans and questions) are written by students of the CentraleSupélec school of engineering.\nWikipedia articles were scraped and Illuin used an internally-developped tool to help annotators ask questions and indicate the answer spans.\nAnnotators were given paragraph sized contexts and asked to generate 4/5 non-trivial questions about information in the context.", "### Personal and Sensitive Information\n\n\nNo personal or sensitive information is included in this dataset. This has been manually verified by the dataset curators.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nUsers should consider this dataset is sampled from Wikipedia data which might not be representative of all QA use cases.", "### Social Impact of Dataset\n\n\nThe social biases of this dataset have not yet been investigated.", "### Discussion of Biases\n\n\nThe social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity.", "### Other Known Limitations\n\n\nThe limitations of the FQuAD dataset have not yet been investigated.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nIlluin Technology: URL", "### Licensing Information\n\n\nThe FQuAD dataset is licensed under the CC BY-NC-SA 3.0 license.\n\n\nIt allows personal and academic research uses of the dataset, but not commercial uses. So concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. 
For this type of commercial use, we invite FQuAD users to contact the authors to discuss possible partnerships.", "### Contributions\n\n\nThanks to @thomwolf, @mariamabarham, @patrickvonplaten, @lewtun, @albertvillanova for adding this dataset.\nThanks to @ManuelFay for providing information on the dataset creation process." ]
5c61c7fe4e7ed120ab8db421226c84f39c5c9a68
# Dataset Card for FreebaseQA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [FreebaseQA repository](https://github.com/kelvin-jiang/FreebaseQA)
- **Paper:** [FreebaseQA ACL paper](https://www.aclweb.org/anthology/N19-1028.pdf)
- **Leaderboard:**
- **Point of Contact:** [Kelvin Jiang](https://github.com/kelvin-jiang)

### Dataset Summary

FreebaseQA is a dataset for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase.
### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English

## Dataset Structure

### Data Instances

Here is an example from the dataset:

```
{'Parses': {'Answers': [{'AnswersMid': ['m.01npcx'], 'AnswersName': [['goldeneye']]},
                        {'AnswersMid': ['m.01npcx'], 'AnswersName': [['goldeneye']]}],
            'InferentialChain': ['film.film_character.portrayed_in_films..film.performance.film',
                                 'film.actor.film..film.performance.film'],
            'Parse-Id': ['FreebaseQA-train-0.P0', 'FreebaseQA-train-0.P1'],
            'PotentialTopicEntityMention': ['007', 'pierce brosnan'],
            'TopicEntityMid': ['m.0clpml', 'm.018p4y'],
            'TopicEntityName': ['james bond', 'pierce brosnan']},
 'ProcessedQuestion': "what was pierce brosnan's first outing as 007",
 'Question-ID': 'FreebaseQA-train-0',
 'RawQuestion': "What was Pierce Brosnan's first outing as 007?"}
```

### Data Fields

- `Question-ID`: a `string` feature representing the ID of each question.
- `RawQuestion`: a `string` feature representing the original question collected from data sources.
- `ProcessedQuestion`: a `string` feature representing the question processed with some operations such as removal of the trailing question mark and decapitalization.
- `Parses`: a dictionary feature representing the semantic parse(s) for the question, containing:
  - `Parse-Id`: a `string` feature representing the ID of each semantic parse.
  - `PotentialTopicEntityMention`: a `string` feature representing the potential topic entity mention in the question.
  - `TopicEntityName`: a `string` feature representing the name or alias of the topic entity in the question from Freebase.
  - `TopicEntityMid`: a `string` feature representing the Freebase MID of the topic entity in the question.
  - `InferentialChain`: a `string` feature representing the path from the topic entity node to the answer node in Freebase, labeled as a predicate.
  - `Answers`: a dictionary feature representing the answer found from this parse, containing:
    - `AnswersMid`: a `string` feature representing the Freebase MID of the answer.
    - `AnswersName`: a `list` of `string` features representing the answer string from the original question-answer pair.

### Data Splits

This data set contains 28,348 unique questions that are divided into three subsets: train (20,358), dev (3,994) and eval (3,996), formatted as JSON files: `FreebaseQA-[train|dev|eval].json`

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase. For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove false positives in these matched triples.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Kelvin Jiang - Currently at University of Waterloo. Work was done at York University.
### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{jiang-etal-2019-freebaseqa,
    title = "{F}reebase{QA}: A New Factoid {QA} Data Set Matching Trivia-Style Question-Answer Pairs with {F}reebase",
    author = "Jiang, Kelvin and Wu, Dekun and Jiang, Hui",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/N19-1028",
    doi = "10.18653/v1/N19-1028",
    pages = "318--323",
    abstract = "In this paper, we present a new data set, named FreebaseQA, for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase. The data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase. For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove any false positive in these matched triples. Using this method, we are able to efficiently generate over 54K matches from about 28K unique questions with minimal cost. Our analysis shows that this data set is suitable for model training in factoid QA tasks beyond simpler questions since FreebaseQA provides more linguistically sophisticated questions than other existing data sets.",
}
```

### Contributions

Thanks to [@gchhablani](https://github.com/gchhablani) and [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
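To make the parallel-list `Parses` layout documented in the Data Fields section concrete, here is a minimal sketch that flattens the train instance shown in the Data Instances section into one row per semantic parse. The `flatten_parses` helper is ours, for illustration only, and is not part of any FreebaseQA tooling.

```python
# Flatten a FreebaseQA example's parallel `Parses` lists into one
# (topic entity, inferential chain, answer names) row per semantic parse.
# The instance below is the train example shown in this card.
example = {
    "RawQuestion": "What was Pierce Brosnan's first outing as 007?",
    "Parses": {
        "Parse-Id": ["FreebaseQA-train-0.P0", "FreebaseQA-train-0.P1"],
        "TopicEntityName": ["james bond", "pierce brosnan"],
        "InferentialChain": [
            "film.film_character.portrayed_in_films..film.performance.film",
            "film.actor.film..film.performance.film",
        ],
        "Answers": [
            {"AnswersMid": ["m.01npcx"], "AnswersName": [["goldeneye"]]},
            {"AnswersMid": ["m.01npcx"], "AnswersName": [["goldeneye"]]},
        ],
    },
}

def flatten_parses(ex):
    """Return one (topic_entity, chain, answer_names) tuple per parse."""
    parses = ex["Parses"]
    rows = []
    for entity, chain, answers in zip(
        parses["TopicEntityName"], parses["InferentialChain"], parses["Answers"]
    ):
        # Each parse's answer names are nested one level deep; flatten them.
        names = [name for group in answers["AnswersName"] for name in group]
        rows.append((entity, chain, names))
    return rows

for row in flatten_parses(example):
    print(row)
```

Because the two parses here agree on the answer (`goldeneye`), a downstream QA evaluation could accept either semantic parse as support for the same prediction.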
freebase_qa
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|trivia_qa", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|trivia_qa"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "freebaseqa", "pretty_name": "FreebaseQA", "dataset_info": {"features": [{"name": "Question-ID", "dtype": "string"}, {"name": "RawQuestion", "dtype": "string"}, {"name": "ProcessedQuestion", "dtype": "string"}, {"name": "Parses", "sequence": [{"name": "Parse-Id", "dtype": "string"}, {"name": "PotentialTopicEntityMention", "dtype": "string"}, {"name": "TopicEntityName", "dtype": "string"}, {"name": "TopicEntityMid", "dtype": "string"}, {"name": "InferentialChain", "dtype": "string"}, {"name": "Answers", "sequence": [{"name": "AnswersMid", "dtype": "string"}, {"name": "AnswersName", "sequence": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 10235375, "num_examples": 20358}, {"name": "test", "num_bytes": 1987874, "num_examples": 3996}, {"name": "validation", "num_bytes": 1974114, "num_examples": 3994}], "download_size": 33204999, "dataset_size": 14197363}}
2024-01-18T11:03:51+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|trivia_qa #language-English #license-unknown #region-us
# Dataset Card for FreebaseQA ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: FreebaseQA repository - Paper: FreebaseQA ACL paper - Leaderboard: - Point of Contact: Kelvin Jiang ### Dataset Summary FreebaseQA is a dataset for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances Here is an example from the dataset: ### Data Fields - 'Question-ID': a 'string' feature representing ID of each question. - 'RawQuestion': a 'string' feature representing the original question collected from data sources. - 'ProcessedQuestion': a 'string' feature representing the question processed with some operations such as removal of trailing question mark and decapitalization. - 'Parses': a dictionary feature representing the semantic parse(s) for the question containing: - 'Parse-Id': a 'string' feature representing the ID of each semantic parse. - 'PotentialTopicEntityMention': a 'string' feature representing the potential topic entity mention in the question. - 'TopicEntityName': a 'string' feature representing name or alias of the topic entity in the question from Freebase. - 'TopicEntityMid': a 'string' feature representing the Freebase MID of the topic entity in the question. - 'InferentialChain': a 'string' feature representing path from the topic entity node to the answer node in Freebase, labeled as a predicate. 
- 'Answers': a dictionary feature representing the answer found from this parse containing: - 'AnswersMid': a 'string' feature representing the Freebase MID of the answer. - 'AnswersName': a 'list' of 'string' features representing the answer string from the original question-answer pair. ### Data Splits This data set contains 28,348 unique questions that are divided into three subsets: train (20,358), dev (3,994) and eval (3,996), formatted as JSON files: FreebaseQA-[train|dev|eval].json ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The data set is generated by matching trivia-type question-answer pairs with subject-predicateobject triples in Freebase. For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove false positives in these matched triples. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Kelvin Jiang - Currently at University of Waterloo. Work was done at York University. ### Licensing Information ### Contributions Thanks to @gchhablani and @anaerobeth for adding this dataset.
[ "# Dataset Card for FreebaseQA", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n \n- Homepage:\n- Repository: FreebaseQA repository\n- Paper: FreebaseQA ACL paper\n- Leaderboard:\n- Point of Contact: Kelvin Jiang", "### Dataset Summary\n \nFreebaseQA is a dataset for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase.", "### Supported Tasks and Leaderboards", "### Languages\n \nEnglish", "## Dataset Structure", "### Data Instances\n\nHere is an example from the dataset:", "### Data Fields\n- 'Question-ID': a 'string' feature representing ID of each question.\n- 'RawQuestion': a 'string' feature representing the original question collected from data sources.\n- 'ProcessedQuestion': a 'string' feature representing the question processed with some operations such as removal of trailing question mark and decapitalization.\n- 'Parses': a dictionary feature representing the semantic parse(s) for the question containing:\n - 'Parse-Id': a 'string' feature representing the ID of each semantic parse.\n - 'PotentialTopicEntityMention': a 'string' feature representing the potential topic entity mention in the question.\n - 'TopicEntityName': a 'string' feature representing name or alias of the topic entity in the question from Freebase.\n - 'TopicEntityMid': a 'string' feature representing the Freebase MID of the topic entity in the question.\n - 'InferentialChain': a 'string' feature representing path from the topic 
entity node to the answer node in Freebase, labeled as a predicate.\n - 'Answers': a dictionary feature representing the answer found from this parse containing:\n - 'AnswersMid': a 'string' feature representing the Freebase MID of the answer.\n - 'AnswersName': a 'list' of 'string' features representing the answer string from the original question-answer pair.", "### Data Splits\nThis data set contains 28,348 unique questions that are divided into three subsets: train (20,358), dev (3,994) and eval (3,996), formatted as JSON files: FreebaseQA-[train|dev|eval].json", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n \nThe data set is generated by matching trivia-type question-answer pairs with subject-predicateobject triples in Freebase. For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove false positives in these matched triples.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n \nKelvin Jiang - Currently at University of Waterloo. Work was done at\nYork University.", "### Licensing Information", "### Contributions\n\nThanks to @gchhablani and @anaerobeth for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|trivia_qa #language-English #license-unknown #region-us \n", "# Dataset Card for FreebaseQA", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n \n- Homepage:\n- Repository: FreebaseQA repository\n- Paper: FreebaseQA ACL paper\n- Leaderboard:\n- Point of Contact: Kelvin Jiang", "### Dataset Summary\n \nFreebaseQA is a dataset for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase.", "### Supported Tasks and Leaderboards", "### Languages\n \nEnglish", "## Dataset Structure", "### Data Instances\n\nHere is an example from the dataset:", "### Data Fields\n- 'Question-ID': a 'string' feature representing ID of each question.\n- 'RawQuestion': a 'string' feature representing the original question collected from data sources.\n- 'ProcessedQuestion': a 'string' feature representing the question processed with some operations such as removal of trailing question mark and decapitalization.\n- 'Parses': a dictionary feature representing the semantic parse(s) for the question containing:\n - 'Parse-Id': a 'string' feature representing the ID of each semantic parse.\n - 'PotentialTopicEntityMention': a 'string' feature representing the potential topic entity mention in the question.\n - 'TopicEntityName': a 'string' 
feature representing name or alias of the topic entity in the question from Freebase.\n - 'TopicEntityMid': a 'string' feature representing the Freebase MID of the topic entity in the question.\n - 'InferentialChain': a 'string' feature representing path from the topic entity node to the answer node in Freebase, labeled as a predicate.\n - 'Answers': a dictionary feature representing the answer found from this parse containing:\n - 'AnswersMid': a 'string' feature representing the Freebase MID of the answer.\n - 'AnswersName': a 'list' of 'string' features representing the answer string from the original question-answer pair.", "### Data Splits\nThis data set contains 28,348 unique questions that are divided into three subsets: train (20,358), dev (3,994) and eval (3,996), formatted as JSON files: FreebaseQA-[train|dev|eval].json", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n \nThe data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase. For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove false positives in these matched triples.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n \nKelvin Jiang - Currently at University of Waterloo. Work was done at\nYork University.", "### Licensing Information", "### Contributions\n\nThanks to @gchhablani and @anaerobeth for adding this dataset." ]
f27ccb0bf685fccf27ab07ae51df7e774e2ca854
# Dataset Card for "gap" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/gap-coreference](https://github.com/google-research-datasets/gap-coreference) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns](https://arxiv.org/abs/1810.05201) - **Point of Contact:** [[email protected]](mailto:[email protected]) - **Size of downloaded dataset files:** 2.40 MB - **Size of the generated dataset:** 2.43 MB - **Total amount of disk used:** 4.83 MB ### Dataset Summary GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of (ambiguous pronoun, antecedent name), sampled from Wikipedia and released by Google AI Language for the evaluation of coreference resolution in practical applications. 
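Each GAP record pairs an ambiguous pronoun with two candidate antecedents, located by character offsets into `Text`. A minimal sketch of how a consumer might validate those offsets; the record below is an invented illustration with the field names from this card, not an actual corpus entry:

```python
# Hypothetical GAP-style record (field names follow the card; values are
# invented for illustration, not taken from the corpus).
example = {
    "Text": "Alice told Mary that she had won the award.",
    "Pronoun": "she",
    "Pronoun-offset": 21,
    "A": "Alice",
    "A-offset": 0,
    "A-coref": True,
    "B": "Mary",
    "B-offset": 11,
    "B-coref": False,
}

def span_matches(record, name_key, offset_key):
    """Check that the character offset really points at the named mention."""
    start = record[offset_key]
    mention = record[name_key]
    return record["Text"][start:start + len(mention)] == mention

assert span_matches(example, "Pronoun", "Pronoun-offset")
assert span_matches(example, "A", "A-offset")
assert span_matches(example, "B", "B-offset")
```

The real data should yield dictionaries with exactly these fields when loaded through the `datasets` library (e.g. `load_dataset("gap")`).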
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 2.40 MB - **Size of the generated dataset:** 2.43 MB - **Total amount of disk used:** 4.83 MB An example of 'validation' looks as follows. ``` { "A": "aliquam ultrices sagittis", "A-coref": false, "A-offset": 208, "B": "elementum curabitur vitae", "B-coref": false, "B-offset": 435, "ID": "validation-1", "Pronoun": "condimentum mattis pellentesque", "Pronoun-offset": 948, "Text": "Lorem ipsum dolor", "URL": "sem fringilla ut" } ``` ### Data Fields The data fields are the same among all splits. #### default - `ID`: a `string` feature. - `Text`: a `string` feature. - `Pronoun`: a `string` feature. - `Pronoun-offset`: a `int32` feature. - `A`: a `string` feature. - `A-offset`: a `int32` feature. - `A-coref`: a `bool` feature. - `B`: a `string` feature. - `B-offset`: a `int32` feature. - `B-coref`: a `bool` feature. - `URL`: a `string` feature. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default| 2000| 454|2000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{webster-etal-2018-mind, title = "Mind the {GAP}: A Balanced Corpus of Gendered Ambiguous Pronouns", author = "Webster, Kellie and Recasens, Marta and Axelrod, Vera and Baldridge, Jason", journal = "Transactions of the Association for Computational Linguistics", volume = "6", year = "2018", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/Q18-1042", doi = "10.1162/tacl_a_00240", pages = "605--617", } ``` ### 
Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
gap
[ "task_categories:token-classification", "task_ids:coreference-resolution", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1810.05201", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["coreference-resolution"], "paperswithcode_id": "gap", "pretty_name": "GAP Benchmark Suite", "dataset_info": {"features": [{"name": "ID", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "Pronoun", "dtype": "string"}, {"name": "Pronoun-offset", "dtype": "int32"}, {"name": "A", "dtype": "string"}, {"name": "A-offset", "dtype": "int32"}, {"name": "A-coref", "dtype": "bool"}, {"name": "B", "dtype": "string"}, {"name": "B-offset", "dtype": "int32"}, {"name": "B-coref", "dtype": "bool"}, {"name": "URL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1095623, "num_examples": 2000}, {"name": "validation", "num_bytes": 248329, "num_examples": 454}, {"name": "test", "num_bytes": 1090462, "num_examples": 2000}], "download_size": 2401971, "dataset_size": 2434414}}
2024-01-18T11:04:03+00:00
[ "1810.05201" ]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-coreference-resolution #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-1810.05201 #region-us
Dataset Card for "gap" ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns * Point of Contact: gap-coreference@URL * Size of downloaded dataset files: 2.40 MB * Size of the generated dataset: 2.43 MB * Total amount of disk used: 4.83 MB ### Dataset Summary GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of (ambiguous pronoun, antecedent name), sampled from Wikipedia and released by Google AI Language for the evaluation of coreference resolution in practical applications. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 2.40 MB * Size of the generated dataset: 2.43 MB * Total amount of disk used: 4.83 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'ID': a 'string' feature. * 'Text': a 'string' feature. * 'Pronoun': a 'string' feature. * 'Pronoun-offset': a 'int32' feature. * 'A': a 'string' feature. * 'A-offset': a 'int32' feature. * 'A-coref': a 'bool' feature. * 'B': a 'string' feature. * 'B-offset': a 'int32' feature. * 'B-coref': a 'bool' feature. * 'URL': a 'string' feature. 
### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @patrickvonplaten, @otakumesi, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nGAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of\n(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by\nGoogle AI Language for the evaluation of coreference resolution in practical\napplications.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 2.40 MB\n* Size of the generated dataset: 2.43 MB\n* Total amount of disk used: 4.83 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'ID': a 'string' feature.\n* 'Text': a 'string' feature.\n* 'Pronoun': a 'string' feature.\n* 'Pronoun-offset': a 'int32' feature.\n* 'A': a 'string' feature.\n* 'A-offset': a 'int32' feature.\n* 'A-coref': a 'bool' feature.\n* 'B': a 'string' feature.\n* 'B-offset': a 'int32' feature.\n* 'B-coref': a 'bool' feature.\n* 'URL': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @otakumesi, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-coreference-resolution #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-1810.05201 #region-us \n", "### Dataset Summary\n\n\nGAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of\n(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by\nGoogle AI Language for the evaluation of coreference resolution in practical\napplications.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 2.40 MB\n* Size of the generated dataset: 2.43 MB\n* Total amount of disk used: 4.83 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'ID': a 'string' feature.\n* 'Text': a 'string' feature.\n* 'Pronoun': a 'string' feature.\n* 'Pronoun-offset': a 'int32' feature.\n* 'A': a 'string' feature.\n* 'A-offset': a 'int32' feature.\n* 'A-coref': a 'bool' feature.\n* 'B': a 'string' feature.\n* 'B-offset': a 'int32' feature.\n* 'B-coref': a 'bool' feature.\n* 'URL': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @otakumesi, 
@lewtun for adding this dataset." ]
49d680cd93350c6e4d5b397e30b1f696ffcb7720
# Dataset Card for GEM ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://gem-benchmark.github.io/](https://gem-benchmark.github.io/) - **Repository:** - **Paper:** [The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics](https://arxiv.org/abs/2102.01672) - **Point of Contact:** [Sebastian Gehrmann]([email protected]) - **Size of downloaded dataset files:** 2.19 GB - **Size of the generated dataset:** 3.92 GB - **Total amount of disk used:** 6.10 GB ### Dataset Summary GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics. GEM aims to: - measure NLG progress across 13 datasets spanning many NLG tasks and languages. - provide an in-depth analysis of data and models presented via data statements and challenge sets. - develop standards for evaluation of generated text using both automated and human metrics. 
It is our goal to regularly update GEM and to encourage a shift toward more inclusive practices in dataset development by extending existing data or developing datasets for additional languages. You can find more complete information in the dataset cards for each of the subsets: - [CommonGen](https://gem-benchmark.com/data_cards/common_gen) - [Czech Restaurant](https://gem-benchmark.com/data_cards/cs_restaurants) - [DART](https://gem-benchmark.com/data_cards/dart) - [E2E](https://gem-benchmark.com/data_cards/e2e_nlg) - [MLSum](https://gem-benchmark.com/data_cards/mlsum) - [Schema-Guided Dialog](https://gem-benchmark.com/data_cards/schema_guided_dialog) - [WebNLG](https://gem-benchmark.com/data_cards/web_nlg) - [Wiki-Auto/ASSET/TURK](https://gem-benchmark.com/data_cards/wiki_auto_asset_turk) - [WikiLingua](https://gem-benchmark.com/data_cards/wiki_lingua) - [XSum](https://gem-benchmark.com/data_cards/xsum) The subsets are organized by task: ``` { "summarization": { "mlsum": ["mlsum_de", "mlsum_es"], "wiki_lingua": ["wiki_lingua_es_en", "wiki_lingua_ru_en", "wiki_lingua_tr_en", "wiki_lingua_vi_en"], "xsum": ["xsum"], }, "struct2text": { "common_gen": ["common_gen"], "cs_restaurants": ["cs_restaurants"], "dart": ["dart"], "e2e": ["e2e_nlg"], "totto": ["totto"], "web_nlg": ["web_nlg_en", "web_nlg_ru"], }, "simplification": { "wiki_auto_asset_turk": ["wiki_auto_asset_turk"], }, "dialog": { "schema_guided_dialog": ["schema_guided_dialog"], }, } ``` Each example has one `target` per example in its training set, and a set of `references` (with one or more items) in its validation and test set. 
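The task-to-subset mapping above can be used programmatically to pick configuration names; a small sketch (abridged to two tasks) of flattening it, together with a common fallback for evaluation code that expects a list of references even on training rows:

```python
# Abridged copy of the task -> subset mapping from this card (two tasks only).
TASK_SUBSETS = {
    "summarization": {
        "mlsum": ["mlsum_de", "mlsum_es"],
        "xsum": ["xsum"],
    },
    "struct2text": {
        "common_gen": ["common_gen"],
        "web_nlg": ["web_nlg_en", "web_nlg_ru"],
    },
}

def configs_for_task(task):
    """Flatten the nested mapping into the list of GEM config names for a task."""
    return [cfg for subsets in TASK_SUBSETS[task].values() for cfg in subsets]

def references_of(example):
    """Validation/test rows carry a non-empty `references` list; training rows
    only have `target`, so fall back to a singleton list of the target."""
    return example["references"] or [example["target"]]

assert configs_for_task("summarization") == ["mlsum_de", "mlsum_es", "xsum"]
assert references_of({"target": "t", "references": []}) == ["t"]
```

Each config name returned this way corresponds to one loadable configuration (e.g. via `datasets.load_dataset("gem", "common_gen")`).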
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### common_gen - **Size of downloaded dataset files:** 1.85 MB - **Size of the generated dataset:** 9.23 MB - **Total amount of disk used:** 11.07 MB An example of `validation` looks as follows. ``` {'concept_set_id': 0, 'concepts': ['field', 'look', 'stand'], 'gem_id': 'common_gen-validation-0', 'references': ['The player stood in the field looking at the batter.', 'The coach stands along the field, looking at the goalkeeper.', 'I stood and looked across the field, peacefully.', 'Someone stands, looking around the empty field.'], 'target': 'The player stood in the field looking at the batter.'} ``` #### cs_restaurants - **Size of downloaded dataset files:** 1.47 MB - **Size of the generated dataset:** 1.31 MB - **Total amount of disk used:** 2.77 MB An example of `validation` looks as follows. ``` {'dialog_act': '?request(area)', 'dialog_act_delexicalized': '?request(area)', 'gem_id': 'cs_restaurants-validation-0', 'references': ['Jakou lokalitu hledáte ?'], 'target': 'Jakou lokalitu hledáte ?', 'target_delexicalized': 'Jakou lokalitu hledáte ?'} ``` #### dart - **Size of downloaded dataset files:** 29.37 MB - **Size of the generated dataset:** 27.44 MB - **Total amount of disk used:** 56.81 MB An example of `validation` looks as follows. 
``` {'dart_id': 0, 'gem_id': 'dart-validation-0', 'references': ['A school from Mars Hill, North Carolina, joined in 1973.'], 'subtree_was_extended': True, 'target': 'A school from Mars Hill, North Carolina, joined in 1973.', 'target_sources': ['WikiSQL_decl_sents'], 'tripleset': [['Mars Hill College', 'JOINED', '1973'], ['Mars Hill College', 'LOCATION', 'Mars Hill, North Carolina']]} ``` #### e2e_nlg - **Size of downloaded dataset files:** 14.60 MB - **Size of the generated dataset:** 12.14 MB - **Total amount of disk used:** 26.74 MB An example of `validation` looks as follows. ``` {'gem_id': 'e2e_nlg-validation-0', 'meaning_representation': 'name[Alimentum], area[city centre], familyFriendly[no]', 'references': ['There is a place in the city centre, Alimentum, that is not family-friendly.'], 'target': 'There is a place in the city centre, Alimentum, that is not family-friendly.'} ``` #### mlsum_de - **Size of downloaded dataset files:** 347.36 MB - **Size of the generated dataset:** 951.06 MB - **Total amount of disk used:** 1.30 GB An example of `validation` looks as follows. ``` {'date': '00/04/2019', 'gem_id': 'mlsum_de-validation-0', 'references': ['In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ihrer Wohnung gefunden worden. Nun stehen zwei Bekannte unter Verdacht.'], 'target': 'In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ihrer Wohnung gefunden worden. Nun stehen zwei Bekannte unter Verdacht.', 'text': 'Kerzen und Blumen stehen vor dem Eingang eines Hauses, in dem eine 18-jährige Frau tot aufgefunden wurde. 
In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ...', 'title': 'Tod von 18-Jähriger auf Usedom: Zwei Festnahmen', 'topic': 'panorama', 'url': 'https://www.sueddeutsche.de/panorama/usedom-frau-tot-festnahme-verdaechtige-1.4412256'} ``` #### mlsum_es - **Size of downloaded dataset files:** 514.11 MB - **Size of the generated dataset:** 1.31 GB - **Total amount of disk used:** 1.83 GB An example of `validation` looks as follows. ``` {'date': '05/01/2019', 'gem_id': 'mlsum_es-validation-0', 'references': ['El diseñador que dio carta de naturaleza al estilo genuinamente americano celebra el medio siglo de su marca entre grandes fastos y problemas financieros. Conectar con las nuevas generaciones es el regalo que precisa más que nunca'], 'target': 'El diseñador que dio carta de naturaleza al estilo genuinamente americano celebra el medio siglo de su marca entre grandes fastos y problemas financieros. Conectar con las nuevas generaciones es el regalo que precisa más que nunca', 'text': 'Un oso de peluche marcándose un heelflip de monopatín es todo lo que Ralph Lauren necesitaba esta Navidad. Estampado en un jersey de lana azul marino, supone la guinda que corona ...', 'title': 'Ralph Lauren busca el secreto de la eterna juventud', 'topic': 'elpais estilo', 'url': 'http://elpais.com/elpais/2019/01/04/estilo/1546617396_933318.html'} ``` #### schema_guided_dialog - **Size of downloaded dataset files:** 8.64 MB - **Size of the generated dataset:** 45.78 MB - **Total amount of disk used:** 54.43 MB An example of `validation` looks as follows. 
``` {'dialog_acts': [{'act': 2, 'slot': 'song_name', 'values': ['Carnivore']}, {'act': 2, 'slot': 'playback_device', 'values': ['TV']}], 'dialog_id': '10_00054', 'gem_id': 'schema_guided_dialog-validation-0', 'prompt': 'Yes, I would.', 'references': ['Please confirm the song Carnivore on tv.'], 'target': 'Please confirm the song Carnivore on tv.', 'turn_id': 15} ``` #### totto - **Size of downloaded dataset files:** 187.73 MB - **Size of the generated dataset:** 757.99 MB - **Total amount of disk used:** 945.72 MB An example of `validation` looks as follows. ``` {'example_id': '7391450717765563190', 'gem_id': 'totto-validation-0', 'highlighted_cells': [[3, 0], [3, 2], [3, 3]], 'overlap_subset': 'True', 'references': ['Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'Daniel Henry Chamberlain was the 76th Governor of South Carolina, beginning in 1874.', 'Daniel Henry Chamberlain was the 76th Governor of South Carolina who took office in 1874.'], 'sentence_annotations': [{'final_sentence': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'original_sentence': 'Daniel Henry Chamberlain (June 23, 1835 – April 13, 1907) was an American planter, lawyer, author and the 76th Governor of South Carolina ' 'from 1874 until 1877.', 'sentence_after_ambiguity': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'sentence_after_deletion': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.'}, ... 
], 'table': [[{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '#'}, {'column_span': 2, 'is_header': True, 'row_span': 1, 'value': 'Governor'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Took Office'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Left Office'}], [{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '74'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '-'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Robert Kingston Scott'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'July 6, 1868'}], ... ], 'table_page_title': 'List of Governors of South Carolina', 'table_section_text': 'Parties Democratic Republican', 'table_section_title': 'Governors under the Constitution of 1868', 'table_webpage_url': 'http://en.wikipedia.org/wiki/List_of_Governors_of_South_Carolina', 'target': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.', 'totto_id': 0} ``` #### web_nlg_en - **Size of downloaded dataset files:** 12.95 MB - **Size of the generated dataset:** 14.63 MB - **Total amount of disk used:** 27.57 MB An example of `validation` looks as follows. ``` {'category': 'Airport', 'gem_id': 'web_nlg_en-validation-0', 'input': ['Aarhus | leader | Jacob_Bundsgaard'], 'references': ['The leader of Aarhus is Jacob Bundsgaard.'], 'target': 'The leader of Aarhus is Jacob Bundsgaard.', 'webnlg_id': 'dev/Airport/1/Id1'} ``` #### web_nlg_ru - **Size of downloaded dataset files:** 7.63 MB - **Size of the generated dataset:** 8.41 MB - **Total amount of disk used:** 16.04 MB An example of `validation` looks as follows. 
``` {'category': 'Airport', 'gem_id': 'web_nlg_ru-validation-0', 'input': ['Punjab,_Pakistan | leaderTitle | Provincial_Assembly_of_the_Punjab'], 'references': ['Пенджаб, Пакистан, возглавляется Провинциальной ассамблеей Пенджаба.', 'Пенджаб, Пакистан возглавляется Провинциальной ассамблеей Пенджаба.'], 'target': 'Пенджаб, Пакистан, возглавляется Провинциальной ассамблеей Пенджаба.', 'webnlg_id': 'dev/Airport/1/Id1'} ``` #### wiki_auto_asset_turk - **Size of downloaded dataset files:** 127.27 MB - **Size of the generated dataset:** 152.77 MB - **Total amount of disk used:** 280.04 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_auto_asset_turk-validation-0', 'references': ['The Gandalf Awards honor excellent writing in in fantasy literature.'], 'source': 'The Gandalf Awards, honoring achievement in fantasy literature, were conferred by the World Science Fiction Society annually from 1974 to 1981.', 'source_id': '350_691837-1-0-0', 'target': 'The Gandalf Awards honor excellent writing in in fantasy literature.', 'target_id': '350_691837-0-0-0'} ``` #### wiki_lingua_es_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 287.60 MB - **Total amount of disk used:** 457.01 MB An example of `validation` looks as follows. ``` 'references': ["Practice matted hair prevention from early in your cat's life. Make sure that your cat is grooming itself effectively. Keep a close eye on cats with long hair."], 'source': 'Muchas personas presentan problemas porque no cepillaron el pelaje de sus gatos en una etapa temprana de su vida, ya que no lo consideraban necesario. Sin embargo, a medida que...', 'target': "Practice matted hair prevention from early in your cat's life. Make sure that your cat is grooming itself effectively. 
Keep a close eye on cats with long hair."} ``` #### wiki_lingua_ru_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 211.21 MB - **Total amount of disk used:** 380.62 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_lingua_ru_en-val-0', 'references': ['Get immediate medical care if you notice signs of a complication. Undergo diagnostic tests to check for gallstones and complications. Ask your doctor about your treatment ' 'options.'], 'source': 'И хотя, скорее всего, вам не о чем волноваться, следует незамедлительно обратиться к врачу, если вы подозреваете, что у вас возникло осложнение желчекаменной болезни. Это ...', 'target': 'Get immediate medical care if you notice signs of a complication. Undergo diagnostic tests to check for gallstones and complications. Ask your doctor about your treatment ' 'options.'} ``` #### wiki_lingua_tr_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 10.35 MB - **Total amount of disk used:** 179.75 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_lingua_tr_en-val-0', 'references': ['Open Instagram. Go to the video you want to download. Tap ⋮. Tap Copy Link. Open Google Chrome. Tap the address bar. Go to the SaveFromWeb site. Tap the "Paste Instagram Video" text box. Tap and hold the text box. Tap PASTE. Tap Download. Download the video. Find the video on your Android.'], 'source': 'Instagram uygulamasının çok renkli kamera şeklindeki simgesine dokun. Daha önce giriş yaptıysan Instagram haber kaynağı açılır. Giriş yapmadıysan istendiğinde e-posta adresini ...', 'target': 'Open Instagram. Go to the video you want to download. Tap ⋮. Tap Copy Link. Open Google Chrome. Tap the address bar. Go to the SaveFromWeb site. Tap the "Paste Instagram Video" text box. Tap and hold the text box. Tap PASTE. Tap Download. Download the video. 
Find the video on your Android.'} ``` #### wiki_lingua_vi_en - **Size of downloaded dataset files:** 169.41 MB - **Size of the generated dataset:** 41.02 MB - **Total amount of disk used:** 210.43 MB An example of `validation` looks as follows. ``` {'gem_id': 'wiki_lingua_vi_en-val-0', 'references': ['Select the right time of year for planting the tree. You will usually want to plant your tree when it is dormant, or not flowering, during cooler or colder times of year.'], 'source': 'Bạn muốn cung cấp cho cây cơ hội tốt nhất để phát triển và sinh tồn. Trồng cây đúng thời điểm trong năm chính là yếu tố then chốt. Thời điểm sẽ thay đổi phụ thuộc vào loài cây ...', 'target': 'Select the right time of year for planting the tree. You will usually want to plant your tree when it is dormant, or not flowering, during cooler or colder times of year.'} ``` #### xsum - **Size of downloaded dataset files:** 254.89 MB - **Size of the generated dataset:** 70.67 MB - **Total amount of disk used:** 325.56 MB An example of `validation` looks as follows. ``` {'document': 'Burberry reported pre-tax profits of £166m for the year to March. A year ago it made a loss of £16.1m, hit by charges at its Spanish operations.\n' 'In the past year it has opened 21 new stores and closed nine. It plans to open 20-30 stores this year worldwide.\n' 'The group has also focused on promoting the Burberry brand online...', 'gem_id': 'xsum-validation-0', 'references': ['Luxury fashion designer Burberry has returned to profit after opening new stores and spending more on online marketing'], 'target': 'Luxury fashion designer Burberry has returned to profit after opening new stores and spending more on online marketing', 'xsum_id': '10162122'} ``` ### Data Fields The data fields are the same among all splits. #### common_gen - `gem_id`: a `string` feature. - `concept_set_id`: a `int32` feature. - `concepts`: a `list` of `string` features. - `target`: a `string` feature. 
- `references`: a `list` of `string` features.

#### cs_restaurants

- `gem_id`: a `string` feature.
- `dialog_act`: a `string` feature.
- `dialog_act_delexicalized`: a `string` feature.
- `target_delexicalized`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### dart

- `gem_id`: a `string` feature.
- `dart_id`: an `int32` feature.
- `tripleset`: a `list` of `string` features.
- `subtree_was_extended`: a `bool` feature.
- `target_sources`: a `list` of `string` features.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### e2e_nlg

- `gem_id`: a `string` feature.
- `meaning_representation`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### mlsum_de

- `gem_id`: a `string` feature.
- `text`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### mlsum_es

- `gem_id`: a `string` feature.
- `text`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### schema_guided_dialog

- `gem_id`: a `string` feature.
- `act`: a classification label, with possible values including `AFFIRM` (0), `AFFIRM_INTENT` (1), `CONFIRM` (2), `GOODBYE` (3), `INFORM` (4).
- `slot`: a `string` feature.
- `values`: a `list` of `string` features.
- `dialog_id`: a `string` feature.
- `turn_id`: an `int32` feature.
- `prompt`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### totto

- `gem_id`: a `string` feature.
- `totto_id`: an `int32` feature.
- `table_page_title`: a `string` feature.
- `table_webpage_url`: a `string` feature.
- `table_section_title`: a `string` feature.
- `table_section_text`: a `string` feature.
- `column_span`: an `int32` feature.
- `is_header`: a `bool` feature.
- `row_span`: an `int32` feature.
- `value`: a `string` feature.
- `highlighted_cells`: a `list` of `int32` features.
- `example_id`: a `string` feature.
- `original_sentence`: a `string` feature.
- `sentence_after_deletion`: a `string` feature.
- `sentence_after_ambiguity`: a `string` feature.
- `final_sentence`: a `string` feature.
- `overlap_subset`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### web_nlg_en

- `gem_id`: a `string` feature.
- `input`: a `list` of `string` features.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
- `category`: a `string` feature.
- `webnlg_id`: a `string` feature.

#### web_nlg_ru

- `gem_id`: a `string` feature.
- `input`: a `list` of `string` features.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
- `category`: a `string` feature.
- `webnlg_id`: a `string` feature.

#### wiki_auto_asset_turk

- `gem_id`: a `string` feature.
- `source_id`: a `string` feature.
- `target_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### wiki_lingua_es_en

- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### wiki_lingua_ru_en

- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### wiki_lingua_tr_en

- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### wiki_lingua_vi_en

- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

#### xsum

- `gem_id`: a `string` feature.
- `xsum_id`: a `string` feature.
- `document`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.

### Data Splits

#### common_gen

|          |train|validation|test|
|----------|----:|---------:|---:|
|common_gen|67389|       993|1497|

#### cs_restaurants

|              |train|validation|test|
|--------------|----:|---------:|---:|
|cs_restaurants| 3569|       781| 842|

#### dart

|    |train|validation|test|
|----|----:|---------:|---:|
|dart|62659|      2768|6959|

#### e2e_nlg

|       |train|validation|test|
|-------|----:|---------:|---:|
|e2e_nlg|33525|      4299|4693|

#### mlsum_de

|        |train |validation|test |
|--------|-----:|---------:|----:|
|mlsum_de|220748|     11392|10695|

#### mlsum_es

|        |train |validation|test |
|--------|-----:|---------:|----:|
|mlsum_es|259886|      9977|13365|

#### schema_guided_dialog

|                    |train |validation|test |
|--------------------|-----:|---------:|----:|
|schema_guided_dialog|164982|     10000|10000|

#### totto

|     |train |validation|test|
|-----|-----:|---------:|---:|
|totto|121153|      7700|7700|

#### web_nlg_en

|          |train|validation|test|
|----------|----:|---------:|---:|
|web_nlg_en|35426|      1667|1779|

#### web_nlg_ru

|          |train|validation|test|
|----------|----:|---------:|---:|
|web_nlg_ru|14630|       790|1102|

#### wiki_auto_asset_turk

|                    |train |validation|test_asset|test_turk|
|--------------------|-----:|---------:|---------:|--------:|
|wiki_auto_asset_turk|373801|     73249|       359|      359|

#### wiki_lingua_es_en

|                 |train|validation|test |
|-----------------|----:|---------:|----:|
|wiki_lingua_es_en|79515|      8835|19797|

#### wiki_lingua_ru_en

|                 |train|validation|test|
|-----------------|----:|---------:|---:|
|wiki_lingua_ru_en|36898|      4100|9094|

#### wiki_lingua_tr_en

|                 |train|validation|test|
|-----------------|----:|---------:|---:|
|wiki_lingua_tr_en| 3193|       355| 808|

#### wiki_lingua_vi_en

|                 |train|validation|test|
|-----------------|----:|---------:|---:|
|wiki_lingua_vi_en| 9206|      1023|2167|

#### xsum

|    |train|validation|test|
|----|----:|---------:|---:|
|xsum|23206|      1117|1166|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

CC-BY-SA-4.0

### Citation Information

```
@article{gem_benchmark,
  author        = {Sebastian Gehrmann and Tosin P. Adewumi and Karmanya Aggarwal and
                   Pawan Sasanka Ammanamanchi and Aremu Anuoluwapo and Antoine Bosselut and
                   Khyathi Raghavi Chandu and Miruna{-}Adriana Clinciu and Dipanjan Das and
                   Kaustubh D. Dhole and Wanyu Du and Esin Durmus and Ondrej Dusek and
                   Chris Emezue and Varun Gangal and Cristina Garbacea and Tatsunori Hashimoto and
                   Yufang Hou and Yacine Jernite and Harsh Jhamtani and Yangfeng Ji and
                   Shailza Jolly and Dhruv Kumar and Faisal Ladhak and Aman Madaan and
                   Mounica Maddela and Khyati Mahajan and Saad Mahamood and
                   Bodhisattwa Prasad Majumder and Pedro Henrique Martins and
                   Angelina McMillan{-}Major and Simon Mille and Emiel van Miltenburg and
                   Moin Nadeem and Shashi Narayan and Vitaly Nikolaev and
                   Rubungo Andre Niyongabo and Salomey Osei and Ankur P. Parikh and
                   Laura Perez{-}Beltrachini and Niranjan Ramesh Rao and Vikas Raunak and
                   Juan Diego Rodriguez and Sashank Santhanam and Jo{\~{a}}o Sedoc and
                   Thibault Sellam and Samira Shaikh and Anastasia Shimorina and
                   Marco Antonio Sobrevilla Cabezudo and Hendrik Strobelt and
                   Nishant Subramani and Wei Xu and Diyi Yang and Akhila Yerukola and
                   Jiawei Zhou},
  title         = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and Metrics},
  journal       = {CoRR},
  volume        = {abs/2102.01672},
  year          = {2021},
  url           = {https://arxiv.org/abs/2102.01672},
  archivePrefix = {arXiv},
  eprint        = {2102.01672}
}
```

### Contributions

Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
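As the field listings above show, every configuration shares a common backbone: a `gem_id` string, a `target` string, and a `references` list of strings, plus config-specific extras. When iterating over examples (e.g. after `datasets.load_dataset("gem", "xsum")`), that layout can be sanity-checked with plain Python. A minimal sketch — the helper name and the sample record below are illustrative, not part of the dataset:

```python
# Structural check for GEM-style records: `gem_id` and `target` must be
# strings, `references` a list of strings; any config-specific string
# fields can be passed via `extra_str_fields`.
def is_valid_gem_record(record, extra_str_fields=()):
    for field in ("gem_id", "target", *extra_str_fields):
        if not isinstance(record.get(field), str):
            return False
    refs = record.get("references")
    return isinstance(refs, list) and all(isinstance(r, str) for r in refs)

# Illustrative xsum-shaped record (values echo the example shown earlier).
sample = {
    "gem_id": "xsum-validation-0",
    "xsum_id": "10162122",
    "document": "Burberry reported pre-tax profits of £166m for the year to March.",
    "target": "Luxury fashion designer Burberry has returned to profit.",
    "references": ["Luxury fashion designer Burberry has returned to profit."],
}
print(is_valid_gem_record(sample, extra_str_fields=("xsum_id", "document")))  # True
```

For another config, only `extra_str_fields` changes (e.g. `("dialog_act",)` for `cs_restaurants`), since the shared `gem_id`/`target`/`references` backbone is identical across splits and configurations.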
"challenge_test_turk_backtranslation", "num_bytes": 417204, "num_examples": 359}, {"name": "challenge_test_turk_bfp02", "num_bytes": 414381, "num_examples": 359}, {"name": "challenge_test_turk_bfp05", "num_bytes": 414383, "num_examples": 359}, {"name": "challenge_test_turk_nopunc", "num_bytes": 414388, "num_examples": 359}], "download_size": 126927527, "dataset_size": 174016850}, {"config_name": "schema_guided_dialog", "features": [{"name": "gem_id", "dtype": "string"}, {"name": "gem_parent_id", "dtype": "string"}, {"name": "dialog_acts", "list": [{"name": "act", "dtype": {"class_label": {"names": {"0": "AFFIRM", "1": "AFFIRM_INTENT", "2": "CONFIRM", "3": "GOODBYE", "4": "INFORM", "5": "INFORM_COUNT", "6": "INFORM_INTENT", "7": "NEGATE", "8": "NEGATE_INTENT", "9": "NOTIFY_FAILURE", "10": "NOTIFY_SUCCESS", "11": "OFFER", "12": "OFFER_INTENT", "13": "REQUEST", "14": "REQUEST_ALTS", "15": "REQ_MORE", "16": "SELECT", "17": "THANK_YOU"}}}}, {"name": "slot", "dtype": "string"}, {"name": "values", "list": "string"}]}, {"name": "context", "list": "string"}, {"name": "dialog_id", "dtype": "string"}, {"name": "service", "dtype": "string"}, {"name": "turn_id", "dtype": "int32"}, {"name": "prompt", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "references", "list": "string"}], "splits": [{"name": "train", "num_bytes": 146648117, "num_examples": 164982}, {"name": "validation", "num_bytes": 9376504, "num_examples": 10000}, {"name": "test", "num_bytes": 10160596, "num_examples": 10000}, {"name": "challenge_train_sample", "num_bytes": 441326, "num_examples": 500}, {"name": "challenge_validation_sample", "num_bytes": 491492, "num_examples": 500}, {"name": "challenge_test_backtranslation", "num_bytes": 512834, "num_examples": 500}, {"name": "challenge_test_bfp02", "num_bytes": 529404, "num_examples": 500}, {"name": "challenge_test_bfp05", "num_bytes": 515151, "num_examples": 500}, {"name": "challenge_test_nopunc", "num_bytes": 509332, "num_examples": 500}, 
{"name": "challenge_test_scramble", "num_bytes": 514644, "num_examples": 500}], "download_size": 17826468, "dataset_size": 169699400}]}
2024-01-18T11:04:05+00:00
[ "2102.01672" ]
[ "cs", "de", "en", "es", "ru", "tr", "vi" ]
TAGS #task_categories-fill-mask #task_categories-summarization #task_categories-table-to-text #task_categories-tabular-to-text #task_categories-text-generation #task_categories-text2text-generation #task_ids-dialogue-modeling #task_ids-rdf-to-text #task_ids-news-articles-summarization #task_ids-text-simplification #annotations_creators-crowdsourced #annotations_creators-found #language_creators-crowdsourced #language_creators-found #language_creators-machine-generated #multilinguality-monolingual #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-extended|other-vision-datasets #source_datasets-original #language-Czech #language-German #language-English #language-Spanish #language-Russian #language-Turkish #language-Vietnamese #license-other #intent-to-text #meaning-representation-to-text #concepts-to-text #arxiv-2102.01672 #region-us
Dataset Card for GEM ==================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics * Point of Contact: Sebastian Gehrmann * Size of downloaded dataset files: 2.19 GB * Size of the generated dataset: 3.92 GB * Total amount of disk used: 6.10 GB ### Dataset Summary GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics. GEM aims to: * measure NLG progress across 13 datasets spanning many NLG tasks and languages. * provide an in-depth analysis of data and models presented via data statements and challenge sets. * develop standards for evaluation of generated text using both automated and human metrics. It is our goal to regularly update GEM and to encourage more inclusive practices in dataset development by extending existing data or developing datasets for additional languages. You can find more complete information in the dataset cards for each of the subsets: * CommonGen * Czech Restaurant * DART * E2E * MLSum * Schema-Guided Dialog * WebNLG * Wiki-Auto/ASSET/TURK * WikiLingua * XSum The subsets are organized by task: Each training example has one 'target', and each validation and test example has a set of 'references' (with one or more items). 
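The target/references convention described above can be sketched in plain Python. This is a minimal illustration, not part of the GEM loaders: `collect_references` is a hypothetical helper, and the fallback from an empty `references` list to the single `target` is an assumption for train-style examples, where only one gold text is available.

```python
def collect_references(example):
    """Return the list of reference texts to score a prediction against.

    Per the card: training examples carry a single `target`, while
    validation and test examples carry one or more `references`.
    """
    # Assumed fallback: use the lone target when no references exist.
    return example["references"] if example["references"] else [example["target"]]


# A train-style example (single target, no references):
train_ex = {"target": "A dog runs across the field.", "references": []}
# A validation-style example (one or more references):
val_ex = {
    "target": "A dog runs across the field.",
    "references": ["A dog runs across the field.", "The dog ran over the field."],
}

print(collect_references(train_ex))       # ['A dog runs across the field.']
print(len(collect_references(val_ex)))    # 2
```

Evaluation code for the validation and test splits should always iterate over `references` rather than `target`, since several subsets provide multiple references per input.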
### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### common\_gen * Size of downloaded dataset files: 1.85 MB * Size of the generated dataset: 9.23 MB * Total amount of disk used: 11.07 MB An example of 'validation' looks as follows. #### cs\_restaurants * Size of downloaded dataset files: 1.47 MB * Size of the generated dataset: 1.31 MB * Total amount of disk used: 2.77 MB An example of 'validation' looks as follows. #### dart * Size of downloaded dataset files: 29.37 MB * Size of the generated dataset: 27.44 MB * Total amount of disk used: 56.81 MB An example of 'validation' looks as follows. #### e2e\_nlg * Size of downloaded dataset files: 14.60 MB * Size of the generated dataset: 12.14 MB * Total amount of disk used: 26.74 MB An example of 'validation' looks as follows. #### mlsum\_de * Size of downloaded dataset files: 347.36 MB * Size of the generated dataset: 951.06 MB * Total amount of disk used: 1.30 GB An example of 'validation' looks as follows. #### mlsum\_es * Size of downloaded dataset files: 514.11 MB * Size of the generated dataset: 1.31 GB * Total amount of disk used: 1.83 GB An example of 'validation' looks as follows. #### schema\_guided\_dialog * Size of downloaded dataset files: 8.64 MB * Size of the generated dataset: 45.78 MB * Total amount of disk used: 54.43 MB An example of 'validation' looks as follows. #### totto * Size of downloaded dataset files: 187.73 MB * Size of the generated dataset: 757.99 MB * Total amount of disk used: 945.72 MB An example of 'validation' looks as follows. #### web\_nlg\_en * Size of downloaded dataset files: 12.95 MB * Size of the generated dataset: 14.63 MB * Total amount of disk used: 27.57 MB An example of 'validation' looks as follows. #### web\_nlg\_ru * Size of downloaded dataset files: 7.63 MB * Size of the generated dataset: 8.41 MB * Total amount of disk used: 16.04 MB An example of 'validation' looks as follows. 
#### wiki\_auto\_asset\_turk * Size of downloaded dataset files: 127.27 MB * Size of the generated dataset: 152.77 MB * Total amount of disk used: 280.04 MB An example of 'validation' looks as follows. #### wiki\_lingua\_es\_en * Size of downloaded dataset files: 169.41 MB * Size of the generated dataset: 287.60 MB * Total amount of disk used: 457.01 MB An example of 'validation' looks as follows. #### wiki\_lingua\_ru\_en * Size of downloaded dataset files: 169.41 MB * Size of the generated dataset: 211.21 MB * Total amount of disk used: 380.62 MB An example of 'validation' looks as follows. #### wiki\_lingua\_tr\_en * Size of downloaded dataset files: 169.41 MB * Size of the generated dataset: 10.35 MB * Total amount of disk used: 179.75 MB An example of 'validation' looks as follows. #### wiki\_lingua\_vi\_en * Size of downloaded dataset files: 169.41 MB * Size of the generated dataset: 41.02 MB * Total amount of disk used: 210.43 MB An example of 'validation' looks as follows. #### xsum * Size of downloaded dataset files: 254.89 MB * Size of the generated dataset: 70.67 MB * Total amount of disk used: 325.56 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### common\_gen * 'gem\_id': a 'string' feature. * 'concept\_set\_id': a 'int32' feature. * 'concepts': a 'list' of 'string' features. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### cs\_restaurants * 'gem\_id': a 'string' feature. * 'dialog\_act': a 'string' feature. * 'dialog\_act\_delexicalized': a 'string' feature. * 'target\_delexicalized': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### dart * 'gem\_id': a 'string' feature. * 'dart\_id': a 'int32' feature. * 'tripleset': a 'list' of 'string' features. * 'subtree\_was\_extended': a 'bool' feature. * 'target\_sources': a 'list' of 'string' features. * 'target': a 'string' feature. 
* 'references': a 'list' of 'string' features. #### e2e\_nlg * 'gem\_id': a 'string' feature. * 'meaning\_representation': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### mlsum\_de * 'gem\_id': a 'string' feature. * 'text': a 'string' feature. * 'topic': a 'string' feature. * 'url': a 'string' feature. * 'title': a 'string' feature. * 'date': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### mlsum\_es * 'gem\_id': a 'string' feature. * 'text': a 'string' feature. * 'topic': a 'string' feature. * 'url': a 'string' feature. * 'title': a 'string' feature. * 'date': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### schema\_guided\_dialog * 'gem\_id': a 'string' feature. * 'act': a classification label, with possible values including 'AFFIRM' (0), 'AFFIRM\_INTENT' (1), 'CONFIRM' (2), 'GOODBYE' (3), 'INFORM' (4). * 'slot': a 'string' feature. * 'values': a 'list' of 'string' features. * 'dialog\_id': a 'string' feature. * 'turn\_id': a 'int32' feature. * 'prompt': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### totto * 'gem\_id': a 'string' feature. * 'totto\_id': a 'int32' feature. * 'table\_page\_title': a 'string' feature. * 'table\_webpage\_url': a 'string' feature. * 'table\_section\_title': a 'string' feature. * 'table\_section\_text': a 'string' feature. * 'column\_span': a 'int32' feature. * 'is\_header': a 'bool' feature. * 'row\_span': a 'int32' feature. * 'value': a 'string' feature. * 'highlighted\_cells': a 'list' of 'int32' features. * 'example\_id': a 'string' feature. * 'original\_sentence': a 'string' feature. * 'sentence\_after\_deletion': a 'string' feature. * 'sentence\_after\_ambiguity': a 'string' feature. * 'final\_sentence': a 'string' feature. * 'overlap\_subset': a 'string' feature. * 'target': a 'string' feature. 
* 'references': a 'list' of 'string' features. #### web\_nlg\_en * 'gem\_id': a 'string' feature. * 'input': a 'list' of 'string' features. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. * 'category': a 'string' feature. * 'webnlg\_id': a 'string' feature. #### web\_nlg\_ru * 'gem\_id': a 'string' feature. * 'input': a 'list' of 'string' features. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. * 'category': a 'string' feature. * 'webnlg\_id': a 'string' feature. #### wiki\_auto\_asset\_turk * 'gem\_id': a 'string' feature. * 'source\_id': a 'string' feature. * 'target\_id': a 'string' feature. * 'source': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### wiki\_lingua\_es\_en * 'gem\_id': a 'string' feature. * 'source': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### wiki\_lingua\_ru\_en * 'gem\_id': a 'string' feature. * 'source': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### wiki\_lingua\_tr\_en * 'gem\_id': a 'string' feature. * 'source': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### wiki\_lingua\_vi\_en * 'gem\_id': a 'string' feature. * 'source': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. #### xsum * 'gem\_id': a 'string' feature. * 'xsum\_id': a 'string' feature. * 'document': a 'string' feature. * 'target': a 'string' feature. * 'references': a 'list' of 'string' features. 
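The schema\_guided\_dialog `act` field is a classification label; the card only lists the first five values, but the full id-to-name mapping can be reconstructed from the `class_label` definition in this card's config metadata. A sketch (the list below is copied from that metadata; `act_name` is an illustrative helper, not a library function):

```python
# id -> name mapping for the schema_guided_dialog `act` ClassLabel,
# copied from the config metadata in this card.
SGD_ACT_NAMES = [
    "AFFIRM", "AFFIRM_INTENT", "CONFIRM", "GOODBYE", "INFORM",
    "INFORM_COUNT", "INFORM_INTENT", "NEGATE", "NEGATE_INTENT",
    "NOTIFY_FAILURE", "NOTIFY_SUCCESS", "OFFER", "OFFER_INTENT",
    "REQUEST", "REQUEST_ALTS", "REQ_MORE", "SELECT", "THANK_YOU",
]


def act_name(act_id: int) -> str:
    """Map an integer `act` label back to its string name."""
    return SGD_ACT_NAMES[act_id]


print(act_name(4))          # INFORM
print(len(SGD_ACT_NAMES))   # 18
```

When loading with the `datasets` library, the same mapping is also exposed on the feature itself (a `ClassLabel`'s `names`/`int2str`), so hard-coding it is only needed when working with raw integer labels.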
### Data Splits #### common\_gen #### cs\_restaurants #### dart #### e2e\_nlg #### mlsum\_de #### mlsum\_es #### schema\_guided\_dialog #### totto #### web\_nlg\_en #### web\_nlg\_ru #### wiki\_auto\_asset\_turk #### wiki\_lingua\_es\_en #### wiki\_lingua\_ru\_en #### wiki\_lingua\_tr\_en #### wiki\_lingua\_vi\_en #### xsum Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information CC-BY-SA-4.0 ### Contributions Thanks to @yjernite for adding this dataset.
[ "### Dataset Summary\n\n\nGEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation,\nboth through human annotations and automated Metrics.\n\n\nGEM aims to:\n\n\n* measure NLG progress across 13 datasets spanning many NLG tasks and languages.\n* provide an in-depth analysis of data and models presented via data statements and challenge sets.\n* develop standards for evaluation of generated text using both automated and human metrics.\n\n\nIt is our goal to regularly update GEM and to encourage toward more inclusive practices in dataset development\nby extending existing data or developing datasets for additional languages.\n\n\nYou can find more complete information in the dataset cards for each of the subsets:\n\n\n* CommonGen\n* Czech Restaurant\n* DART\n* E2E\n* MLSum\n* Schema-Guided Dialog\n* WebNLG\n* Wiki-Auto/ASSET/TURK\n* WikiLingua\n* XSum\n\n\nThe subsets are organized by task:\n\n\nEach example has one 'target' per example in its training set, and a set of 'references' (with one or more items) in its validation and test set.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### common\\_gen\n\n\n* Size of downloaded dataset files: 1.85 MB\n* Size of the generated dataset: 9.23 MB\n* Total amount of disk used: 11.07 MB\n\n\nAn example of 'validation' looks as follows.", "#### cs\\_restaurants\n\n\n* Size of downloaded dataset files: 1.47 MB\n* Size of the generated dataset: 1.31 MB\n* Total amount of disk used: 2.77 MB\n\n\nAn example of 'validation' looks as follows.", "#### dart\n\n\n* Size of downloaded dataset files: 29.37 MB\n* Size of the generated dataset: 27.44 MB\n* Total amount of disk used: 56.81 MB\n\n\nAn example of 'validation' looks as follows.", "#### e2e\\_nlg\n\n\n* Size of downloaded dataset files: 14.60 MB\n* Size of the generated dataset: 12.14 MB\n* Total amount of disk used: 26.74 MB\n\n\nAn example of 'validation' 
looks as follows.", "#### mlsum\\_de\n\n\n* Size of downloaded dataset files: 347.36 MB\n* Size of the generated dataset: 951.06 MB\n* Total amount of disk used: 1.30 GB\n\n\nAn example of 'validation' looks as follows.", "#### mlsum\\_es\n\n\n* Size of downloaded dataset files: 514.11 MB\n* Size of the generated dataset: 1.31 GB\n* Total amount of disk used: 1.83 GB\n\n\nAn example of 'validation' looks as follows.", "#### schema\\_guided\\_dialog\n\n\n* Size of downloaded dataset files: 8.64 MB\n* Size of the generated dataset: 45.78 MB\n* Total amount of disk used: 54.43 MB\n\n\nAn example of 'validation' looks as follows.", "#### totto\n\n\n* Size of downloaded dataset files: 187.73 MB\n* Size of the generated dataset: 757.99 MB\n* Total amount of disk used: 945.72 MB\n\n\nAn example of 'validation' looks as follows.", "#### web\\_nlg\\_en\n\n\n* Size of downloaded dataset files: 12.95 MB\n* Size of the generated dataset: 14.63 MB\n* Total amount of disk used: 27.57 MB\n\n\nAn example of 'validation' looks as follows.", "#### web\\_nlg\\_ru\n\n\n* Size of downloaded dataset files: 7.63 MB\n* Size of the generated dataset: 8.41 MB\n* Total amount of disk used: 16.04 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_auto\\_asset\\_turk\n\n\n* Size of downloaded dataset files: 127.27 MB\n* Size of the generated dataset: 152.77 MB\n* Total amount of disk used: 280.04 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_es\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the generated dataset: 287.60 MB\n* Total amount of disk used: 457.01 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_ru\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the generated dataset: 211.21 MB\n* Total amount of disk used: 380.62 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_tr\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the 
generated dataset: 10.35 MB\n* Total amount of disk used: 179.75 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_vi\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the generated dataset: 41.02 MB\n* Total amount of disk used: 210.43 MB\n\n\nAn example of 'validation' looks as follows.", "#### xsum\n\n\n* Size of downloaded dataset files: 254.89 MB\n* Size of the generated dataset: 70.67 MB\n* Total amount of disk used: 325.56 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### common\\_gen\n\n\n* 'gem\\_id': a 'string' feature.\n* 'concept\\_set\\_id': a 'int32' feature.\n* 'concepts': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### cs\\_restaurants\n\n\n* 'gem\\_id': a 'string' feature.\n* 'dialog\\_act': a 'string' feature.\n* 'dialog\\_act\\_delexicalized': a 'string' feature.\n* 'target\\_delexicalized': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### dart\n\n\n* 'gem\\_id': a 'string' feature.\n* 'dart\\_id': a 'int32' feature.\n* 'tripleset': a 'list' of 'string' features.\n* 'subtree\\_was\\_extended': a 'bool' feature.\n* 'target\\_sources': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### e2e\\_nlg\n\n\n* 'gem\\_id': a 'string' feature.\n* 'meaning\\_representation': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### mlsum\\_de\n\n\n* 'gem\\_id': a 'string' feature.\n* 'text': a 'string' feature.\n* 'topic': a 'string' feature.\n* 'url': a 'string' feature.\n* 'title': a 'string' feature.\n* 'date': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### mlsum\\_es\n\n\n* 'gem\\_id': a 'string' feature.\n* 'text': a 
'string' feature.\n* 'topic': a 'string' feature.\n* 'url': a 'string' feature.\n* 'title': a 'string' feature.\n* 'date': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### schema\\_guided\\_dialog\n\n\n* 'gem\\_id': a 'string' feature.\n* 'act': a classification label, with possible values including 'AFFIRM' (0), 'AFFIRM\\_INTENT' (1), 'CONFIRM' (2), 'GOODBYE' (3), 'INFORM' (4).\n* 'slot': a 'string' feature.\n* 'values': a 'list' of 'string' features.\n* 'dialog\\_id': a 'string' feature.\n* 'turn\\_id': a 'int32' feature.\n* 'prompt': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### totto\n\n\n* 'gem\\_id': a 'string' feature.\n* 'totto\\_id': a 'int32' feature.\n* 'table\\_page\\_title': a 'string' feature.\n* 'table\\_webpage\\_url': a 'string' feature.\n* 'table\\_section\\_title': a 'string' feature.\n* 'table\\_section\\_text': a 'string' feature.\n* 'column\\_span': a 'int32' feature.\n* 'is\\_header': a 'bool' feature.\n* 'row\\_span': a 'int32' feature.\n* 'value': a 'string' feature.\n* 'highlighted\\_cells': a 'list' of 'int32' features.\n* 'example\\_id': a 'string' feature.\n* 'original\\_sentence': a 'string' feature.\n* 'sentence\\_after\\_deletion': a 'string' feature.\n* 'sentence\\_after\\_ambiguity': a 'string' feature.\n* 'final\\_sentence': a 'string' feature.\n* 'overlap\\_subset': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### web\\_nlg\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'input': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.\n* 'category': a 'string' feature.\n* 'webnlg\\_id': a 'string' feature.", "#### web\\_nlg\\_ru\n\n\n* 'gem\\_id': a 'string' feature.\n* 'input': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.\n* 
'category': a 'string' feature.\n* 'webnlg\\_id': a 'string' feature.", "#### wiki\\_auto\\_asset\\_turk\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source\\_id': a 'string' feature.\n* 'target\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_es\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_ru\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_tr\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_vi\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### xsum\n\n\n* 'gem\\_id': a 'string' feature.\n* 'xsum\\_id': a 'string' feature.\n* 'document': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "### Data Splits", "#### common\\_gen", "#### cs\\_restaurants", "#### dart", "#### e2e\\_nlg", "#### mlsum\\_de", "#### mlsum\\_es", "#### schema\\_guided\\_dialog", "#### totto", "#### web\\_nlg\\_en", "#### web\\_nlg\\_ru", "#### wiki\\_auto\\_asset\\_turk", "#### wiki\\_lingua\\_es\\_en", "#### wiki\\_lingua\\_ru\\_en", "#### wiki\\_lingua\\_tr\\_en", "#### wiki\\_lingua\\_vi\\_en", "#### xsum\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive 
Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCC-BY-SA-4.0", "### Contributions\n\n\nThanks to @yjernite for adding this dataset." ]
[ "TAGS\n#task_categories-fill-mask #task_categories-summarization #task_categories-table-to-text #task_categories-tabular-to-text #task_categories-text-generation #task_categories-text2text-generation #task_ids-dialogue-modeling #task_ids-rdf-to-text #task_ids-news-articles-summarization #task_ids-text-simplification #annotations_creators-crowdsourced #annotations_creators-found #language_creators-crowdsourced #language_creators-found #language_creators-machine-generated #multilinguality-monolingual #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-extended|other-vision-datasets #source_datasets-original #language-Czech #language-German #language-English #language-Spanish #language-Russian #language-Turkish #language-Vietnamese #license-other #intent-to-text #meaning-representation-to-text #concepts-to-text #arxiv-2102.01672 #region-us \n", "### Dataset Summary\n\n\nGEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation,\nboth through human annotations and automated Metrics.\n\n\nGEM aims to:\n\n\n* measure NLG progress across 13 datasets spanning many NLG tasks and languages.\n* provide an in-depth analysis of data and models presented via data statements and challenge sets.\n* develop standards for evaluation of generated text using both automated and human metrics.\n\n\nIt is our goal to regularly update GEM and to encourage toward more inclusive practices in dataset development\nby extending existing data or developing datasets for additional languages.\n\n\nYou can find more complete information in the dataset cards for each of the subsets:\n\n\n* CommonGen\n* Czech Restaurant\n* DART\n* E2E\n* MLSum\n* Schema-Guided Dialog\n* WebNLG\n* Wiki-Auto/ASSET/TURK\n* WikiLingua\n* XSum\n\n\nThe subsets are organized by task:\n\n\nEach example has one 'target' per example in its training set, and a set of 'references' (with one or more items) in its 
validation and test set.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### common\\_gen\n\n\n* Size of downloaded dataset files: 1.85 MB\n* Size of the generated dataset: 9.23 MB\n* Total amount of disk used: 11.07 MB\n\n\nAn example of 'validation' looks as follows.", "#### cs\\_restaurants\n\n\n* Size of downloaded dataset files: 1.47 MB\n* Size of the generated dataset: 1.31 MB\n* Total amount of disk used: 2.77 MB\n\n\nAn example of 'validation' looks as follows.", "#### dart\n\n\n* Size of downloaded dataset files: 29.37 MB\n* Size of the generated dataset: 27.44 MB\n* Total amount of disk used: 56.81 MB\n\n\nAn example of 'validation' looks as follows.", "#### e2e\\_nlg\n\n\n* Size of downloaded dataset files: 14.60 MB\n* Size of the generated dataset: 12.14 MB\n* Total amount of disk used: 26.74 MB\n\n\nAn example of 'validation' looks as follows.", "#### mlsum\\_de\n\n\n* Size of downloaded dataset files: 347.36 MB\n* Size of the generated dataset: 951.06 MB\n* Total amount of disk used: 1.30 GB\n\n\nAn example of 'validation' looks as follows.", "#### mlsum\\_es\n\n\n* Size of downloaded dataset files: 514.11 MB\n* Size of the generated dataset: 1.31 GB\n* Total amount of disk used: 1.83 GB\n\n\nAn example of 'validation' looks as follows.", "#### schema\\_guided\\_dialog\n\n\n* Size of downloaded dataset files: 8.64 MB\n* Size of the generated dataset: 45.78 MB\n* Total amount of disk used: 54.43 MB\n\n\nAn example of 'validation' looks as follows.", "#### totto\n\n\n* Size of downloaded dataset files: 187.73 MB\n* Size of the generated dataset: 757.99 MB\n* Total amount of disk used: 945.72 MB\n\n\nAn example of 'validation' looks as follows.", "#### web\\_nlg\\_en\n\n\n* Size of downloaded dataset files: 12.95 MB\n* Size of the generated dataset: 14.63 MB\n* Total amount of disk used: 27.57 MB\n\n\nAn example of 'validation' looks as follows.", "#### web\\_nlg\\_ru\n\n\n* 
Size of downloaded dataset files: 7.63 MB\n* Size of the generated dataset: 8.41 MB\n* Total amount of disk used: 16.04 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_auto\\_asset\\_turk\n\n\n* Size of downloaded dataset files: 127.27 MB\n* Size of the generated dataset: 152.77 MB\n* Total amount of disk used: 280.04 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_es\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the generated dataset: 287.60 MB\n* Total amount of disk used: 457.01 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_ru\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the generated dataset: 211.21 MB\n* Total amount of disk used: 380.62 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_tr\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the generated dataset: 10.35 MB\n* Total amount of disk used: 179.75 MB\n\n\nAn example of 'validation' looks as follows.", "#### wiki\\_lingua\\_vi\\_en\n\n\n* Size of downloaded dataset files: 169.41 MB\n* Size of the generated dataset: 41.02 MB\n* Total amount of disk used: 210.43 MB\n\n\nAn example of 'validation' looks as follows.", "#### xsum\n\n\n* Size of downloaded dataset files: 254.89 MB\n* Size of the generated dataset: 70.67 MB\n* Total amount of disk used: 325.56 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### common\\_gen\n\n\n* 'gem\\_id': a 'string' feature.\n* 'concept\\_set\\_id': a 'int32' feature.\n* 'concepts': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### cs\\_restaurants\n\n\n* 'gem\\_id': a 'string' feature.\n* 'dialog\\_act': a 'string' feature.\n* 'dialog\\_act\\_delexicalized': a 'string' feature.\n* 'target\\_delexicalized': a 'string' feature.\n* 'target': a 'string' 
feature.\n* 'references': a 'list' of 'string' features.", "#### dart\n\n\n* 'gem\\_id': a 'string' feature.\n* 'dart\\_id': a 'int32' feature.\n* 'tripleset': a 'list' of 'string' features.\n* 'subtree\\_was\\_extended': a 'bool' feature.\n* 'target\\_sources': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### e2e\\_nlg\n\n\n* 'gem\\_id': a 'string' feature.\n* 'meaning\\_representation': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### mlsum\\_de\n\n\n* 'gem\\_id': a 'string' feature.\n* 'text': a 'string' feature.\n* 'topic': a 'string' feature.\n* 'url': a 'string' feature.\n* 'title': a 'string' feature.\n* 'date': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### mlsum\\_es\n\n\n* 'gem\\_id': a 'string' feature.\n* 'text': a 'string' feature.\n* 'topic': a 'string' feature.\n* 'url': a 'string' feature.\n* 'title': a 'string' feature.\n* 'date': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### schema\\_guided\\_dialog\n\n\n* 'gem\\_id': a 'string' feature.\n* 'act': a classification label, with possible values including 'AFFIRM' (0), 'AFFIRM\\_INTENT' (1), 'CONFIRM' (2), 'GOODBYE' (3), 'INFORM' (4).\n* 'slot': a 'string' feature.\n* 'values': a 'list' of 'string' features.\n* 'dialog\\_id': a 'string' feature.\n* 'turn\\_id': a 'int32' feature.\n* 'prompt': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### totto\n\n\n* 'gem\\_id': a 'string' feature.\n* 'totto\\_id': a 'int32' feature.\n* 'table\\_page\\_title': a 'string' feature.\n* 'table\\_webpage\\_url': a 'string' feature.\n* 'table\\_section\\_title': a 'string' feature.\n* 'table\\_section\\_text': a 'string' feature.\n* 'column\\_span': a 'int32' feature.\n* 'is\\_header': a 'bool' feature.\n* 
'row\\_span': a 'int32' feature.\n* 'value': a 'string' feature.\n* 'highlighted\\_cells': a 'list' of 'int32' features.\n* 'example\\_id': a 'string' feature.\n* 'original\\_sentence': a 'string' feature.\n* 'sentence\\_after\\_deletion': a 'string' feature.\n* 'sentence\\_after\\_ambiguity': a 'string' feature.\n* 'final\\_sentence': a 'string' feature.\n* 'overlap\\_subset': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### web\\_nlg\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'input': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.\n* 'category': a 'string' feature.\n* 'webnlg\\_id': a 'string' feature.", "#### web\\_nlg\\_ru\n\n\n* 'gem\\_id': a 'string' feature.\n* 'input': a 'list' of 'string' features.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.\n* 'category': a 'string' feature.\n* 'webnlg\\_id': a 'string' feature.", "#### wiki\\_auto\\_asset\\_turk\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source\\_id': a 'string' feature.\n* 'target\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_es\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_ru\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_tr\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "#### wiki\\_lingua\\_vi\\_en\n\n\n* 'gem\\_id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' 
features.", "#### xsum\n\n\n* 'gem\\_id': a 'string' feature.\n* 'xsum\\_id': a 'string' feature.\n* 'document': a 'string' feature.\n* 'target': a 'string' feature.\n* 'references': a 'list' of 'string' features.", "### Data Splits", "#### common\\_gen", "#### cs\\_restaurants", "#### dart", "#### e2e\\_nlg", "#### mlsum\\_de", "#### mlsum\\_es", "#### schema\\_guided\\_dialog", "#### totto", "#### web\\_nlg\\_en", "#### web\\_nlg\\_ru", "#### wiki\\_auto\\_asset\\_turk", "#### wiki\\_lingua\\_es\\_en", "#### wiki\\_lingua\\_ru\\_en", "#### wiki\\_lingua\\_tr\\_en", "#### wiki\\_lingua\\_vi\\_en", "#### xsum\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCC-BY-SA-4.0", "### Contributions\n\n\nThanks to @yjernite for adding this dataset." ]
52f4c4a8afdacb83f99547ba92104a8ba07154dc
# Dataset Card for generated_reviews_enth

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://airesearch.in.th/
- **Repository:** https://github.com/vistec-ai/generated_reviews_enth
- **Paper:** https://arxiv.org/pdf/2007.03541.pdf
- **Leaderboard:**
- **Point of Contact:** [AIResearch](http://airesearch.in.th/)

### Dataset Summary

`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for a machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.
### Supported Tasks and Leaderboards English-to-Thai translation quality estimation (binary label) is the intended use. Other uses include machine translation and sentiment analysis. ### Languages English, Thai ## Dataset Structure ### Data Instances ``` {'correct': 0, 'review_star': 4, 'translation': {'en': "I had a hard time finding a case for my new LG Lucid 2 but finally found this one on amazon. The colors are really pretty and it works just as well as, if not better than the otterbox. Hopefully there will be more available by next Xmas season. Overall, very cute case. I love cheetah's. :)", 'th': 'ฉันมีปัญหาในการหาเคสสำหรับ LG Lucid 2 ใหม่ของฉัน แต่ในที่สุดก็พบเคสนี้ใน Amazon สีสวยมากและใช้งานได้ดีเช่นเดียวกับถ้าไม่ดีกว่านาก หวังว่าจะมีให้มากขึ้นในช่วงเทศกาลคริสต์มาสหน้า โดยรวมแล้วน่ารักมาก ๆ ฉันรักเสือชีตาห์ :)'}} {'correct': 0, 'review_star': 1, 'translation': {'en': "This is the second battery charger I bought as a Christmas present, that came from Amazon, after one purchased before for my son. His was still working. The first charger, received in July, broke apart and wouldn't charge anymore. Just found out two days ago they discontinued it without warning. It took quite some time to find the exact replacement charger. Too bad, really liked it. One of these days, will purchase an actual Nikon product, or go back to buying batteries.", 'th': 'นี่เป็นเครื่องชาร์จแบตเตอรี่ก้อนที่สองที่ฉันซื้อเป็นของขวัญคริสต์มาสซึ่งมาจากอเมซอนหลังจากที่ซื้อมาเพื่อลูกชายของฉัน เขายังทำงานอยู่ เครื่องชาร์จแรกที่ได้รับในเดือนกรกฎาคมแตกเป็นชิ้น ๆ และจะไม่ชาร์จอีกต่อไป เพิ่งค้นพบเมื่อสองวันก่อนพวกเขาหยุดมันโดยไม่มีการเตือนล่วงหน้า ใช้เวลาพอสมควรในการหาที่ชาร์จที่ถูกต้อง แย่มากชอบมาก สักวันหนึ่งจะซื้อผลิตภัณฑ์ Nikon จริงหรือกลับไปซื้อแบตเตอรี่'}} {'correct': 1, 'review_star': 1, 'translation': {'en': 'I loved the idea of having a portable computer to share pictures with family and friends on my big screen. 
It worked really well for about 3 days, then when i opened it one evening there was water inside where all the wires came out. I cleaned that up and put some tape over that, so far, no leaks. My husband just told me yesterday, however, that this thing is trash.', 'th': 'ฉันชอบไอเดียที่มีคอมพิวเตอร์พกพาเพื่อแชร์รูปภาพกับครอบครัวและเพื่อน ๆ บนหน้าจอขนาดใหญ่ของฉัน มันใช้งานได้ดีจริง ๆ ประมาณ 3 วันจากนั้นเมื่อฉันเปิดมันในเย็นวันหนึ่งมีน้ำอยู่ภายในที่ซึ่งสายไฟทั้งหมดออกมา ฉันทำความสะอาดมันแล้ววางเทปไว้ที่นั่นจนถึงตอนนี้ไม่มีรอยรั่ว สามีของฉันเพิ่งบอกฉันเมื่อวานนี้ว่าสิ่งนี้เป็นขยะ'}}
```

### Data Fields

- `translation`:
  - `en`: English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858)
  - `th`: Thai product reviews translated from `en` by Google Translate API
- `review_star`: Stars of the generated reviews, put as condition for [CTRL](https://arxiv.org/abs/1909.05858)
- `correct`: 1 if the English-to-Thai translation is accepted (`correct`) based on fluency and adequacy of the translation by human annotators else 0

### Data Splits

|                 | train  | valid | test  |
|-----------------|--------|-------|-------|
| # samples       | 141369 | 15708 | 17453 |
| # correct:0     | 99296  | 10936 | 12208 |
| # correct:1     | 42073  | 4772  | 5245  |
| # review_star:1 | 50418  | 5628  | 6225  |
| # review_star:2 | 22876  | 2596  | 2852  |
| # review_star:3 | 22825  | 2521  | 2831  |
| # review_star:4 | 22671  | 2517  | 2778  |
| # review_star:5 | 22579  | 2446  | 2767  |

## Dataset Creation

### Curation Rationale

`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for a machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. 
This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.

### Source Data

#### Initial Data Collection and Normalization

The data generation process is as follows:
- `en` is generated using conditional generation of [CTRL](https://arxiv.org/abs/1909.05858), stating a star review for each generated product review.
- `th` is translated from `en` using Google Translate API
- `correct` is annotated as accepted or rejected (1 or 0) based on fluency and adequacy of the translation by human annotators

For this specific dataset for the translation quality estimation task, we apply the following preprocessing:
- Drop duplicates on `en`,`th`,`review_star`,`correct`; duplicates might exist because the translation checking is done by annotators.
- Remove reviews that are not between 1-5 stars.
- Remove reviews whose `correct` are not 0 or 1.
- Deduplicate on `en`, which contains the source sentences.

#### Who are the source language producers?

[CTRL](https://arxiv.org/abs/1909.05858)

### Annotations

#### Annotation process

Annotators are given English and Thai product review pairs. They are asked to label the pair as an acceptable translation or not based on fluency and adequacy of the translation.

#### Who are the annotators?

Human annotators of [Hope Data Annotations](https://www.hopedata.org/) hired by [AIResearch.in.th](http://airesearch.in.th/)

### Personal and Sensitive Information

The authors do not expect any personal or sensitive information to be in the generated product reviews, but they could slip through from pretraining of [CTRL](https://arxiv.org/abs/1909.05858). 
## Considerations for Using the Data

### Social Impact of Dataset

- English-Thai translation quality estimation for machine translation
- Product review classification for Thai

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Due to annotation process constraints, the number of one-star reviews is notably higher than that of other-star reviews. This makes the dataset slightly imbalanced.

## Additional Information

### Dataset Curators

The dataset was created by [AIResearch.in.th](http://airesearch.in.th/)

### Licensing Information

CC BY-SA 4.0

### Citation Information

```
@article{lowphansirikul2020scb,
  title={scb-mt-en-th-2020: A Large English-Thai Parallel Corpus},
  author={Lowphansirikul, Lalita and Polpanumas, Charin and Rutherford, Attapol T and Nutanong, Sarana},
  journal={arXiv preprint arXiv:2007.03541},
  year={2020}
}
```

### Contributions

Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
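As a quick sanity check on the label balance described above, the counts in the Data Splits table can be tallied directly. This is just a sketch over the numbers quoted in the table, not a recomputation from the released files:

```python
# Split sizes and `correct` label counts as reported in the
# Data Splits table of this card.
splits = {
    "train": {"total": 141369, "correct_0": 99296, "correct_1": 42073},
    "valid": {"total": 15708, "correct_0": 10936, "correct_1": 4772},
    "test": {"total": 17453, "correct_0": 12208, "correct_1": 5245},
}

for name, s in splits.items():
    # The two label counts should account for every example in the split.
    assert s["correct_0"] + s["correct_1"] == s["total"]
    accepted = s["correct_1"] / s["total"]
    print(f"{name}: {accepted:.1%} accepted translations")
```

Roughly 30% of pairs are accepted in every split, so the quality-estimation label is imbalanced toward rejections.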
generated_reviews_enth
[ "task_categories:translation", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:semantic-similarity-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:th", "license:cc-by-sa-4.0", "arxiv:2007.03541", "arxiv:1909.05858", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated", "machine-generated"], "language_creators": ["machine-generated"], "language": ["en", "th"], "license": ["cc-by-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation", "text-classification"], "task_ids": ["multi-class-classification", "semantic-similarity-classification"], "pretty_name": "generated_reviews_enth", "dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "th"]}}}, {"name": "review_star", "dtype": "int32"}, {"name": "correct", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}], "config_name": "generated_reviews_enth", "splits": [{"name": "train", "num_bytes": 147673215, "num_examples": 141369}, {"name": "validation", "num_bytes": 16409966, "num_examples": 15708}, {"name": "test", "num_bytes": 18133523, "num_examples": 17453}], "download_size": 59490601, "dataset_size": 182216704}}
2024-01-18T11:04:06+00:00
[ "2007.03541", "1909.05858" ]
[ "en", "th" ]
TAGS #task_categories-translation #task_categories-text-classification #task_ids-multi-class-classification #task_ids-semantic-similarity-classification #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-English #language-Thai #license-cc-by-sa-4.0 #arxiv-2007.03541 #arxiv-1909.05858 #region-us
Dataset Card for generated\_reviews\_enth
=========================================

Table of Contents
-----------------

* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard:
* Point of Contact: AIResearch

### Dataset Summary

'generated\_reviews\_enth' is created as part of scb-mt-en-th-2020 for a machine translation task. This dataset (referred to as 'generated\_reviews\_yn' in scb-mt-en-th-2020) consists of English product reviews generated by CTRL, translated by Google Translate API and annotated as accepted or rejected ('correct') based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.

### Supported Tasks and Leaderboards

English-to-Thai translation quality estimation (binary label) is the intended use. Other uses include machine translation and sentiment analysis. 
### Languages

English, Thai

Dataset Structure
-----------------

### Data Instances

### Data Fields

* 'translation':
	+ 'en': English product reviews generated by CTRL
	+ 'th': Thai product reviews translated from 'en' by Google Translate API
* 'review\_star': Stars of the generated reviews, put as condition for CTRL
* 'correct': 1 if the English-to-Thai translation is accepted ('correct') based on fluency and adequacy of the translation by human annotators else 0

### Data Splits

Dataset Creation
----------------

### Curation Rationale

'generated\_reviews\_enth' is created as part of scb-mt-en-th-2020 for a machine translation task. This dataset (referred to as 'generated\_reviews\_yn' in scb-mt-en-th-2020) consists of English product reviews generated by CTRL, translated by Google Translate API and annotated as accepted or rejected ('correct') based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.

### Source Data

#### Initial Data Collection and Normalization

The data generation process is as follows:

* 'en' is generated using conditional generation of CTRL, stating a star review for each generated product review.
* 'th' is translated from 'en' using Google Translate API
* 'correct' is annotated as accepted or rejected (1 or 0) based on fluency and adequacy of the translation by human annotators

For this specific dataset for the translation quality estimation task, we apply the following preprocessing:

* Drop duplicates on 'en','th','review\_star','correct'; duplicates might exist because the translation checking is done by annotators.
* Remove reviews that are not between 1-5 stars.
* Remove reviews whose 'correct' are not 0 or 1.
* Deduplicate on 'en', which contains the source sentences.

#### Who are the source language producers? 
CTRL

### Annotations

#### Annotation process

Annotators are given English and Thai product review pairs. They are asked to label the pair as an acceptable translation or not based on fluency and adequacy of the translation.

#### Who are the annotators?

Human annotators of Hope Data Annotations hired by URL

### Personal and Sensitive Information

The authors do not expect any personal or sensitive information to be in the generated product reviews, but they could slip through from pretraining of CTRL.

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

* English-Thai translation quality estimation for machine translation
* Product review classification for Thai

### Discussion of Biases

### Other Known Limitations

Due to annotation process constraints, the number of one-star reviews is notably higher than that of other-star reviews. This makes the dataset slightly imbalanced.

Additional Information
----------------------

### Dataset Curators

The dataset was created by URL

### Licensing Information

CC BY-SA 4.0

### Contributions

Thanks to @cstorm125 for adding this dataset.
[ "### Dataset Summary\n\n\n'generated\\_reviews\\_enth' is created as part of scb-mt-en-th-2020 for a machine translation task. This dataset (referred to as 'generated\\_reviews\\_yn' in scb-mt-en-th-2020) consists of English product reviews generated by CTRL, translated by Google Translate API and annotated as accepted or rejected ('correct') based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.", "### Supported Tasks and Leaderboards\n\n\nEnglish-to-Thai translation quality estimation (binary label) is the intended use. Other uses include machine translation and sentiment analysis.", "### Languages\n\n\nEnglish, Thai\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'translation':\n\t+ 'en': English product reviews generated by CTRL\n\t+ 'th': Thai product reviews translated from 'en' by Google Translate API\n* 'review\\_star': Stars of the generated reviews, put as condition for CTRL\n* 'correct': 1 if the English-to-Thai translation is accepted ('correct') based on fluency and adequacy of the translation by human annotators else 0", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\n'generated\\_reviews\\_enth' is created as part of scb-mt-en-th-2020 for a machine translation task. This dataset (referred to as 'generated\\_reviews\\_yn' in scb-mt-en-th-2020) consists of English product reviews generated by CTRL, translated by Google Translate API and annotated as accepted or rejected ('correct') based on fluency and adequacy of the translation by human annotators. 
This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data generation process is as follows:\n\n\n* 'en' is generated using conditional generation of CTRL, stating a star review for each generated product review.\n* 'th' is translated from 'en' using Google Translate API\n* 'correct' is annotated as accepted or rejected (1 or 0) based on fluency and adequacy of the translation by human annotators\n\n\nFor this specific dataset for the translation quality estimation task, we apply the following preprocessing:\n\n\n* Drop duplicates on 'en','th','review\\_star','correct'; duplicates might exist because the translation checking is done by annotators.\n* Remove reviews that are not between 1-5 stars.\n* Remove reviews whose 'correct' are not 0 or 1.\n* Deduplicate on 'en' which contains the source sentences.", "#### Who are the source language producers?\n\n\nCTRL", "### Annotations", "#### Annotation process\n\n\nAnnotators are given English and Thai product review pairs. They are asked to label the pair as an acceptable translation or not based on fluency and adequacy of the translation.", "#### Who are the annotators?\n\n\nHuman annotators of Hope Data Annotations hired by URL", "### Personal and Sensitive Information\n\n\nThe authors do not expect any personal or sensitive information to be in the generated product reviews, but they could slip through from pretraining of CTRL.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n* English-Thai translation quality estimation for machine translation\n* Product review classification for Thai", "### Discussion of Biases", "### Other Known Limitations\n\n\nDue to annotation process constraints, the number of one-star reviews is notably higher than that of other-star reviews. 
This makes the dataset slightly imbalanced.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was created by URL", "### Licensing Information\n\n\nCC BY-SA 4.0", "### Contributions\n\n\nThanks to @cstorm125 for adding this dataset." ]
[ "TAGS\n#task_categories-translation #task_categories-text-classification #task_ids-multi-class-classification #task_ids-semantic-similarity-classification #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-English #language-Thai #license-cc-by-sa-4.0 #arxiv-2007.03541 #arxiv-1909.05858 #region-us \n", "### Dataset Summary\n\n\n'generated\\_reviews\\_enth' is created as part of scb-mt-en-th-2020 for a machine translation task. This dataset (referred to as 'generated\\_reviews\\_yn' in scb-mt-en-th-2020) consists of English product reviews generated by CTRL, translated by Google Translate API and annotated as accepted or rejected ('correct') based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.", "### Supported Tasks and Leaderboards\n\n\nEnglish-to-Thai translation quality estimation (binary label) is the intended use. Other uses include machine translation and sentiment analysis.", "### Languages\n\n\nEnglish, Thai\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'translation':\n\t+ 'en': English product reviews generated by CTRL\n\t+ 'th': Thai product reviews translated from 'en' by Google Translate API\n* 'review\\_star': Stars of the generated reviews, put as condition for CTRL\n* 'correct': 1 if the English-to-Thai translation is accepted ('correct') based on fluency and adequacy of the translation by human annotators else 0", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\n'generated\\_reviews\\_enth' is created as part of scb-mt-en-th-2020 for a machine translation task. 
This dataset (referred to as 'generated\\_reviews\\_yn' in scb-mt-en-th-2020) consists of English product reviews generated by CTRL, translated by Google Translate API and annotated as accepted or rejected ('correct') based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data generation process is as follows:\n\n\n* 'en' is generated using conditional generation of CTRL, stating a star review for each generated product review.\n* 'th' is translated from 'en' using Google Translate API\n* 'correct' is annotated as accepted or rejected (1 or 0) based on fluency and adequacy of the translation by human annotators\n\n\nFor this specific dataset for the translation quality estimation task, we apply the following preprocessing:\n\n\n* Drop duplicates on 'en','th','review\\_star','correct'; duplicates might exist because the translation checking is done by annotators.\n* Remove reviews that are not between 1-5 stars.\n* Remove reviews whose 'correct' are not 0 or 1.\n* Deduplicate on 'en' which contains the source sentences.", "#### Who are the source language producers?\n\n\nCTRL", "### Annotations", "#### Annotation process\n\n\nAnnotators are given English and Thai product review pairs. 
They are asked to label the pair as an acceptable translation or not based on fluency and adequacy of the translation.", "#### Who are the annotators?\n\n\nHuman annotators of Hope Data Annotations hired by URL", "### Personal and Sensitive Information\n\n\nThe authors do not expect any personal or sensitive information to be in the generated product reviews, but they could slip through from pretraining of CTRL.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n* English-Thai translation quality estimation for machine translation\n* Product review classification for Thai", "### Discussion of Biases", "### Other Known Limitations\n\n\nDue to annotation process constraints, the number of one-star reviews is notably higher than that of other-star reviews. This makes the dataset slightly imbalanced.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was created by URL", "### Licensing Information\n\n\nCC BY-SA 4.0", "### Contributions\n\n\nThanks to @cstorm125 for adding this dataset." ]
32a04d2f4369c26541fe5875af5a5d6fe1c221aa
# Dataset Card for Generics KB

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Homepage](https://allenai.org/data/genericskb)
- **Repository:** [Repository](https://drive.google.com/drive/folders/1vqfVXhJXJWuiiXbUa4rZjOgQoJvwZUoT)
- **Paper:** [Paper](https://arxiv.org/pdf/2005.00660.pdf)
- **Point of Contact:** [Sumithra Bhakthavatsalam]([email protected]) [Chloe Anastasiades]([email protected]) [Peter Clark]([email protected]) Alternatively email [email protected]

### Dataset Summary

The dataset contains a large (3.5M+ sentence) knowledge base of *generic sentences*. This is the first large resource to contain *naturally occurring* generic sentences, rich in high-quality, general, semantically complete statements. All GenericsKB sentences are annotated with their topical term, surrounding context (sentences), and a (learned) confidence. 
We also release GenericsKB-Best (1M+ sentences), containing the best-quality generics in GenericsKB augmented with selected, synthesized generics from WordNet and ConceptNet. This demonstrates that GenericsKB can be a useful resource for NLP applications, as well as providing data for linguistic studies of generics and their semantics.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in English.

## Dataset Structure

### Data Instances

GENERICSKB contains 3,433,000 sentences. GENERICSKB-BEST comprises GENERICSKB generics with a score > 0.234, augmented with short generics synthesized from three other resources for all the terms (generic categories) in GENERICSKB-BEST. GENERICSKB-BEST contains 1,020,868 generics (774,621 from GENERICSKB plus 246,247 synthesized).

SimpleWikipedia is a filtered scrape of SimpleWikipedia pages (simple.wikipedia.org). The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. Waterloo) using a webcrawler in 2001 from .edu domains. 
###### A sample SimpleWikipedia/Waterloo instance looks like this:
```
{'source_name': 'SimpleWikipedia', 'sentence': 'Sepsis happens when the bacterium enters the blood and make it form tiny clots.', 'sentences_before': [], 'sentences_after': [], 'concept_name': 'sepsis', 'quantifiers': {}, 'id': 'SimpleWikipedia--tmp-sw-rs1-with-bug-fixes-initialprocessing-inputs-articles-with-clean-sentences-jsonl-c27816b298e1e0b5326916ee4e2fd0f1603caa77-100-Bubonic-plague--Different-kinds-of-the-same-disease--Septicemic-plague-0-0-039fbe9c11adde4ff9a829376ca7e0ed-1560874903-47882-/Users/chloea/Documents/aristo/commonsense/kbs/simplewikipedia/commonsense-filtered-good-rs1.jsonl-1f33b1e84018a2b1bfdf446f9a6491568b5585da-1561086091.8220549', 'bert_score': 0.8396177887916565}
```
###### Sample instances for the GenericsKB datasets look like this:
```
{'source': 'Waterloo', 'term': 'aardvark', 'quantifier_frequency': '', 'quantifier_number': '', 'generic_sentence': 'Aardvarks are very gentle animals.', 'score': '0.36080607771873474'}

{'source': 'TupleKB', 'term': 'aardvark', 'quantifier_frequency': '', 'quantifier_number': '', 'generic_sentence': 'Aardvarks dig burrows.', 'score': '1.0'}
```

### Data Fields

The fields in GenericsKB-Best.tsv and GenericsKB.tsv are as follows:

- `SOURCE`: denotes the source of the generic
- `TERM`: denotes the category that is the topic of the generic.
- `GENERIC SENTENCE`: is the sentence itself.
- `SCORE`: the BERT-trained score, measuring the degree to which the generic represents a "useful, general truth" about the world (as judged by crowdworkers). Score ranges from 0 (worst) to 1 (best). Sentences with scores below 0.23 (corresponding to an "unsure" vote by crowdworkers) are in GenericsKB, but are not part of GenericsKB-Best due to their unreliability.
- `QUANTIFIER_FREQUENCY`: For generics with explicit quantifiers (all, most, etc.) 
the quantifier is listed - Frequency contains values such as 'usually', 'often', 'frequently' - `QUANTIFIER_NUMBER`: For generics with explicit quantifiers (all, most, etc.) with values such as 'all'|'any'|'most'|'much'|'some' etc... The SimpleWiki/Waterloo configs contain the generics from GenericsKB.tsv, expanded to also include their surrounding context (before/after sentences). The Waterloo generics are the majority of GenericsKB. This zip file is 1.4GB, expanding to 5.5GB. There is a JSON representation for every generic statement in the Generics KB. The generic statement is stored under the `sentence` field within the `knowledge` object. There is also a `bert_score` associated with each sentence, which is the BERT-based classifier's score for the 'genericness' of the statement. This score is meant to reflect how much generalized world knowledge/commonsense the statement captures vs only being contextually meaningful. Detailed description of each of the fields: - `source_name`: The name of the corpus the generic statement was picked from. - `sentence`: The generic sentence. - `sentences_before`: Provides context information surrounding the generic statement from the original corpus. Up to five sentences preceding the generic sentence in the original corpus. - `sentences_after`: Up to five sentences following the generic sentence in the original corpus. - `concept_name`: A concept that is the subject of the generic statement. - `quantifiers`: The quantifiers for the key concept of the generic statement. There can be multiple quantifiers to allow for statements such as "All bats sometimes fly", where 'all' and 'sometimes' are both quantifiers reflecting number and frequency respectively. - `id`: Unique identifier for a generic statement in the KB. - `bert_score`: Score for the generic statement from the BERT-based generics classifier. 
<br>**Additional fields that apply only to the SimpleWiki dataset** - `headings`: A breadcrumb of section/subsection headings from the top down to the location of the generic statement in the corpus. It applies to SimpleWikipedia, which has a hierarchical structure. - `categories`: The listed categories under which the source article falls. Applies to SimpleWikipedia. ### Data Splits There are no splits. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Data was crawled. SimpleWikipedia is a filtered scrape of SimpleWikipedia pages (simple.wikipedia.org). The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. Waterloo) using a webcrawler in 2001 from .edu domains. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process A BERT-based classifier was used to decide whether a sentence is useful or not; every sentence has a BERT score. #### Who are the annotators? No human annotators were involved; the scores are machine-generated. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The GenericsKB is available under the Creative Commons - Attribution 4.0 International - licence. As an informal summary, from https://creativecommons.org/licenses/by/4.0/, you are free to: Share ― copy and redistribute the material in any medium or format Adapt ― remix, transform, and build upon the material for any purpose, even commercially. under the following terms: Attribution ― You must give appropriate credit, provide a link to the license, and indicate if changes were made. 
You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. No additional restrictions ― You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. For details, see https://creativecommons.org/licenses/by/4.0/ or the included file "Creative Commons ― Attribution 4.0 International ― CC BY 4.0.pdf" in this folder. ### Citation Information ``` @InProceedings{huggingface:dataset, title = {GenericsKB: A Knowledge Base of Generic Statements}, author = {Sumithra Bhakthavatsalam and Chloe Anastasiades and Peter Clark}, year = {2020}, publisher = {Allen Institute for AI}, } ``` ### Contributions Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
generics_kb
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "knowledge-base", "arxiv:2005.00660", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "genericskb", "pretty_name": "GenericsKB", "config_names": ["generics_kb", "generics_kb_best", "generics_kb_simplewiki", "generics_kb_waterloo"], "tags": ["knowledge-base"], "dataset_info": [{"config_name": "generics_kb_best", "features": [{"name": "source", "dtype": "string"}, {"name": "term", "dtype": "string"}, {"name": "quantifier_frequency", "dtype": "string"}, {"name": "quantifier_number", "dtype": "string"}, {"name": "generic_sentence", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 99897719, "num_examples": 1020868}], "download_size": 94850525, "dataset_size": 99897719}, {"config_name": "generics_kb", "features": [{"name": "source", "dtype": "string"}, {"name": "term", "dtype": "string"}, {"name": "quantifier_frequency", "dtype": "string"}, {"name": "quantifier_number", "dtype": "string"}, {"name": "generic_sentence", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 348158966, "num_examples": 3433000}], "download_size": 343284785, "dataset_size": 348158966}, {"config_name": "generics_kb_simplewiki", "features": [{"name": "source_name", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "sentences_before", "sequence": "string"}, {"name": "sentences_after", "sequence": "string"}, {"name": "concept_name", "dtype": "string"}, {"name": "quantifiers", "sequence": "string"}, {"name": "id", "dtype": "string"}, {"name": "bert_score", "dtype": "float64"}, {"name": "headings", "sequence": "string"}, {"name": "categories", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 10039355, "num_examples": 12765}], "download_size": 
16437369, "dataset_size": 10039355}, {"config_name": "generics_kb_waterloo", "features": [{"name": "source_name", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "sentences_before", "sequence": "string"}, {"name": "sentences_after", "sequence": "string"}, {"name": "concept_name", "dtype": "string"}, {"name": "quantifiers", "sequence": "string"}, {"name": "id", "dtype": "string"}, {"name": "bert_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 4277214701, "num_examples": 3666725}], "download_size": 0, "dataset_size": 4277214701}]}
2023-06-07T11:35:34+00:00
[ "2005.00660" ]
[ "en" ]
TAGS #task_categories-other #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #knowledge-base #arxiv-2005.00660 #region-us
# Dataset Card for Generics KB ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Homepage - Repository: Repository - Paper: Paper - Point of Contact:Sumithra Bhakthavatsalam Chloe Anastasiades Peter Clark Alternatively email_at info@URL ### Dataset Summary Dataset contains a large (3.5M+ sentence) knowledge base of *generic sentences*. This is the first large resource to contain *naturally occurring* generic sentences, rich in high-quality, general, semantically complete statements. All GenericsKB sentences are annotated with their topical term, surrounding context (sentences), and a (learned) confidence. We also release GenericsKB-Best (1M+ sentences), containing the best-quality generics in GenericsKB augmented with selected, synthesized generics from WordNet and ConceptNet. This demonstrates that GenericsKB can be a useful resource for NLP applications, as well as providing data for linguistic studies of generics and their semantics. ### Supported Tasks and Leaderboards ### Languages The dataset is in English. ## Dataset Structure ### Data Instances The GENERICSKB contains 3,433,000 sentences. GENERICS-KB-BEST comprises of GENERICSKB generics with a score > 0.234, augmented with short generics synthesized from three other resources for all the terms (generic categories) in GENERICSKB- BEST. GENERICSKB-BEST contains 1,020,868 generics (774,621 from GENERICSKB plus 246,247 synthesized). SimpleWikipedia is a filtered scrape of SimpleWikipedia pages (URL). 
The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. Waterloo) using a webcrawler in 2001 from .edu domains. ###### Sample SimpleWikipedia/ Waterloo config look like this ###### Sample instance for Generics KB datasets look like this: ### Data Fields The fields in URL and URL are as follows: - 'SOURCE': denotes the source of the generic - 'TERM': denotes the category that is the topic of the generic. - 'GENERIC SENTENCE': is the sentence itself. - 'SCORE': Is the BERT-trained score, measuring the degree to which the generic represents a "useful, general truth" about the world (as judged by crowdworkers). Score ranges from 0 (worst) to 1 (best). Sentences with scores below 0.23 (corresponding to an "unsure" vote by crowdworkers) are in GenericsKB, but are not part of GenericsKB-Best due to their unreliability. - 'QUANTIFIER_FREQUENCY':For generics with explicit quantifiers (all, most, etc.) the quantifier is listed - Frequency contains values such as 'usually', 'often', 'frequently' - 'QUANTIFIER_NUMBER': For generics with explicit quantifiers (all, most, etc.) with values such as 'all'|'any'|'most'|'much'|'some' etc... The SimpleWiki/Waterloo generics from URL, but expanded to also include their surrounding context (before/after sentences). The Waterloo generics are the majority of GenericsKB. This zip file is 1.4GB expanding to 5.5GB. There is a json representation for every generic statement in the Generics KB. The generic statement is stored under the 'sentence' field within the 'knowledge' object. There is also a 'bert_score' associated with each sentence which is the BERT-based classifier's score for the 'genericness' of the statement. This score is meant to reflect how much generalized world knowledge/commonsense the statement captures vs only being contextually meaningful. Detailed description of each of the fields: - 'source_name': The name of the corpus the generic statement was picked from. 
- 'sentence': The generic sentence. - 'sentences_before': Provides context information surrounding the generic statement from the original corpus.Up to five sentences preceding the generic sentence in the original corpus. - 'sentences_after': Up to five sentences following the generic sentence in the original corpus. - 'concept_name': A concept that is the subject of the generic statement. - 'quantifiers': The quantifiers for the key concept of the generic statement. There can be multiple quantifiers to allow for statements such as "All bats sometimes fly", where 'all' and 'sometimes' are both quantifiers reflecting number and frequency respectively. - 'id': Unique identifier for a generic statement in the kb. - 'bert_score': Score for the generic statement from the BERT-based generics classifier. <br>Additional fields that apply only to SimpleWiki dataset - 'headings': A breadcrumb of section/subsection headings from the top down to the location of the generic statement in the corpus. It applies to SimpleWikipedia which has a hierarchical structure. - 'categories':The listed categories under which the source article falls. Applies to SimpleWikipedia. ### Data Splits There are no splits. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Data was crawled. SimpleWikipedia is a filtered scrape of SimpleWikipedia pages (URL). The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. Waterloo) using a webcrawler in 2001 from .edu domains. #### Who are the source language producers? ### Annotations #### Annotation process Bert was used to decide whether the sentence is useful or not. Every sentence has a bert score. #### Who are the annotators? No annotations were made. 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information The GenericsKB is available under the Creative Commons - Attribution 4.0 International - licence. As an informal summary, from URL you are free to: Share ― copy and redistribute the material in any medium or format Adapt ― remix, transform, and build upon the material for any purpose, even commercially. under the following terms: Attribution ― You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. No additional restrictions ― You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. For details, see URL or the or the included file "Creative Commons ― Attribution 4.0 International ― CC BY 4.0.pdf" in this folder. ### Contributions Thanks to @bpatidar for adding this dataset.
[ "# Dataset Card for Generics KB", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Homepage\n- Repository: Repository\n- Paper: Paper\n- Point of Contact:Sumithra Bhakthavatsalam\n Chloe Anastasiades\n Peter Clark\n Alternatively email_at info@URL", "### Dataset Summary\n\nDataset contains a large (3.5M+ sentence) knowledge base of *generic sentences*. This is the first large resource to contain *naturally occurring* generic sentences, rich in high-quality, general, semantically complete statements. All GenericsKB sentences are annotated with their topical term, surrounding context (sentences), and a (learned) confidence. We also release GenericsKB-Best (1M+ sentences), containing the best-quality generics in GenericsKB augmented with selected, synthesized generics from WordNet and ConceptNet. This demonstrates that GenericsKB can be a useful resource for NLP applications, as well as providing data for linguistic studies of generics and their semantics.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is in English.", "## Dataset Structure", "### Data Instances\n\nThe GENERICSKB contains 3,433,000 sentences. GENERICS-KB-BEST comprises of GENERICSKB generics with a score > 0.234, augmented with short generics synthesized from three other resources for all the terms (generic categories) in GENERICSKB- BEST. 
GENERICSKB-BEST contains 1,020,868 generics (774,621 from GENERICSKB plus 246,247 synthesized).\nSimpleWikipedia is a filtered scrape of SimpleWikipedia pages (URL). The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. Waterloo) using a webcrawler in 2001 from .edu domains.", "###### Sample SimpleWikipedia/ Waterloo config look like this", "###### Sample instance for Generics KB datasets look like this:", "### Data Fields\n\nThe fields in URL and URL are as follows:\n- 'SOURCE': denotes the source of the generic\n- 'TERM': denotes the category that is the topic of the generic.\n- 'GENERIC SENTENCE': is the sentence itself.\n- 'SCORE': Is the BERT-trained score, measuring the degree to which the generic represents a \"useful, general truth\" about the world (as judged by crowdworkers). Score ranges from 0 (worst) to 1 (best). Sentences with scores below 0.23 (corresponding to an \"unsure\" vote by crowdworkers) are in GenericsKB, but are not part of GenericsKB-Best due to their unreliability.\n- 'QUANTIFIER_FREQUENCY':For generics with explicit quantifiers (all, most, etc.) the quantifier is listed - Frequency contains values such as 'usually', 'often', 'frequently'\n- 'QUANTIFIER_NUMBER': For generics with explicit quantifiers (all, most, etc.) with values such as 'all'|'any'|'most'|'much'|'some' etc...\n\nThe SimpleWiki/Waterloo generics from URL, but expanded to also include their surrounding context (before/after sentences). The Waterloo generics are the majority of GenericsKB. This zip file is 1.4GB expanding to 5.5GB.\nThere is a json representation for every generic statement in the Generics KB. The generic statement is stored under the 'sentence' field within the 'knowledge' object. There is also a 'bert_score' associated with each sentence which is the BERT-based classifier's score for the 'genericness' of the statement. 
This score is meant to reflect how much generalized world knowledge/commonsense the statement captures vs only being contextually meaningful.\nDetailed description of each of the fields:\n\n- 'source_name': The name of the corpus the generic statement was picked from.\n- 'sentence': The generic sentence.\n- 'sentences_before': Provides context information surrounding the generic statement from the original corpus.Up to five sentences preceding the generic sentence in the original corpus.\n- 'sentences_after': Up to five sentences following the generic sentence in the original corpus.\n- 'concept_name': A concept that is the subject of the generic statement.\n- 'quantifiers': The quantifiers for the key concept of the generic statement. There can be multiple quantifiers to allow for statements such as \"All bats sometimes fly\", where 'all' and 'sometimes' are both quantifiers reflecting number and frequency respectively. \n- 'id': Unique identifier for a generic statement in the kb.\n- 'bert_score': Score for the generic statement from the BERT-based generics classifier.\n<br>Additional fields that apply only to SimpleWiki dataset\n - 'headings': A breadcrumb of section/subsection headings from the top down to the location of the generic statement in the corpus. It applies to SimpleWikipedia which has a hierarchical structure.\n - 'categories':The listed categories under which the source article falls. Applies to SimpleWikipedia.", "### Data Splits\n\nThere are no splits.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was crawled. SimpleWikipedia is a filtered scrape of SimpleWikipedia pages (URL). The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. 
Waterloo) using a webcrawler in 2001 from .edu domains.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nBert was used to decide whether the sentence is useful or not. Every sentence has a bert score.", "#### Who are the annotators?\n\nNo annotations were made.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe GenericsKB is available under the Creative Commons - Attribution 4.0 International - licence.\n\nAs an informal summary, from URL you are free to:\n\n\tShare ― copy and redistribute the material in any medium or format\n\tAdapt ― remix, transform, and build upon the material for any purpose, even commercially.\n\nunder the following terms:\n\n\tAttribution ― You must give appropriate credit, provide a link to the license, and\n\t\tindicate if changes were made. You may do so in any reasonable manner,\n\t\tbut not in any way that suggests the licensor endorses you or your use.\n\tNo additional restrictions ― You may not apply legal terms or technological measures\n\t\tthat legally restrict others from doing anything the license permits.\n\nFor details, see URL or the or the included\nfile \"Creative Commons ― Attribution 4.0 International ― CC BY 4.0.pdf\" in this folder.", "### Contributions\n\nThanks to @bpatidar for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #knowledge-base #arxiv-2005.00660 #region-us \n", "# Dataset Card for Generics KB", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Homepage\n- Repository: Repository\n- Paper: Paper\n- Point of Contact:Sumithra Bhakthavatsalam\n Chloe Anastasiades\n Peter Clark\n Alternatively email_at info@URL", "### Dataset Summary\n\nDataset contains a large (3.5M+ sentence) knowledge base of *generic sentences*. This is the first large resource to contain *naturally occurring* generic sentences, rich in high-quality, general, semantically complete statements. All GenericsKB sentences are annotated with their topical term, surrounding context (sentences), and a (learned) confidence. We also release GenericsKB-Best (1M+ sentences), containing the best-quality generics in GenericsKB augmented with selected, synthesized generics from WordNet and ConceptNet. This demonstrates that GenericsKB can be a useful resource for NLP applications, as well as providing data for linguistic studies of generics and their semantics.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is in English.", "## Dataset Structure", "### Data Instances\n\nThe GENERICSKB contains 3,433,000 sentences. 
GENERICS-KB-BEST comprises of GENERICSKB generics with a score > 0.234, augmented with short generics synthesized from three other resources for all the terms (generic categories) in GENERICSKB- BEST. GENERICSKB-BEST contains 1,020,868 generics (774,621 from GENERICSKB plus 246,247 synthesized).\nSimpleWikipedia is a filtered scrape of SimpleWikipedia pages (URL). The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. Waterloo) using a webcrawler in 2001 from .edu domains.", "###### Sample SimpleWikipedia/ Waterloo config look like this", "###### Sample instance for Generics KB datasets look like this:", "### Data Fields\n\nThe fields in URL and URL are as follows:\n- 'SOURCE': denotes the source of the generic\n- 'TERM': denotes the category that is the topic of the generic.\n- 'GENERIC SENTENCE': is the sentence itself.\n- 'SCORE': Is the BERT-trained score, measuring the degree to which the generic represents a \"useful, general truth\" about the world (as judged by crowdworkers). Score ranges from 0 (worst) to 1 (best). Sentences with scores below 0.23 (corresponding to an \"unsure\" vote by crowdworkers) are in GenericsKB, but are not part of GenericsKB-Best due to their unreliability.\n- 'QUANTIFIER_FREQUENCY':For generics with explicit quantifiers (all, most, etc.) the quantifier is listed - Frequency contains values such as 'usually', 'often', 'frequently'\n- 'QUANTIFIER_NUMBER': For generics with explicit quantifiers (all, most, etc.) with values such as 'all'|'any'|'most'|'much'|'some' etc...\n\nThe SimpleWiki/Waterloo generics from URL, but expanded to also include their surrounding context (before/after sentences). The Waterloo generics are the majority of GenericsKB. This zip file is 1.4GB expanding to 5.5GB.\nThere is a json representation for every generic statement in the Generics KB. The generic statement is stored under the 'sentence' field within the 'knowledge' object. 
There is also a 'bert_score' associated with each sentence which is the BERT-based classifier's score for the 'genericness' of the statement. This score is meant to reflect how much generalized world knowledge/commonsense the statement captures vs only being contextually meaningful.\nDetailed description of each of the fields:\n\n- 'source_name': The name of the corpus the generic statement was picked from.\n- 'sentence': The generic sentence.\n- 'sentences_before': Provides context information surrounding the generic statement from the original corpus.Up to five sentences preceding the generic sentence in the original corpus.\n- 'sentences_after': Up to five sentences following the generic sentence in the original corpus.\n- 'concept_name': A concept that is the subject of the generic statement.\n- 'quantifiers': The quantifiers for the key concept of the generic statement. There can be multiple quantifiers to allow for statements such as \"All bats sometimes fly\", where 'all' and 'sometimes' are both quantifiers reflecting number and frequency respectively. \n- 'id': Unique identifier for a generic statement in the kb.\n- 'bert_score': Score for the generic statement from the BERT-based generics classifier.\n<br>Additional fields that apply only to SimpleWiki dataset\n - 'headings': A breadcrumb of section/subsection headings from the top down to the location of the generic statement in the corpus. It applies to SimpleWikipedia which has a hierarchical structure.\n - 'categories':The listed categories under which the source article falls. Applies to SimpleWikipedia.", "### Data Splits\n\nThere are no splits.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was crawled. SimpleWikipedia is a filtered scrape of SimpleWikipedia pages (URL). The Waterloo corpus is 280GB of English plain text, gathered by Charles Clarke (Univ. 
Waterloo) using a webcrawler in 2001 from .edu domains.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nBert was used to decide whether the sentence is useful or not. Every sentence has a bert score.", "#### Who are the annotators?\n\nNo annotations were made.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe GenericsKB is available under the Creative Commons - Attribution 4.0 International - licence.\n\nAs an informal summary, from URL you are free to:\n\n\tShare ― copy and redistribute the material in any medium or format\n\tAdapt ― remix, transform, and build upon the material for any purpose, even commercially.\n\nunder the following terms:\n\n\tAttribution ― You must give appropriate credit, provide a link to the license, and\n\t\tindicate if changes were made. You may do so in any reasonable manner,\n\t\tbut not in any way that suggests the licensor endorses you or your use.\n\tNo additional restrictions ― You may not apply legal terms or technological measures\n\t\tthat legally restrict others from doing anything the license permits.\n\nFor details, see URL or the or the included\nfile \"Creative Commons ― Attribution 4.0 International ― CC BY 4.0.pdf\" in this folder.", "### Contributions\n\nThanks to @bpatidar for adding this dataset." ]
f76267a2517aab636f278984b200ec3104ea91b3
# Dataset Card for Legal Documents Entity Recognition ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/elenanereiss/Legal-Entity-Recognition - **Repository:** None - **Paper:** https://link.springer.com/chapter/10.1007/978-3-030-33220-4_20 - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** Georg Rehm ([email protected]) ### Dataset Summary <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Deprecated:</b> Dataset "german_legal_entity_recognition" is deprecated and will be deleted. 
Use <a href="https://huggingface.co/datasets/elenanereiss/german-ler">"elenanereiss/german-ler"</a> instead.</p> </div> ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
german_legal_entity_recognition
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:de", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["de"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "legal-documents-entity-recognition", "pretty_name": "Legal Documents Entity Recognition", "dataset_info": [{"config_name": "bag", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", "11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": "I-VT", "38": "O"}}}}], "splits": [{"name": "train"}], "download_size": 4392913, "dataset_size": 0}, {"config_name": "bfh", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", "11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": "I-VT", "38": "O"}}}}], "splits": [{"name": 
"train"}], "download_size": 4392913, "dataset_size": 0}, {"config_name": "bgh", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", "11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": "I-VT", "38": "O"}}}}], "splits": [{"name": "train"}], "download_size": 4392913, "dataset_size": 0}, {"config_name": "bpatg", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", "11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": "I-VT", "38": "O"}}}}], "splits": [{"name": "train"}], "download_size": 4392913, "dataset_size": 0}, {"config_name": "bsg", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", 
"11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": "I-VT", "38": "O"}}}}], "splits": [{"name": "train"}], "download_size": 4392913, "dataset_size": 0}, {"config_name": "bverfg", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", "11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": "I-VT", "38": "O"}}}}], "splits": [{"name": "train"}], "download_size": 4392913, "dataset_size": 0}, {"config_name": "bverwg", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", "11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": 
"I-VT", "38": "O"}}}}], "splits": [{"name": "train"}], "download_size": 4392913, "dataset_size": 0}, {"config_name": "all", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-AN", "1": "B-EUN", "2": "B-GRT", "3": "B-GS", "4": "B-INN", "5": "B-LD", "6": "B-LDS", "7": "B-LIT", "8": "B-MRK", "9": "B-ORG", "10": "B-PER", "11": "B-RR", "12": "B-RS", "13": "B-ST", "14": "B-STR", "15": "B-UN", "16": "B-VO", "17": "B-VS", "18": "B-VT", "19": "I-AN", "20": "I-EUN", "21": "I-GRT", "22": "I-GS", "23": "I-INN", "24": "I-LD", "25": "I-LDS", "26": "I-LIT", "27": "I-MRK", "28": "I-ORG", "29": "I-PER", "30": "I-RR", "31": "I-RS", "32": "I-ST", "33": "I-STR", "34": "I-UN", "35": "I-VO", "36": "I-VS", "37": "I-VT", "38": "O"}}}}], "splits": [{"name": "train"}], "download_size": 4392913, "dataset_size": 0}]}
2024-01-18T11:04:08+00:00
[]
[ "de" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-German #license-cc-by-4.0 #region-us
# Dataset Card for Legal Documents Entity Recognition ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: None - Paper: URL - Leaderboard: [If the dataset supports an active leaderboard, add link here]() - Point of Contact: Georg Rehm (URL@URL) ### Dataset Summary <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Deprecated:</b> Dataset "german_legal_entity_recognition" is deprecated and will be deleted. Use <a href="URL instead.</p> </div> ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for Legal Documents Entity Recognition", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: Georg Rehm (URL@URL)", "### Dataset Summary\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n <p><b>Deprecated:</b> Dataset \"german_legal_entity_recognition\" is deprecated and will be deleted. Use <a href=\"URL instead.</p>\n</div>", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-German #license-cc-by-4.0 #region-us \n", "# Dataset Card for Legal Documents Entity Recognition", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: Georg Rehm (URL@URL)", "### Dataset Summary\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n <p><b>Deprecated:</b> Dataset \"german_legal_entity_recognition\" is deprecated and will be deleted. 
Use <a href=\"URL instead.</p>\n</div>", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
b01a9ee786e96af2cda9de6ea25b5333abc1ed2d
# Dataset Card for GermaNER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/tudarmstadt-lt/GermaNER - **Paper:** https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf - **Point of Contact:** [Darina Benikova](mailto:[email protected]) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages German ## Dataset Structure ### Data Instances An example instance looks as follows: ``` { 'id': '3', 'ner_tags': [1, 5, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8], 'tokens': ['Bayern', 'München', 'ist', 'wieder', 'alleiniger', 'Top-', 'Favorit', 'auf', 'den', 'Gewinn', 'der', 'deutschen', 'Fußball-Meisterschaft', '.'] } ``` ### Data Fields Each instance in the dataset has: - `id`: an id as a string - `tokens`: sequence of tokens - `ner_tags`: NER tags for each token (encoded as IOB) NER tags can be: 'B-LOC' (0), 'B-ORG' (1), 'B-OTH' (2), 'B-PER' (3), 'I-LOC' (4), 'I-ORG' (5), 'I-OTH' 
(6), 'I-PER' (7), 'O' (8) ### Data Splits Dataset provides only train part (26200 data instances). ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information License of GermaNER: ``` GermaNER is licensed under ASL 2.0 and other lenient licenses, allowing its use for academic and commercial purposes without restrictions. The licenses of its components are mixed licensed and are individually listed in Data/Licenses. Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. 
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: You must give any other recipients of the Work or Derivative Works a copy of this License; and You must cause any modified files to carry prominent notices stating that You changed the files; and You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS ``` ### Citation Information ```bibtex @inproceedings{Benikova2015GermaNERFO, title={GermaNER: Free Open German Named Entity Recognition Tool}, author={Darina Benikova and Seid Muhie Yimam and P. Santhanam and Chris Biemann}, booktitle={GSCL}, year={2015} } ``` ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
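The Data Instances and Data Fields sections of the GermaNER card above can be tied together in a short sketch: decoding the card's own example row from integer `ner_tags` to IOB labels. This assumes nothing beyond the label order stated in the card (`B-LOC` = 0 through `O` = 8) and does not download the dataset.

```python
# IOB label list exactly as documented in the Data Fields section above.
NER_LABELS = ["B-LOC", "B-ORG", "B-OTH", "B-PER",
              "I-LOC", "I-ORG", "I-OTH", "I-PER", "O"]

# The example instance from the Data Instances section.
example = {
    "id": "3",
    "ner_tags": [1, 5, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
    "tokens": ["Bayern", "München", "ist", "wieder", "alleiniger", "Top-",
               "Favorit", "auf", "den", "Gewinn", "der", "deutschen",
               "Fußball-Meisterschaft", "."],
}

# Pair each token with its decoded tag.
decoded = [(tok, NER_LABELS[tag])
           for tok, tag in zip(example["tokens"], example["ner_tags"])]

# "Bayern München" is annotated as one ORG entity; every other token is O.
assert decoded[0] == ("Bayern", "B-ORG")
assert decoded[1] == ("München", "I-ORG")
```

The same decoding is what `datasets`' `ClassLabel.int2str` performs once the dataset is loaded; the manual list here only mirrors the card's documentation.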
germaner
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["de"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "GermaNER", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-OTH", "3": "B-PER", "4": "I-LOC", "5": "I-ORG", "6": "I-OTH", "7": "I-PER", "8": "O"}}}}], "splits": [{"name": "train", "num_bytes": 9059606, "num_examples": 26200}], "download_size": 4363657, "dataset_size": 9059606}}
2024-01-18T11:04:09+00:00
[]
[ "de" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-German #license-apache-2.0 #region-us
# Dataset Card for GermaNER ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Paper: URL - Point of Contact: Darina Benikova ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages German ## Dataset Structure ### Data Instances An example instance looks as follows: ### Data Fields Each instance in the dataset has: - 'id': an id as a string - 'tokens': sequence of tokens - 'ner_tags': NER tags for each token (encoded as IOB) NER tags can be: 'B-LOC' (0), 'B-ORG' (1), 'B-OTH' (2), 'B-PER' (3), 'I-LOC' (4), 'I-ORG' (5), 'I-OTH' (6), 'I-PER' (7), 'O' (8) ### Data Splits Dataset provides only train part (26200 data instances). ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information License of GermaNER: ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for GermaNER", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: Darina Benikova", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages\n\nGerman", "## Dataset Structure", "### Data Instances\n\nAn example instance looks as follows:", "### Data Fields\n\nEach instance in the dataset has:\n- 'id': an id as a string\n- 'tokens': sequence of tokens\n- 'ner_tags': NER tags for each token (encoded as IOB)\n\nNER tags can be: 'B-LOC' (0), 'B-ORG' (1), 'B-OTH' (2), 'B-PER' (3), 'I-LOC' (4), 'I-ORG' (5), 'I-OTH' (6), 'I-PER' (7), 'O' (8)", "### Data Splits\n\nDataset provides only train part (26200 data instances).", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nLicense of GermaNER:", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-German #license-apache-2.0 #region-us \n", "# Dataset Card for GermaNER", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: Darina Benikova", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages\n\nGerman", "## Dataset Structure", "### Data Instances\n\nAn example instance looks as follows:", "### Data Fields\n\nEach instance in the dataset has:\n- 'id': an id as a string\n- 'tokens': sequence of tokens\n- 'ner_tags': NER tags for each token (encoded as IOB)\n\nNER tags can be: 'B-LOC' (0), 'B-ORG' (1), 'B-OTH' (2), 'B-PER' (3), 'I-LOC' (4), 'I-ORG' (5), 'I-OTH' (6), 'I-PER' (7), 'O' (8)", "### Data Splits\n\nDataset provides only train part (26200 data instances).", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing 
Information\n\nLicense of GermaNER:", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
10b95435538d4ec829214e8beff4cb410b8118d7
# Dataset Card for "germeval_14" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://sites.google.com/site/germeval2014ner/](https://sites.google.com/site/germeval2014ner/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf](https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf) - **Point of Contact:** [Darina Benikova](mailto:[email protected]) - **Size of downloaded dataset files:** 10.29 MB - **Size of the generated dataset:** 18.03 MB - **Total amount of disk used:** 28.31 MB ### Dataset Summary The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation with the following properties: - The data was sampled from German Wikipedia and News Corpora as a collection of citations. 
- The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]]. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages German ## Dataset Structure ### Data Instances #### germeval_14 - **Size of downloaded dataset files:** 10.29 MB - **Size of the generated dataset:** 18.03 MB - **Total amount of disk used:** 28.31 MB An example of 'train' looks as follows. This example was too long and was cropped: ```json { "id": "11", "ner_tags": [13, 14, 14, 14, 14, 0, 0, 0, 0, 0, 0, 0, 19, 20, 13, 0, 1, 0, 0, 0, 0, 0, 19, 20, 20, 0, 0, 0, 0, 3, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "nested_ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "source": "http://de.wikipedia.org/wiki/Liste_von_Filmen_mit_homosexuellem_Inhalt [2010-01-11] ", "tokens": "[\"Scenes\", \"of\", \"a\", \"Sexual\", \"Nature\", \"(\", \"GB\", \"2006\", \")\", \"-\", \"Regie\", \":\", \"Ed\", \"Blum\", \"Shortbus\", \"(\", \"USA\", \"2006..." } ``` ### Data Fields The data fields are the same among all splits. #### germeval_14 - `id`: a `string` feature. - `source`: a `string` feature. - `tokens`: a `list` of `string` features. - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-LOCderiv` (3), `I-LOCderiv` (4). - `nested_ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-LOCderiv` (3), `I-LOCderiv` (4). 
### Data Splits | name |train|validation|test| |-----------|----:|---------:|---:| |germeval_14|24000| 2200|5100| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation 
Information ``` @inproceedings{benikova-etal-2014-nosta, title = {NoSta-D Named Entity Annotation for German: Guidelines and Dataset}, author = {Benikova, Darina and Biemann, Chris and Reznicek, Marc}, booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)}, month = {may}, year = {2014}, address = {Reykjavik, Iceland}, publisher = {European Language Resources Association (ELRA)}, url = {http://www.lrec-conf.org/proceedings/lrec2014/pdf/276_Paper.pdf}, pages = {2524--2531}, } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
germeval_14
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:de", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["de"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "nosta-d-named-entity-annotation-for-german", "pretty_name": "GermEval14", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-LOC", "2": "I-LOC", "3": "B-LOCderiv", "4": "I-LOCderiv", "5": "B-LOCpart", "6": "I-LOCpart", "7": "B-ORG", "8": "I-ORG", "9": "B-ORGderiv", "10": "I-ORGderiv", "11": "B-ORGpart", "12": "I-ORGpart", "13": "B-OTH", "14": "I-OTH", "15": "B-OTHderiv", "16": "I-OTHderiv", "17": "B-OTHpart", "18": "I-OTHpart", "19": "B-PER", "20": "I-PER", "21": "B-PERderiv", "22": "I-PERderiv", "23": "B-PERpart", "24": "I-PERpart"}}}}, {"name": "nested_ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-LOC", "2": "I-LOC", "3": "B-LOCderiv", "4": "I-LOCderiv", "5": "B-LOCpart", "6": "I-LOCpart", "7": "B-ORG", "8": "I-ORG", "9": "B-ORGderiv", "10": "I-ORGderiv", "11": "B-ORGpart", "12": "I-ORGpart", "13": "B-OTH", "14": "I-OTH", "15": "B-OTHderiv", "16": "I-OTHderiv", "17": "B-OTHpart", "18": "I-OTHpart", "19": "B-PER", "20": "I-PER", "21": "B-PERderiv", "22": "I-PERderiv", "23": "B-PERpart", "24": "I-PERpart"}}}}], "config_name": "germeval_14", "splits": [{"name": "train", "num_bytes": 13816714, "num_examples": 24000}, {"name": "validation", "num_bytes": 1266974, "num_examples": 2200}, {"name": "test", "num_bytes": 2943201, "num_examples": 5100}], "download_size": 10288972, "dataset_size": 18026889}}
2024-01-18T11:04:11+00:00
[]
[ "de" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-German #license-cc-by-4.0 #region-us
Dataset Card for "germeval\_14" =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Point of Contact: Darina Benikova * Size of downloaded dataset files: 10.29 MB * Size of the generated dataset: 18.03 MB * Total amount of disk used: 28.31 MB ### Dataset Summary The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation with the following properties: - The data was sampled from German Wikipedia and News Corpora as a collection of citations. - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]]. ### Supported Tasks and Leaderboards ### Languages German Dataset Structure ----------------- ### Data Instances #### germeval\_14 * Size of downloaded dataset files: 10.29 MB * Size of the generated dataset: 18.03 MB * Total amount of disk used: 28.31 MB An example of 'train' looks as follows. This example was too long and was cropped: ### Data Fields The data fields are the same among all splits. #### germeval\_14 * 'id': a 'string' feature. * 'source': a 'string' feature. * 'tokens': a 'list' of 'string' features. 
* 'ner\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-LOC' (1), 'I-LOC' (2), 'B-LOCderiv' (3), 'I-LOCderiv' (4). * 'nested\_ner\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-LOC' (1), 'I-LOC' (2), 'B-LOCderiv' (3), 'I-LOCderiv' (4). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information CC BY-SA 4.0 license ### Contributions Thanks to @thomwolf, @jplu, @lewtun, @lhoestq, @stefan-it, @mariamabarham for adding this dataset.
[ "### Dataset Summary\n\n\nThe GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation with the following properties: - The data was sampled from German Wikipedia and News Corpora as a collection of citations. - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].", "### Supported Tasks and Leaderboards", "### Languages\n\n\nGerman\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### germeval\\_14\n\n\n* Size of downloaded dataset files: 10.29 MB\n* Size of the generated dataset: 18.03 MB\n* Total amount of disk used: 28.31 MB\n\n\nAn example of 'train' looks as follows. This example was too long and was cropped:", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### germeval\\_14\n\n\n* 'id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'ner\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-LOC' (1), 'I-LOC' (2), 'B-LOCderiv' (3), 'I-LOCderiv' (4).\n* 'nested\\_ner\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-LOC' (1), 'I-LOC' (2), 'B-LOCderiv' (3), 'I-LOCderiv' (4).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### 
Dataset Curators", "### Licensing Information\n\n\nCC BY-SA 4.0 license", "### Contributions\n\n\nThanks to @thomwolf, @jplu, @lewtun, @lhoestq, @stefan-it, @mariamabarham for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-German #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nThe GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation with the following properties: - The data was sampled from German Wikipedia and News Corpora as a collection of citations. - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].", "### Supported Tasks and Leaderboards", "### Languages\n\n\nGerman\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### germeval\\_14\n\n\n* Size of downloaded dataset files: 10.29 MB\n* Size of the generated dataset: 18.03 MB\n* Total amount of disk used: 28.31 MB\n\n\nAn example of 'train' looks as follows. 
This example was too long and was cropped:", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### germeval\\_14\n\n\n* 'id': a 'string' feature.\n* 'source': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'ner\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-LOC' (1), 'I-LOC' (2), 'B-LOCderiv' (3), 'I-LOCderiv' (4).\n* 'nested\\_ner\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-LOC' (1), 'I-LOC' (2), 'B-LOCderiv' (3), 'I-LOCderiv' (4).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCC BY-SA 4.0 license", "### Contributions\n\n\nThanks to @thomwolf, @jplu, @lewtun, @lhoestq, @stefan-it, @mariamabarham for adding this dataset." ]
6aeb7d114f051a1fb34cfc22476b997f063c5ed2
# Dataset Card for GigaFren ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/giga-fren.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
giga_fren
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "language:fr", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "fr"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "GigaFren", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}], "config_name": "en-fr", "splits": [{"name": "train", "num_bytes": 8690296821, "num_examples": 22519904}], "download_size": 2701536198, "dataset_size": 8690296821}}
2024-01-18T11:04:16+00:00
[]
[ "en", "fr" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-English #language-French #license-unknown #region-us
# Dataset Card for GigaFren ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: None - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for GigaFren", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-English #language-French #license-unknown #region-us \n", "# Dataset Card for GigaFren", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
e45e01b2da13842bb3df1b12dc046910147b3d82
# Dataset Card for Gigaword ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Gigaword repository](https://github.com/harvardnlp/sent-summary) - **Leaderboard:** [Gigaword leaderboard](https://paperswithcode.com/sota/text-summarization-on-gigaword) - **Paper:** [A Neural Attention Model for Abstractive Sentence Summarization](https://arxiv.org/abs/1509.00685) - **Point of Contact:** [Alexander Rush](mailto:[email protected]) - **Size of downloaded dataset files:** 578.41 MB - **Size of the generated dataset:** 962.96 MB - **Total amount of disk used:** 1.54 GB ### Dataset Summary Headline-generation on a corpus of article pairs from Gigaword consisting of around 4 million articles. Use the 'org_data' provided by https://github.com/microsoft/unilm/ which is identical to https://github.com/harvardnlp/sent-summary but with better format. 
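As a rough illustration of the normalization described under Source Data below (lower-casing, masking digit characters with `#`, and mapping rare word types to `UNK`) — not the authors' exact pipeline; the function name and toy vocabulary here are invented for the sketch:

```python
import re

def normalize(tokens, vocab):
    # Lower-case, replace every digit character with '#', and map word
    # types outside the training vocabulary (those seen fewer than 5
    # times in the corpus) to 'UNK'.
    out = []
    for tok in tokens:
        tok = re.sub(r"\d", "#", tok.lower())
        out.append(tok if tok in vocab else "UNK")
    return out

# Toy vocabulary for illustration; masked numbers like '#.##' are
# themselves vocabulary items in the released data.
vocab = {"deficit", "shrunk", "by", "#.##", "billion"}
print(normalize(["Deficit", "shrunk", "by", "2.35", "billion", "dollars"], vocab))
# ['deficit', 'shrunk', 'by', '#.##', 'billion', 'UNK']
```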
### Supported Tasks and Leaderboards - `summarization`: This dataset can be used for Summarization, where given a document, the goal is to predict its summary. The model performance is evaluated using the [ROUGE](https://huggingface.co/metrics/rouge) metric. The leaderboard for this task is available [here](https://paperswithcode.com/sota/text-summarization-on-gigaword). ### Languages English. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` { 'document': "australia 's current account deficit shrunk by a record #.## billion dollars -lrb- #.## billion us -rrb- in the june quarter due to soaring commodity prices , figures released monday showed .", 'summary': 'australian current account deficit narrows sharply' } ``` ### Data Fields The data fields are the same among all splits. - `document`: a `string` feature. - `summary`: a `string` feature. ### Data Splits | name | train |validation|test| |-------|------:|---------:|---:| |default|3803957| 189651|1951| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization From the paper: > For our training set, we pair the headline of each article with its first sentence to create an input-summary pair. While the model could in theory be trained on any pair, Gigaword contains many spurious headline-article pairs. We therefore prune training based on the following heuristic filters: (1) Are there no non-stop-words in common? (2) Does the title contain a byline or other extraneous editing marks? (3) Does the title have a question mark or colon? After applying these filters, the training set consists of roughly J = 4 million title-article pairs. We apply a minimal preprocessing step using PTB tokenization, lower-casing, replacing all digit characters with #, and replacing of word types seen less than 5 times with UNK. 
We also remove all articles from the time-period of the DUC evaluation. The complete input training vocabulary consists of 119 million word tokens and 110K unique word types with an average sentence size of 31.3 words. The headline vocabulary consists of 31 million tokens and 69K word types with the average title of length 8.3 words (note that this is significantly shorter than the DUC summaries). On average there are 4.6 overlapping word types between the headline and the input; although only 2.6 in the first 75-characters of the input. #### Who are the source language producers? From the paper: > For training data for both tasks, we utilize the annotated Gigaword data set (Graff et al., 2003; Napoles et al., 2012), which consists of standard Gigaword, preprocessed with Stanford CoreNLP tools (Manning et al., 2014). ### Annotations #### Annotation process Annotations are inherited from the annotated Gigaword data set. Additional information from the paper: > Our model only uses annotations for tokenization and sentence separation, although several of the baselines use parsing and tagging as well. #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ```bibtex @article{graff2003english, title={English gigaword}, author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki}, journal={Linguistic Data Consortium, Philadelphia}, volume={4}, number={1}, pages={34}, year={2003} } @article{Rush_2015, title={A Neural Attention Model for Abstractive Sentence Summarization}, url={http://dx.doi.org/10.18653/v1/D15-1044}, DOI={10.18653/v1/d15-1044}, journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing}, publisher={Association for Computational Linguistics}, author={Rush, Alexander M. 
and Chopra, Sumit and Weston, Jason}, year={2015} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
gigaword
[ "task_categories:summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|gigaword_2003", "language:en", "license:mit", "headline-generation", "arxiv:1509.00685", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|gigaword_2003"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "Gigaword", "tags": ["headline-generation"], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 915246340, "num_examples": 3803957}, {"name": "validation", "num_bytes": 45766944, "num_examples": 189651}, {"name": "test", "num_bytes": 450774, "num_examples": 1951}], "download_size": 578402958, "dataset_size": 961464058}, "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]}
2024-01-29T10:43:00+00:00
[ "1509.00685" ]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|gigaword_2003 #language-English #license-mit #headline-generation #arxiv-1509.00685 #region-us
Dataset Card for Gigaword ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: Gigaword repository * Leaderboard: Gigaword leaderboard * Paper: A Neural Attention Model for Abstractive Sentence Summarization * Point of Contact: Alexander Rush * Size of downloaded dataset files: 578.41 MB * Size of the generated dataset: 962.96 MB * Total amount of disk used: 1.54 GB ### Dataset Summary Headline-generation on a corpus of article pairs from Gigaword consisting of around 4 million articles. Use the 'org\_data' provided by URL which is identical to URL but with better format. ### Supported Tasks and Leaderboards * 'summarization': This dataset can be used for Summarization, where given a document, the goal is to predict its summary. The model performance is evaluated using the ROUGE metric. The leaderboard for this task is available here. ### Languages English. Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. * 'document': a 'string' feature. * 'summary': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization From the paper: > > For our training set, we pair the headline of each article with its first sentence to create an input-summary pair. While the model could in theory be trained on any pair, Gigaword contains many spurious headline-article pairs. We therefore prune training based on the following heuristic filters: (1) Are there no non-stop-words in common? (2) Does the title contain a byline or other extraneous editing marks? (3) Does the title have a question mark or colon? After applying these filters, the training set consists of roughly J = 4 million title-article pairs. We apply a minimal preprocessing step using PTB tokenization, lower-casing, replacing all digit characters with #, and replacing of word types seen less than 5 times with UNK. We also remove all articles from the time-period of the DUC evaluation release. > The complete input training vocabulary consists of 119 million word tokens and 110K unique word types with an average sentence size of 31.3 words. The headline vocabulary consists of 31 million tokens and 69K word types with the average title of length 8.3 words (note that this is significantly shorter than the DUC summaries). On average there are 4.6 overlapping word types between the headline and the input; although only 2.6 in the > first 75-characters of the input. > > > #### Who are the source language producers? From the paper: > > For training data for both tasks, we utilize the annotated Gigaword data set (Graff et al., 2003; Napoles et al., 2012), which consists of standard Gigaword, preprocessed with Stanford CoreNLP tools (Manning et al., 2014). > > > ### Annotations #### Annotation process Annotations are inherited from the annotated Gigaword data set. Additional information from the paper: > > Our model only uses annotations for tokenization and sentence separation, although several of the baselines use parsing and tagging as well. > > > #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @lewtun, @lhoestq, @thomwolf for adding this dataset.
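The normalization steps quoted above (lower-casing, mapping digit characters to `#`, and replacing rare word types with `UNK`) are easy to sketch. The following is a minimal illustration, not the authors' code — the helper name and the toy vocabulary are assumptions; only the `#`/`UNK` conventions and the minimum-frequency idea come from the paper excerpt:

```python
import re

def normalize(tokens, vocab):
    """Lower-case each token, map digit characters to '#', and replace
    out-of-vocabulary types with 'UNK', as described in the card."""
    out = []
    for tok in tokens:
        tok = re.sub(r"\d", "#", tok.lower())
        out.append(tok if tok in vocab else "UNK")
    return out

# Toy vocabulary standing in for the word types seen at least 5 times
# in the real corpus (after lower-casing and digit replacement).
vocab = {"the", "score", "was", "to", "#", "##"}
print(normalize(["The", "score", "was", "21", "to", "7", "Bartholomew"], vocab))
# → ['the', 'score', 'was', '##', 'to', '#', 'UNK']
```

PTB tokenization itself is omitted here; the sketch assumes the input is already tokenized.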
1e017f6baa597bef56ba45bb54e2fa9754522bae
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Repository](https://github.com/TevenLeScao/glucose)** - **[Paper](https://arxiv.org/abs/2009.07758)** - **Point of Contact:** [[email protected]](mailto:[email protected]) ### Dataset Summary GLUCOSE: GeneraLized and COntextualized Story Explanations, is a novel conceptual framework and dataset for commonsense reasoning. Given a short story and a sentence X in the story, GLUCOSE captures ten dimensions of causal explanation related to X. These dimensions, inspired by human cognitive psychology, cover often-implicit causes and effects of X, including events, location, possession, and other attributes. ### Supported Tasks and Leaderboards Common sense inference of: 1. Causes 2. Emotions motivating an event 3. Locations enabling an event 4. Possession states enabling an event 5. Other attributes enabling an event 6. Consequences 7. Emotions caused by an event 8. Changes in location caused by an event 9. 
Changes in possession caused by an event 10. Other attributes that may be changed by an event ### Languages English, monolingual ## Dataset Structure ### Data Instances ``` { "experiment_id": "e56c7c3e-4660-40fb-80d0-052d566d676a__4", "story_id": "e56c7c3e-4660-40fb-80d0-052d566d676a", "worker_id": 19, "submission_time_normalized": "20190930", "worker_quality_rating": 3, "selected_sentence_index": 4, "story": "It was bedtime at our house. Two of the three kids hit the pillow and fall asleep. The third is a trouble maker. For two hours he continues to get out of bed and want to play. Finally he becomes tired and falls asleep.", "selected_sentence": "Finally he becomes tired and falls asleep.", "1_specificNL": "The third kid continues to get out of bed and wants to play >Causes/Enables> The kid finally becomes tired and falls asleep", "1_specificStructured": "{The third kid}_[subject] {continues}_[verb] {to }_[preposition1] {get out of bed}_[object1] {and wants to play}_[object2] >Causes/Enables> {The kid}_[subject] {finally becomes}_[verb] {tired}_[object1] {and falls asleep}_[object2]", "1_generalNL": "Someone_A doesn't want to go to sleep >Causes/Enables> Someone_A finally falls asleep", "1_generalStructured": "{Someone_A}_[subject] {doesn't want}_[verb] {to }_[preposition1] {go to sleep}_[object1] >Causes/Enables> {Someone_A}_[subject] {finally falls}_[verb] {asleep}_[object1]", "2_specificNL": "escaped", "2_specificStructured": "escaped", "2_generalNL": "escaped", "2_generalStructured": "escaped", "3_specificNL": "The third kid is in bed >Enables> The kid finally becomes tired and falls asleep", "3_specificStructured": "{The third kid}_[subject] {is}_[verb] {in}_[preposition] {bed}_[object] >Enables> {The kid}_[subject] {finally becomes}_[verb] {tired}_[object1] {and falls asleep}_[object2]", "3_generalNL": "Someone_A is in bed >Enables> Someone_A falls asleep", "3_generalStructured": "{Someone_A}_[subject] {is}_[verb] {in}_[preposition] {bed}_[object] >Enables> 
{Someone_A}_[subject] {falls}_[verb] {asleep}_[object1]", "4_specificNL": "escaped", "4_specificStructured": "escaped", "4_generalNL": "escaped", "4_generalStructured": "escaped", "5_specificNL": "escaped", "5_specificStructured": "escaped", "5_generalNL": "escaped", "5_generalStructured": "escaped", "6_specificNL": "escaped", "6_specificStructured": "escaped", "6_generalNL": "escaped", "6_generalStructured": "escaped", "7_specificNL": "escaped", "7_specificStructured": "escaped", "7_generalNL": "escaped", "7_generalStructured": "escaped", "8_specificNL": "escaped", "8_specificStructured": "escaped", "8_generalNL": "escaped", "8_generalStructured": "escaped", "9_specificNL": "escaped", "9_specificStructured": "escaped", "9_generalNL": "escaped", "9_generalStructured": "escaped", "10_specificNL": "escaped", "10_specificStructured": "escaped", "10_generalNL": "escaped", "10_generalStructured": "escaped", "number_filled_in": 7 } ``` ### Data Fields - __experiment_id__: a randomly generated alphanumeric sequence for a given story with the sentence index appended at the end after two underscores. Example: cbee2b5a-f2f9-4bca-9630-6825b1e36c13__0 - __story_id__: a random alphanumeric identifier for the story. Example: e56c7c3e-4660-40fb-80d0-052d566d676a - __worker_id__: each worker has a unique identification number. Example: 21 - __submission_time_normalized__: the time of submission in the format YYYYMMDD. Example: 20200115 - __worker_quality_assessment__: rating for the worker on the assignment in the row. Example: 2 - __selected_sentence_index__: the index of a given sentence in a story. Example: 0 - __story__: contains the full text of the ROC story that was used for the HIT. Example: It was bedtime at our house. Two of the three kids hit the pillow and fall asleep. The third is a trouble maker. For two hours he continues to get out of bed and want to play. Finally he becomes tired and falls asleep. 
- __selected_sentence__: the sentence from the story that is being annotated. Example: It was bedtime at our house. - __[1-10]\_[specific/general][NL/Structured]__: This is the primary data collected. It provides the common sense knowledge about the related stories and those general rules about the world derived from the specific statements. For each of the ten relationships, there are four columns. The specific columns give the specific statements from the story. The general statements give the corresponding generalization. The NL columns are formatted in natural language, whereas the structured columns contain indications of the slots used to fill in the data. Example: - __1_specificNL__: "The school has a football team >Causes/Enables> The football game was last weekend" - __1_specificStructured__: "{The school }\_[subject] {has }\_[verb] {a football team }\_[object1] >Causes/Enables> {The football game }\_[subject] {was last weekend }\_[verb]" - __1_generalNL__: "Somewhere_A (that is a school ) has Something_A (that is a sports team ) >Causes/Enables> The game was last weekend" - __1_generalStructured__: "{Somewhere_A ||that is a school ||}\_[subject] {has }\_[verb] {Something_A ||that is a sports team ||}\_[object1] >Causes/Enables> {The game }\_[subject] {was last weekend }\_[verb]" - __number\_filled\_in__: number of dimensions filled in for the assignment. Example: 4 ### Data Splits Train split: 65,521 examples Test splits: 500 examples, without worker id and rating, number filled in, and structured text. ## Dataset Creation ### Curation Rationale When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. 
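The slot-annotated structured strings above follow a regular pattern — `{text}_[slot]` pairs on either side of a `>Relation>` marker — so they can be unpacked mechanically. A minimal parsing sketch, assuming only the format visible in the examples (the function and key names are ours, not part of the dataset):

```python
import re

SLOT_RE = re.compile(r"\{(.*?)\}_\[(.*?)\]")   # {text}_[slot] pairs
REL_RE = re.compile(r">([A-Za-z/]+)>")          # e.g. >Causes/Enables>, >Enables>

def parse_structured(s):
    """Split a structured annotation into antecedent, relation, and
    consequent, extracting (text, slot) pairs from each side."""
    left, relation, right = REL_RE.split(s)
    pairs = lambda part: [(t.strip(), slot) for t, slot in SLOT_RE.findall(part)]
    return {"antecedent": pairs(left), "relation": relation, "consequent": pairs(right)}

example = ("{The school }_[subject] {has }_[verb] {a football team }_[object1] "
           ">Causes/Enables> {The football game }_[subject] {was last weekend }_[verb]")
parsed = parse_structured(example)
print(parsed["relation"])    # → Causes/Enables
print(parsed["antecedent"])  # → [('The school', 'subject'), ('has', 'verb'), ('a football team', 'object1')]
```

The same pattern also covers the general statements, since the `||that is a ...||` qualifiers stay inside the braces.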
### Source Data #### Initial Data Collection and Normalization Initial text from ROCStories #### Who are the source language producers? Amazon Mechanical Turk. ### Annotations #### Annotation process To enable developing models that can build mental models of narratives, we aimed to crowdsource a large, quality-monitored dataset. Beyond the scalability benefits, using crowd workers (as opposed to a small set of expert annotators) ensures diversity of thought, thus broadening coverage of a common-sense knowledge resource. The annotation task is complex: it requires annotators to understand different causal dimensions in a variety of contexts and to come up with generalized theories beyond the story context. For strict quality control, we designed a three-stage knowledge acquisition pipeline for crowdsourcing the GLUCOSE dataset on the Amazon Mechanical Turk Platform. The workers first go through a qualification test where they must score at least 90% on 10 multiple-choice questions on select GLUCOSE dimensions. Next, qualified workers can work on the main GLUCOSE data collection task: given a story S and a story sentence X, they are asked to fill in (allowing for non-applicable) all ten GLUCOSE dimensions, getting step-by-step guidance from the GLUCOSE data acquisition interface. To ensure data consistency, the same workers answer all dimensions for an S, X pair. Finally, the submissions are reviewed by an expert who rates each worker on a scale from 0 to 3, and provides feedback on how to improve. Our final UIs are the result of more than six rounds of pilot studies, iteratively improving the interaction elements, functionality, dimension definitions, instructions, and examples. #### Who are the annotators? Amazon Mechanical Turk workers, with feedback from an expert. ### Personal and Sensitive Information No personal or sensitive information. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Nasrin Mostafazadeh, Aditya Kalyanpur, Lori Moon, David Buchanan, Lauren Berkowitz, Or Biran, Jennifer Chu-Carroll, from Elemental Cognition ### Licensing Information Creative Commons Attribution-NonCommercial 4.0 International Public License ### Citation Information ``` @inproceedings{mostafazadeh2020glucose, title={GLUCOSE: GeneraLized and COntextualized Story Explanations}, author={Nasrin Mostafazadeh and Aditya Kalyanpur and Lori Moon and David Buchanan and Lauren Berkowitz and Or Biran and Jennifer Chu-Carroll}, year={2020}, booktitle={The Conference on Empirical Methods in Natural Language Processing}, publisher={Association for Computational Linguistics} } ``` ### Contributions Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
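Because unannotated dimensions are stored as the literal string `"escaped"`, a small helper can recover which of the ten dimensions were actually filled in for a row. This is an illustrative sketch (the helper name is ours; the field layout is taken from the Data Instances example):

```python
def filled_dimensions(example):
    """Indices (1-10) of the causal dimensions whose specific
    natural-language answer is present rather than 'escaped'."""
    return [d for d in range(1, 11) if example[f"{d}_specificNL"] != "escaped"]

# A toy row mirroring the Data Instances example, where only
# dimensions 1 and 3 were annotated.
row = {f"{d}_specificNL": "escaped" for d in range(1, 11)}
row["1_specificNL"] = ("The third kid continues to get out of bed and wants to play "
                       ">Causes/Enables> The kid finally becomes tired and falls asleep")
row["3_specificNL"] = ("The third kid is in bed >Enables> "
                       "The kid finally becomes tired and falls asleep")
print(filled_dimensions(row))  # → [1, 3]
```

Summing `len(filled_dimensions(row))` over rows should track the `number_filled_in` field, which counts the dimensions completed per assignment.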
glucose
[ "task_categories:fill-mask", "task_categories:text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-ROC-stories", "language:en", "license:cc-by-4.0", "commonsense-inference", "arxiv:2009.07758", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-ROC-stories"], "task_categories": ["fill-mask", "text-generation"], "paperswithcode_id": "glucose", "pretty_name": "GLUCOSE", "tags": ["commonsense-inference"], "dataset_info": {"features": [{"name": "experiment_id", "dtype": "string"}, {"name": "story_id", "dtype": "string"}, {"name": "worker_id", "dtype": "int64"}, {"name": "worker_ids", "dtype": "string"}, {"name": "submission_time_normalized", "dtype": "string"}, {"name": "worker_quality_assessment", "dtype": "int64"}, {"name": "selected_sentence_index", "dtype": "int64"}, {"name": "story", "dtype": "string"}, {"name": "selected_sentence", "dtype": "string"}, {"name": "number_filled_in", "dtype": "int64"}, {"name": "1_specificNL", "dtype": "string"}, {"name": "1_specificStructured", "dtype": "string"}, {"name": "1_generalNL", "dtype": "string"}, {"name": "1_generalStructured", "dtype": "string"}, {"name": "2_specificNL", "dtype": "string"}, {"name": "2_specificStructured", "dtype": "string"}, {"name": "2_generalNL", "dtype": "string"}, {"name": "2_generalStructured", "dtype": "string"}, {"name": "3_specificNL", "dtype": "string"}, {"name": "3_specificStructured", "dtype": "string"}, {"name": "3_generalNL", "dtype": "string"}, {"name": "3_generalStructured", "dtype": "string"}, {"name": "4_specificNL", "dtype": "string"}, {"name": "4_specificStructured", "dtype": "string"}, {"name": "4_generalNL", "dtype": "string"}, {"name": "4_generalStructured", "dtype": "string"}, {"name": "5_specificNL", "dtype": "string"}, {"name": "5_specificStructured", "dtype": "string"}, {"name": "5_generalNL", "dtype": "string"}, {"name": "5_generalStructured", "dtype": "string"}, {"name": "6_specificNL", "dtype": "string"}, {"name": "6_specificStructured", "dtype": "string"}, {"name": "6_generalNL", 
"dtype": "string"}, {"name": "6_generalStructured", "dtype": "string"}, {"name": "7_specificNL", "dtype": "string"}, {"name": "7_specificStructured", "dtype": "string"}, {"name": "7_generalNL", "dtype": "string"}, {"name": "7_generalStructured", "dtype": "string"}, {"name": "8_specificNL", "dtype": "string"}, {"name": "8_specificStructured", "dtype": "string"}, {"name": "8_generalNL", "dtype": "string"}, {"name": "8_generalStructured", "dtype": "string"}, {"name": "9_specificNL", "dtype": "string"}, {"name": "9_specificStructured", "dtype": "string"}, {"name": "9_generalNL", "dtype": "string"}, {"name": "9_generalStructured", "dtype": "string"}, {"name": "10_specificNL", "dtype": "string"}, {"name": "10_specificStructured", "dtype": "string"}, {"name": "10_generalNL", "dtype": "string"}, {"name": "10_generalStructured", "dtype": "string"}], "config_name": "glucose", "splits": [{"name": "train", "num_bytes": 204605370, "num_examples": 65522}, {"name": "test", "num_bytes": 355757, "num_examples": 500}], "download_size": 30362105, "dataset_size": 204961127}}
2024-01-18T11:04:19+00:00
[ "2009.07758" ]
[ "en" ]
TAGS #task_categories-fill-mask #task_categories-text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-ROC-stories #language-English #license-cc-by-4.0 #commonsense-inference #arxiv-2009.07758 #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository - Paper - Point of Contact: glucose@URL ### Dataset Summary GLUCOSE: GeneraLized and COntextualized Story Explanations, is a novel conceptual framework and dataset for commonsense reasoning. Given a short story and a sentence X in the story, GLUCOSE captures ten dimensions of causal explanation related to X. These dimensions, inspired by human cognitive psychology, cover often-implicit causes and effects of X, including events, location, possession, and other attributes. ### Supported Tasks and Leaderboards Common sense inference of: 1. Causes 2. Emotions motivating an event 3. Locations enabling an event 4. Possession states enabling an event 5. Other attributes enabling an event 6. Consequences 7. Emotions caused by an event 8. Changes in location caused by an event 9. Changes in possession caused by an event 10. Other attributes that may be changed by an event ### Languages English, monolingual ## Dataset Structure ### Data Instances ### Data Fields - __experiment_id__: a randomly generated alphanumeric sequence for a given story with the sentence index appended at the end after two underscores. Example: cbee2b5a-f2f9-4bca-9630-6825b1e36c13__0 - __story_id__: a random alphanumeric identifier for the story. Example: e56c7c3e-4660-40fb-80d0-052d566d676a - __worker_id__: each worker has a unique identificaiton number. Example: 21 - __submission_time_normalized__: the time of submission in the format YYYYMMDD. 
Example: 20200115

- __worker_quality_assessment__: rating for the worker on the assignment in the row. Example: 2
- __selected_sentence_index__: the index of a given sentence in a story. Example: 0
- __story__: contains the full text of the ROC story that was used for the HIT. Example: It was bedtime at our house. Two of the three kids hit the pillow and fall asleep. The third is a trouble maker. For two hours he continues to get out of bed and want to play. Finally he becomes tired and falls asleep.
- __selected_sentence__: the sentence from the story that is being annotated. Example: It was bedtime at our house.
- __[1-10]\_[specific/general][NL/Structured]__: This is the primary data collected. It provides the common sense knowledge about the related stories and those general rules about the world derived from the specific statements. For each of the ten relationships, there are four columns. The specific columns give the specific statements from the story. The general columns give the corresponding generalizations. The NL columns are formatted in natural language, whereas the structured columns contain indications of the slots used to fill in the data. Example:
  - __1_specificNL__: "The school has a football team >Causes/Enables> The football game was last weekend"
  - __1_specificStructured__: "{The school }\_[subject] {has }\_[verb] {a football team }\_[object1] >Causes/Enables> {The football game }\_[subject] {was last weekend }\_[verb]"
  - __1_generalNL__: "Somewhere_A (that is a school ) has Something_A (that is a sports team ) >Causes/Enables> The game was last weekend"
  - __1_generalStructured__: "{Somewhere_A ||that is a school ||}\_[subject] {has }\_[verb] {Something_A ||that is a sports team ||}\_[object1] >Causes/Enables> {The game }\_[subject] {was last weekend }\_[verb]"
- __number\_filled\_in__: number of dimensions filled in for the assignment. Example: 4

### Data Splits

- Train split: 65,521 examples
- Test split: 500 examples, without worker id and rating, number filled in, or structured text.

## Dataset Creation

### Curation Rationale

When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context.

### Source Data

#### Initial Data Collection and Normalization

Initial text from ROCStories.

#### Who are the source language producers?

Amazon Mechanical Turk workers.

### Annotations

#### Annotation process

To enable developing models that can build mental models of narratives, we aimed to crowdsource a large, quality-monitored dataset. Beyond the scalability benefits, using crowd workers (as opposed to a small set of expert annotators) ensures diversity of thought, thus broadening coverage of a common-sense knowledge resource. The annotation task is complex: it requires annotators to understand different causal dimensions in a variety of contexts and to come up with generalized theories beyond the story context. For strict quality control, we designed a three-stage knowledge acquisition pipeline for crowdsourcing the GLUCOSE dataset on the Amazon Mechanical Turk Platform. The workers first go through a qualification test where they must score at least 90% on 10 multiple-choice questions on select GLUCOSE dimensions. Next, qualified workers can work on the main GLUCOSE data collection task: given a story S and a story sentence X, they are asked to fill in (allowing for non-applicable) all ten GLUCOSE dimensions, getting step-by-step guidance from the GLUCOSE data acquisition interface. To ensure data consistency, the same workers answer all dimensions for an S, X pair. Finally, the submissions are reviewed by an expert who rates each worker on a scale from 0 to 3, and provides feedback on how to improve. Our final UIs are the result of more than six rounds of pilot studies, iteratively improving the interaction elements, functionality, dimension definitions, instructions, and examples.

#### Who are the annotators?

Amazon Mechanical Turk workers, with feedback from an expert.

### Personal and Sensitive Information

No personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

Nasrin Mostafazadeh, Aditya Kalyanpur, Lori Moon, David Buchanan, Lauren Berkowitz, Or Biran, Jennifer Chu-Carroll, from Elemental Cognition

### Licensing Information

Creative Commons Attribution-NonCommercial 4.0 International Public License

### Contributions

Thanks to @TevenLeScao for adding this dataset.
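The structured annotations shown in the Data Fields section wrap each slot filler as `{text }_[slot]` around the `>Causes/Enables>` relation marker. A minimal parser sketch for that format (the function name and regex are illustrative assumptions, not part of any released GLUCOSE tooling, and it assumes the raw data uses a plain `_[slot]` without the markdown escaping shown above):

```python
import re

# Matches "{span text }_[slot]" fragments in a GLUCOSE structured string.
SLOT_RE = re.compile(r"\{(.*?)\}_\[(\w+)\]")

def parse_structured(structured):
    """Split a structured annotation on the relation marker and return
    per-side lists of (slot, text) pairs."""
    antecedent, _, consequent = structured.partition(">Causes/Enables>")
    def slots(side):
        return [(slot, text.strip()) for text, slot in SLOT_RE.findall(side)]
    return {"antecedent": slots(antecedent), "consequent": slots(consequent)}

example = ("{The school }_[subject] {has }_[verb] {a football team }_[object1] "
           ">Causes/Enables> {The football game }_[subject] {was last weekend }_[verb]")
parsed = parse_structured(example)
```

The same regex also handles the general statements, since the `||that is a school ||` qualifiers sit inside the braces and contain no closing brace of their own.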
# Dataset Card for GLUE ## Table of Contents - [Dataset Card for GLUE](#dataset-card-for-glue) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [ax](#ax) - [cola](#cola) - [mnli](#mnli) - [mnli_matched](#mnli_matched) - [mnli_mismatched](#mnli_mismatched) - [mrpc](#mrpc) - [qnli](#qnli) - [qqp](#qqp) - [rte](#rte) - [sst2](#sst2) - [stsb](#stsb) - [wnli](#wnli) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [ax](#ax-1) - [cola](#cola-1) - [mnli](#mnli-1) - [mnli_matched](#mnli_matched-1) - [mnli_mismatched](#mnli_mismatched-1) - [mrpc](#mrpc-1) - [qnli](#qnli-1) - [qqp](#qqp-1) - [rte](#rte-1) - [sst2](#sst2-1) - [stsb](#stsb-1) - [wnli](#wnli-1) - [Data Fields](#data-fields) - [ax](#ax-2) - [cola](#cola-2) - [mnli](#mnli-2) - [mnli_matched](#mnli_matched-2) - [mnli_mismatched](#mnli_mismatched-2) - [mrpc](#mrpc-2) - [qnli](#qnli-2) - [qqp](#qqp-2) - [rte](#rte-2) - [sst2](#sst2-2) - [stsb](#stsb-2) - [wnli](#wnli-2) - [Data Splits](#data-splits) - [ax](#ax-3) - [cola](#cola-3) - [mnli](#mnli-3) - [mnli_matched](#mnli_matched-3) - [mnli_mismatched](#mnli_mismatched-3) - [mrpc](#mrpc-3) - [qnli](#qnli-3) - [qqp](#qqp-3) - [rte](#rte-3) - [sst2](#sst2-3) - [stsb](#stsb-3) - [wnli](#wnli-3) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact 
of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://gluebenchmark.com/
- **Repository:** https://github.com/nyu-mll/GLUE-baselines
- **Paper:** https://arxiv.org/abs/1804.07461
- **Leaderboard:** https://gluebenchmark.com/leaderboard
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.00 GB
- **Size of the generated dataset:** 240.84 MB
- **Total amount of disk used:** 1.24 GB

### Dataset Summary

GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.

### Supported Tasks and Leaderboards

The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:

#### ax

A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.

#### cola

The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.

#### mnli

The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.

#### mnli_matched

The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.

#### mnli_mismatched

The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.

#### mrpc

The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.

#### qnli

The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
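The QNLI pairing-and-filtering construction described above can be sketched in a few lines. This is a simplified, hypothetical illustration: the tokenization, sentence splitting, and overlap threshold here are assumptions for demonstration, not the benchmark's actual procedure.

```python
import re

def qnli_style_pairs(question, context, min_overlap=1):
    """Form (question, sentence) pairs from a context paragraph, dropping
    pairs whose lexical overlap with the question falls below a threshold."""
    def tokenize(s):
        return set(re.findall(r"[a-z0-9]+", s.lower()))
    q_tokens = tokenize(question)
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", context) if s.strip()]
    pairs = []
    for sent in sentences:
        if len(q_tokens & tokenize(sent)) >= min_overlap:
            pairs.append((question, sent))
    return pairs

question = "When did the third Digimon series begin?"
context = ("Digimon Tamers is the third series. It began in 2001. "
           "Critics praised the darker tone.")
pairs = qnli_style_pairs(question, context)
```

Note how a crude overlap filter can discard even the answer-bearing sentence ("It began in 2001." shares no exact token with the question), which is part of why the benchmark treats lexical overlap as an unreliable cue.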
#### qqp The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. #### rte The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency. #### sst2 The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels. #### stsb The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5. #### wnli The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. 
They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).

### Languages

The language data in GLUE is in English (BCP-47 `en`).

## Dataset Structure

### Data Instances

#### ax

- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 0.24 MB
- **Total amount of disk used:** 0.46 MB

An example of 'test' looks as follows.
```
{
  "premise": "The cat sat on the mat.",
  "hypothesis": "The cat did not sit on the mat.",
  "label": -1,
  "idx": 0
}
```

#### cola

- **Size of downloaded dataset files:** 0.38 MB
- **Size of the generated dataset:** 0.61 MB
- **Total amount of disk used:** 0.99 MB

An example of 'train' looks as follows.
```
{
  "sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
  "label": 1,
  "idx": 0
}
```

#### mnli

- **Size of downloaded dataset files:** 312.78 MB
- **Size of the generated dataset:** 82.47 MB
- **Total amount of disk used:** 395.26 MB

An example of 'train' looks as follows.
```
{
  "premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
  "hypothesis": "Product and geography are what make cream skimming work.",
  "label": 1,
  "idx": 0
}
```

#### mnli_matched

- **Size of downloaded dataset files:** 312.78 MB
- **Size of the generated dataset:** 3.69 MB
- **Total amount of disk used:** 316.48 MB

An example of 'test' looks as follows.
```
{
  "premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
  "hypothesis": "Hierbas is a name worth looking out for.",
  "label": -1,
  "idx": 0
}
```

#### mnli_mismatched

- **Size of downloaded dataset files:** 312.78 MB
- **Size of the generated dataset:** 3.91 MB
- **Total amount of disk used:** 316.69 MB

An example of 'test' looks as follows.
```
{
  "premise": "What have you decided, what are you going to do?",
  "hypothesis": "So what's your decision?",
  "label": -1,
  "idx": 0
}
```

#### mrpc

- **Size of downloaded dataset files:** ??
- **Size of the generated dataset:** 1.5 MB
- **Total amount of disk used:** ??

An example of 'train' looks as follows.
```
{
  "sentence1": "Amrozi accused his brother, whom he called \"the witness\", of deliberately distorting his evidence.",
  "sentence2": "Referring to him as only \"the witness\", Amrozi accused his brother of deliberately distorting his evidence.",
  "label": 1,
  "idx": 0
}
```

#### qnli

- **Size of downloaded dataset files:** ??
- **Size of the generated dataset:** 28 MB
- **Total amount of disk used:** ??

An example of 'train' looks as follows.
```
{
  "question": "When did the third Digimon series begin?",
  "sentence": "Unlike the two seasons before it and most of the seasons that followed, Digimon Tamers takes a darker and more realistic approach to its story featuring Digimon who do not reincarnate after their deaths and more complex character development in the original Japanese.",
  "label": 1,
  "idx": 0
}
```

#### qqp

- **Size of downloaded dataset files:** ??
- **Size of the generated dataset:** 107 MB - **Total amount of disk used:** ?? An example of 'train' looks as follows. ``` { "question1": "How is the life of a math student? Could you describe your own experiences?", "question2": "Which level of prepration is enough for the exam jlpt5?", "label": 0, "idx": 0 } ``` #### rte - **Size of downloaded dataset files:** ?? - **Size of the generated dataset:** 1.9 MB - **Total amount of disk used:** ?? An example of 'train' looks as follows. ``` { "sentence1": "No Weapons of Mass Destruction Found in Iraq Yet.", "sentence2": "Weapons of Mass Destruction Found in Iraq.", "label": 1, "idx": 0 } ``` #### sst2 - **Size of downloaded dataset files:** ?? - **Size of the generated dataset:** 4.9 MB - **Total amount of disk used:** ?? An example of 'train' looks as follows. ``` { "sentence": "hide new secretions from the parental units", "label": 0, "idx": 0 } ``` #### stsb - **Size of downloaded dataset files:** ?? - **Size of the generated dataset:** 1.2 MB - **Total amount of disk used:** ?? An example of 'train' looks as follows. ``` { "sentence1": "A plane is taking off.", "sentence2": "An air plane is taking off.", "label": 5.0, "idx": 0 } ``` #### wnli - **Size of downloaded dataset files:** ?? - **Size of the generated dataset:** 0.18 MB - **Total amount of disk used:** ?? An example of 'train' looks as follows. ``` { "sentence1": "I stuck a pin through a carrot. When I pulled the pin out, it had a hole.", "sentence2": "The carrot had a hole.", "label": 1, "idx": 0 } ``` ### Data Fields The data fields are the same among all splits. #### ax - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### cola - `sentence`: a `string` feature. - `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1). 
- `idx`: a `int32` feature. #### mnli - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### mnli_matched - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### mnli_mismatched - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: a `int32` feature. #### mrpc - `sentence1`: a `string` feature. - `sentence2`: a `string` feature. - `label`: a classification label, with possible values including `not_equivalent` (0), `equivalent` (1). - `idx`: a `int32` feature. #### qnli - `question`: a `string` feature. - `sentence`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1). - `idx`: a `int32` feature. #### qqp - `question1`: a `string` feature. - `question2`: a `string` feature. - `label`: a classification label, with possible values including `not_duplicate` (0), `duplicate` (1). - `idx`: a `int32` feature. #### rte - `sentence1`: a `string` feature. - `sentence2`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1). - `idx`: a `int32` feature. #### sst2 - `sentence`: a `string` feature. - `label`: a classification label, with possible values including `negative` (0), `positive` (1). - `idx`: a `int32` feature. #### stsb - `sentence1`: a `string` feature. - `sentence2`: a `string` feature. - `label`: a float32 regression label, with possible values from 0 to 5. - `idx`: a `int32` feature. #### wnli - `sentence1`: a `string` feature. 
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `not_entailment` (0), `entailment` (1).
- `idx`: a `int32` feature.

### Data Splits

#### ax

|   |test|
|---|---:|
|ax |1104|

#### cola

|    |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|

#### mnli

|    |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|

#### mnli_matched

|            |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|

#### mnli_mismatched

|               |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|

#### mrpc

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qqp

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### rte

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### sst2

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### stsb

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### wnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The primary GLUE tasks are built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset. ### Citation Information If you use GLUE, please cite all the datasets you use. 
In addition, we encourage you to use the following BibTeX citation for GLUE itself:

```
@inproceedings{wang2019glue,
  title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
  author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
  note={In the Proceedings of ICLR.},
  year={2019}
}
```

If you evaluate using GLUE, we also highly recommend citing the papers that originally introduced the nine GLUE tasks, both to give the original authors their due credit and because venues will expect papers to describe the data they evaluate on. The following provides BibTeX for all of the GLUE tasks, except QQP, for which we recommend adding a footnote to this page: https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs

```
@article{warstadt2018neural,
  title={Neural Network Acceptability Judgments},
  author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R.},
  journal={arXiv preprint 1805.12471},
  year={2018}
}

@inproceedings{socher2013recursive,
  title={Recursive deep models for semantic compositionality over a sentiment treebank},
  author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},
  booktitle={Proceedings of EMNLP},
  pages={1631--1642},
  year={2013}
}

@inproceedings{dolan2005automatically,
  title={Automatically constructing a corpus of sentential paraphrases},
  author={Dolan, William B and Brockett, Chris},
  booktitle={Proceedings of the International Workshop on Paraphrasing},
  year={2005}
}

@book{agirre2007semantic,
  editor={Agirre, Eneko and M{\`a}rquez, Llu{\'i}s and Wicentowski, Richard},
  title={Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)},
  month={June},
  year={2007},
  address={Prague, Czech Republic},
  publisher={Association for Computational Linguistics},
}

@inproceedings{williams2018broad,
  author={Williams, Adina and Nangia, Nikita and Bowman, Samuel R.},
  title={A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference},
  booktitle={Proceedings of NAACL-HLT},
  year={2018}
}

@inproceedings{rajpurkar2016squad,
  author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy},
  title={{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text},
  booktitle={Proceedings of EMNLP},
  year={2016},
  publisher={Association for Computational Linguistics},
  pages={2383--2392},
  location={Austin, Texas},
}

@incollection{dagan2006pascal,
  title={The {PASCAL} recognising textual entailment challenge},
  author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo},
  booktitle={Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment},
  pages={177--190},
  year={2006},
  publisher={Springer}
}

@article{bar2006second,
  title={The second {PASCAL} recognising textual entailment challenge},
  author={Bar Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan},
  year={2006}
}

@inproceedings{giampiccolo2007third,
  title={The third {PASCAL} recognizing textual entailment challenge},
  author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill},
  booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing},
  pages={1--9},
  year={2007},
  organization={Association for Computational Linguistics},
}

@inproceedings{bentivogli2009fifth,
  title={The Fifth {PASCAL} Recognizing Textual Entailment Challenge},
  author={Bentivogli, Luisa and Dagan, Ido and Dang, Hoa Trang and Giampiccolo, Danilo and Magnini, Bernardo},
  booktitle={TAC},
  year={2009}
}

@inproceedings{levesque2011winograd,
  title={The {W}inograd schema challenge},
  author={Levesque, Hector J and Davis, Ernest and Morgenstern, Leora},
  booktitle={{AAAI} Spring Symposium: Logical Formalizations of Commonsense Reasoning},
  volume={46},
  pages={47},
  year={2011}
}
```

### Contributions

Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
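The `label` fields described in the Data Fields section above are integer-encoded class labels. The sketch below is stdlib-only and illustrative (the helper names are invented; the `datasets` library exposes the same mappings through `ClassLabel.int2str`/`str2int`); `stsb` is omitted because its `label` is a float regression target.

```python
# Label name lists per GLUE config, copied from the "Data Fields" section.
# Index position doubles as the integer class id.
GLUE_LABELS = {
    "mnli": ["entailment", "neutral", "contradiction"],
    "mrpc": ["not_equivalent", "equivalent"],
    "qnli": ["entailment", "not_entailment"],
    "qqp": ["not_duplicate", "duplicate"],
    "rte": ["entailment", "not_entailment"],
    "sst2": ["negative", "positive"],
    "wnli": ["not_entailment", "entailment"],
}

def int2str(config: str, label_id: int) -> str:
    """Map an integer class id to its label name for a given config."""
    return GLUE_LABELS[config][label_id]

def str2int(config: str, label_name: str) -> int:
    """Map a label name back to its integer class id for a given config."""
    return GLUE_LABELS[config].index(label_name)

print(int2str("mnli", 2))           # -> contradiction
print(str2int("sst2", "positive"))  # -> 1
```

Note that the same label name can carry different ids across configs (for example, `entailment` is 0 in `qnli` but 1 in `wnli`), so predictions must always be decoded with the mapping of the config they came from.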
glue
[ "task_categories:text-classification", "task_ids:acceptability-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:sentiment-classification", "task_ids:text-scoring", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "qa-nli", "coreference-nli", "paraphrase-identification", "arxiv:1804.07461", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "config_names": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "tags": ["qa-nli", "coreference-nli", "paraphrase-identification"], "dataset_info": [{"config_name": "ax", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 237694, "num_examples": 1104}], "download_size": 80767, "dataset_size": 237694}, {"config_name": "cola", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "unacceptable", "1": "acceptable"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 484869, "num_examples": 8551}, {"name": "validation", "num_bytes": 60322, "num_examples": 1043}, {"name": "test", "num_bytes": 60513, "num_examples": 1063}], "download_size": 326394, "dataset_size": 605704}, {"config_name": "mnli", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 74619646, "num_examples": 392702}, {"name": "validation_matched", "num_bytes": 1833783, "num_examples": 
9815}, {"name": "validation_mismatched", "num_bytes": 1949231, "num_examples": 9832}, {"name": "test_matched", "num_bytes": 1848654, "num_examples": 9796}, {"name": "test_mismatched", "num_bytes": 1950703, "num_examples": 9847}], "download_size": 57168425, "dataset_size": 82202017}, {"config_name": "mnli_matched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 1833783, "num_examples": 9815}, {"name": "test", "num_bytes": 1848654, "num_examples": 9796}], "download_size": 2435055, "dataset_size": 3682437}, {"config_name": "mnli_mismatched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 1949231, "num_examples": 9832}, {"name": "test", "num_bytes": 1950703, "num_examples": 9847}], "download_size": 2509009, "dataset_size": 3899934}, {"config_name": "mrpc", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_equivalent", "1": "equivalent"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 943843, "num_examples": 3668}, {"name": "validation", "num_bytes": 105879, "num_examples": 408}, {"name": "test", "num_bytes": 442410, "num_examples": 1725}], "download_size": 1033400, "dataset_size": 1492132}, {"config_name": "qnli", "features": [{"name": "question", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], 
"splits": [{"name": "train", "num_bytes": 25612443, "num_examples": 104743}, {"name": "validation", "num_bytes": 1368304, "num_examples": 5463}, {"name": "test", "num_bytes": 1373093, "num_examples": 5463}], "download_size": 19278324, "dataset_size": 28353840}, {"config_name": "qqp", "features": [{"name": "question1", "dtype": "string"}, {"name": "question2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_duplicate", "1": "duplicate"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 50900820, "num_examples": 363846}, {"name": "validation", "num_bytes": 5653754, "num_examples": 40430}, {"name": "test", "num_bytes": 55171111, "num_examples": 390965}], "download_size": 73982265, "dataset_size": 111725685}, {"config_name": "rte", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 847320, "num_examples": 2490}, {"name": "validation", "num_bytes": 90728, "num_examples": 277}, {"name": "test", "num_bytes": 974053, "num_examples": 3000}], "download_size": 1274409, "dataset_size": 1912101}, {"config_name": "sst2", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 4681603, "num_examples": 67349}, {"name": "validation", "num_bytes": 106252, "num_examples": 872}, {"name": "test", "num_bytes": 216640, "num_examples": 1821}], "download_size": 3331080, "dataset_size": 5004495}, {"config_name": "stsb", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "float32"}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 754791, 
"num_examples": 5749}, {"name": "validation", "num_bytes": 216064, "num_examples": 1500}, {"name": "test", "num_bytes": 169974, "num_examples": 1379}], "download_size": 766983, "dataset_size": 1140829}, {"config_name": "wnli", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_entailment", "1": "entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 107109, "num_examples": 635}, {"name": "validation", "num_bytes": 12162, "num_examples": 71}, {"name": "test", "num_bytes": 37889, "num_examples": 146}], "download_size": 63522, "dataset_size": 157160}], "configs": [{"config_name": "ax", "data_files": [{"split": "test", "path": "ax/test-*"}]}, {"config_name": "cola", "data_files": [{"split": "train", "path": "cola/train-*"}, {"split": "validation", "path": "cola/validation-*"}, {"split": "test", "path": "cola/test-*"}]}, {"config_name": "mnli", "data_files": [{"split": "train", "path": "mnli/train-*"}, {"split": "validation_matched", "path": "mnli/validation_matched-*"}, {"split": "validation_mismatched", "path": "mnli/validation_mismatched-*"}, {"split": "test_matched", "path": "mnli/test_matched-*"}, {"split": "test_mismatched", "path": "mnli/test_mismatched-*"}]}, {"config_name": "mnli_matched", "data_files": [{"split": "validation", "path": "mnli_matched/validation-*"}, {"split": "test", "path": "mnli_matched/test-*"}]}, {"config_name": "mnli_mismatched", "data_files": [{"split": "validation", "path": "mnli_mismatched/validation-*"}, {"split": "test", "path": "mnli_mismatched/test-*"}]}, {"config_name": "mrpc", "data_files": [{"split": "train", "path": "mrpc/train-*"}, {"split": "validation", "path": "mrpc/validation-*"}, {"split": "test", "path": "mrpc/test-*"}]}, {"config_name": "qnli", "data_files": [{"split": "train", "path": "qnli/train-*"}, {"split": "validation", "path": "qnli/validation-*"}, {"split": "test", 
"path": "qnli/test-*"}]}, {"config_name": "qqp", "data_files": [{"split": "train", "path": "qqp/train-*"}, {"split": "validation", "path": "qqp/validation-*"}, {"split": "test", "path": "qqp/test-*"}]}, {"config_name": "rte", "data_files": [{"split": "train", "path": "rte/train-*"}, {"split": "validation", "path": "rte/validation-*"}, {"split": "test", "path": "rte/test-*"}]}, {"config_name": "sst2", "data_files": [{"split": "train", "path": "sst2/train-*"}, {"split": "validation", "path": "sst2/validation-*"}, {"split": "test", "path": "sst2/test-*"}]}, {"config_name": "stsb", "data_files": [{"split": "train", "path": "stsb/train-*"}, {"split": "validation", "path": "stsb/validation-*"}, {"split": "test", "path": "stsb/test-*"}]}, {"config_name": "wnli", "data_files": [{"split": "train", "path": "wnli/train-*"}, {"split": "validation", "path": "wnli/validation-*"}, {"split": "test", "path": "wnli/test-*"}]}], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": 
{"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]}
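The `train-eval-index` entries at the end of the metadata above pair each config with a generic evaluation schema: `col_mapping` renames the config-specific columns to `text1`/`text2`/`target`, and `splits` names the train and eval splits. A minimal stdlib sketch of applying one such mapping (the MNLI entry is excerpted by hand from the metadata; the example sentence pair is invented for illustration):

```python
import json

# One entry excerpted by hand from the card's train-eval-index metadata.
train_eval_index = json.loads("""
{"config": "mnli", "task": "text-classification",
 "splits": {"train_split": "train", "eval_split": "validation_matched"},
 "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}
""")

# An invented MNLI-style example, keyed by the config's own column names.
example = {"premise": "A soccer game.", "hypothesis": "A sport is played.", "label": 0}

# Rename the config-specific columns to the generic evaluation schema.
mapped = {train_eval_index["col_mapping"][key]: value for key, value in example.items()}
print(mapped)  # -> {'text1': 'A soccer game.', 'text2': 'A sport is played.', 'target': 0}
```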
2024-01-30T07:41:18+00:00
[ "1804.07461" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #qa-nli #coreference-nli #paraphrase-identification #arxiv-1804.07461 #region-us
Dataset Card for GLUE ===================== Table of Contents ----------------- * Dataset Card for GLUE + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards * ax * cola * mnli * mnli\_matched * mnli\_mismatched * mrpc * qnli * qqp * rte * sst2 * stsb * wnli - Languages + Dataset Structure - Data Instances * ax * cola * mnli * mnli\_matched * mnli\_mismatched * mrpc * qnli * qqp * rte * sst2 * stsb * wnli - Data Fields * ax * cola * mnli * mnli\_matched * mnli\_mismatched * mrpc * qnli * qqp * rte * sst2 * stsb * wnli - Data Splits * ax * cola * mnli * mnli\_matched * mnli\_mismatched * mrpc * qnli * qqp * rte * sst2 * stsb * wnli + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: * Size of downloaded dataset files: 1.00 GB * Size of the generated dataset: 240.84 MB * Total amount of disk used: 1.24 GB ### Dataset Summary GLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems. ### Supported Tasks and Leaderboards The leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks: #### ax A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. 
Use a model trained on MultiNLI to produce predictions for this dataset. #### cola The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence. #### mnli The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data. #### mnli\_matched The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information. #### mnli\_mismatched The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information. #### mrpc The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent. #### qnli The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). 
The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. #### qqp The Quora Question Pairs dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. #### rte The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency. #### sst2 The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels. #### stsb The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5. 
#### wnli The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI). ### Languages The language data in GLUE is in English (BCP-47 'en'). Dataset Structure ----------------- ### Data Instances #### ax * Size of downloaded dataset files: 0.22 MB * Size of the generated dataset: 0.24 MB * Total amount of disk used: 0.46 MB An example of 'test' looks as follows. #### cola * Size of downloaded dataset files: 0.38 MB * Size of the generated dataset: 0.61 MB * Total amount of disk used: 0.99 MB An example of 'train' looks as follows. 
#### mnli * Size of downloaded dataset files: 312.78 MB * Size of the generated dataset: 82.47 MB * Total amount of disk used: 395.26 MB An example of 'train' looks as follows. #### mnli\_matched * Size of downloaded dataset files: 312.78 MB * Size of the generated dataset: 3.69 MB * Total amount of disk used: 316.48 MB An example of 'test' looks as follows. #### mnli\_mismatched * Size of downloaded dataset files: 312.78 MB * Size of the generated dataset: 3.91 MB * Total amount of disk used: 316.69 MB An example of 'test' looks as follows. #### mrpc * Size of downloaded dataset files: ?? * Size of the generated dataset: 1.5 MB * Total amount of disk used: ?? An example of 'train' looks as follows. #### qnli * Size of downloaded dataset files: ?? * Size of the generated dataset: 28 MB * Total amount of disk used: ?? An example of 'train' looks as follows. #### qqp * Size of downloaded dataset files: ?? * Size of the generated dataset: 107 MB * Total amount of disk used: ?? An example of 'train' looks as follows. #### rte * Size of downloaded dataset files: ?? * Size of the generated dataset: 1.9 MB * Total amount of disk used: ?? An example of 'train' looks as follows. #### sst2 * Size of downloaded dataset files: ?? * Size of the generated dataset: 4.9 MB * Total amount of disk used: ?? An example of 'train' looks as follows. #### stsb * Size of downloaded dataset files: ?? * Size of the generated dataset: 1.2 MB * Total amount of disk used: ?? An example of 'train' looks as follows. #### wnli * Size of downloaded dataset files: ?? * Size of the generated dataset: 0.18 MB * Total amount of disk used: ?? An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### ax * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2). * 'idx': a 'int32' feature. 
#### cola * 'sentence': a 'string' feature. * 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1). * 'idx': a 'int32' feature. #### mnli * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2). * 'idx': a 'int32' feature. #### mnli\_matched * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2). * 'idx': a 'int32' feature. #### mnli\_mismatched * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2). * 'idx': a 'int32' feature. #### mrpc * 'sentence1': a 'string' feature. * 'sentence2': a 'string' feature. * 'label': a classification label, with possible values including 'not\_equivalent' (0), 'equivalent' (1). * 'idx': a 'int32' feature. #### qnli * 'question': a 'string' feature. * 'sentence': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'not\_entailment' (1). * 'idx': a 'int32' feature. #### qqp * 'question1': a 'string' feature. * 'question2': a 'string' feature. * 'label': a classification label, with possible values including 'not\_duplicate' (0), 'duplicate' (1). * 'idx': a 'int32' feature. #### rte * 'sentence1': a 'string' feature. * 'sentence2': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'not\_entailment' (1). * 'idx': a 'int32' feature. #### sst2 * 'sentence': a 'string' feature. * 'label': a classification label, with possible values including 'negative' (0), 'positive' (1). * 'idx': a 'int32' feature. #### stsb * 'sentence1': a 'string' feature. * 'sentence2': a 'string' feature. 
* 'label': a float32 regression label, with possible values from 0 to 5. * 'idx': a 'int32' feature. #### wnli * 'sentence1': a 'string' feature. * 'sentence2': a 'string' feature. * 'label': a classification label, with possible values including 'not\_entailment' (0), 'entailment' (1). * 'idx': a 'int32' feature. ### Data Splits #### ax #### cola #### mnli #### mnli\_matched #### mnli\_mismatched #### mrpc #### qnli #### qqp #### rte #### sst2 #### stsb #### wnli Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The primary GLUE tasks are built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset. If you use GLUE, please cite all the datasets you use. In addition, we encourage you to use the following BibTeX citation for GLUE itself: If you evaluate using GLUE, we also highly recommend citing the papers that originally introduced the nine GLUE tasks, both to give the original authors their due credit and because venues will expect papers to describe the data they evaluate on. The following provides BibTeX for all of the GLUE tasks, except QQP, for which we recommend adding a footnote to this page: URL ### Contributions Thanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset.
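The RTE description above notes that the benchmark collapses three-class entailment labels into two classes for consistency: "neutral" and "contradiction" both become "not_entailment". A minimal sketch of that conversion (the function name is illustrative, not code from the benchmark):

```python
# Collapse a three-class NLI label to the two-class scheme used by RTE:
# anything that is not "entailment" maps to "not_entailment".
def collapse_to_two_class(label: str) -> str:
    return "entailment" if label == "entailment" else "not_entailment"

for original in ("entailment", "neutral", "contradiction"):
    print(original, "->", collapse_to_two_class(original))
```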
[ "### Dataset Summary\n\n\nGLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.", "### Supported Tasks and Leaderboards\n\n\nThe leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:", "#### ax\n\n\nA manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.", "#### cola\n\n\nThe Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.", "#### mnli\n\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.", "#### mnli\\_matched\n\n\nThe matched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.", "#### mnli\\_mismatched\n\n\nThe mismatched validation and test splits from MNLI. 
See the \"mnli\" BuilderConfig for additional information.", "#### mrpc\n\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.", "#### qnli\n\n\nThe Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.", "#### qqp\n\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.", "#### rte\n\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. 
The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.", "#### sst2\n\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.", "#### stsb\n\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.", "#### wnli\n\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. 
As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).", "### Languages\n\n\nThe language data in GLUE is in English (BCP-47 'en')\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### ax\n\n\n* Size of downloaded dataset files: 0.22 MB\n* Size of the generated dataset: 0.24 MB\n* Total amount of disk used: 0.46 MB\n\n\nAn example of 'test' looks as follows.", "#### cola\n\n\n* Size of downloaded dataset files: 0.38 MB\n* Size of the generated dataset: 0.61 MB\n* Total amount of disk used: 0.99 MB\n\n\nAn example of 'train' looks as follows.", "#### mnli\n\n\n* Size of downloaded dataset files: 312.78 MB\n* Size of the generated dataset: 82.47 MB\n* Total amount of disk used: 395.26 MB\n\n\nAn example of 'train' looks as follows.", "#### mnli\\_matched\n\n\n* Size of downloaded dataset files: 312.78 MB\n* Size of the generated dataset: 3.69 MB\n* Total amount of disk used: 316.48 MB\n\n\nAn example of 'test' looks as follows.", "#### mnli\\_mismatched\n\n\n* Size of downloaded dataset files: 312.78 MB\n* Size of the generated dataset: 3.91 MB\n* Total amount of disk used: 316.69 MB\n\n\nAn example of 'test' looks as follows.", "#### mrpc\n\n\n* Size of downloaded dataset files: ??\n* Size of the generated dataset: 1.5 MB\n* Total amount of disk used: ??\n\n\nAn example of 'train' looks as follows.", "#### qnli\n\n\n* Size of downloaded dataset files: ??\n* Size of the generated dataset: 28 MB\n* Total amount of disk used: ??\n\n\nAn example of 'train' looks as follows.", "#### qqp\n\n\n* Size of downloaded dataset files: ??\n* Size of the generated dataset: 107 MB\n* Total amount of disk used: ??\n\n\nAn example of 'train' looks as follows.", "#### rte\n\n\n* Size of downloaded dataset files: ??\n* Size of the generated dataset: 1.9 MB\n* Total amount 
of disk used: ??\n\n\nAn example of 'train' looks as follows.", "#### sst2\n\n\n* Size of downloaded dataset files: ??\n* Size of the generated dataset: 4.9 MB\n* Total amount of disk used: ??\n\n\nAn example of 'train' looks as follows.", "#### stsb\n\n\n* Size of downloaded dataset files: ??\n* Size of the generated dataset: 1.2 MB\n* Total amount of disk used: ??\n\n\nAn example of 'train' looks as follows.", "#### wnli\n\n\n* Size of downloaded dataset files: ??\n* Size of the generated dataset: 0.18 MB\n* Total amount of disk used: ??\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### ax\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.", "#### cola\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).\n* 'idx': a 'int32' feature.", "#### mnli\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.", "#### mnli\\_matched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.", "#### mnli\\_mismatched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.", "#### mrpc\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a classification label, with possible values including 'not\\_equivalent' 
(0), 'equivalent' (1).\n* 'idx': a 'int32' feature.", "#### qnli\n\n\n* 'question': a 'string' feature.\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'not\\_entailment' (1).\n* 'idx': a 'int32' feature.", "#### qqp\n\n\n* 'question1': a 'string' feature.\n* 'question2': a 'string' feature.\n* 'label': a classification label, with possible values including 'not\\_duplicate' (0), 'duplicate' (1).\n* 'idx': a 'int32' feature.", "#### rte\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'not\\_entailment' (1).\n* 'idx': a 'int32' feature.", "#### sst2\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'negative' (0), 'positive' (1).\n* 'idx': a 'int32' feature.", "#### stsb\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a float32 regression label, with possible values from 0 to 5.\n* 'idx': a 'int32' feature.", "#### wnli\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a classification label, with possible values including 'not\\_entailment' (0), 'entailment' (1).\n* 'idx': a 'int32' feature.", "### Data Splits", "#### ax", "#### cola", "#### mnli", "#### mnli\\_matched", "#### mnli\\_mismatched", "#### mrpc", "#### qnli", "#### qqp", "#### rte", "#### sst2", "#### stsb", "#### wnli\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known 
Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe primary GLUE tasks are built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset.\n\n\nIf you use GLUE, please cite all the datasets you use.\n\n\nIn addition, we encourage you to use the following BibTeX citation for GLUE itself:\n\n\nIf you evaluate using GLUE, we also highly recommend citing the papers that originally introduced the nine GLUE tasks, both to give the original authors their due credit and because venues will expect papers to describe the data they evaluate on.\nThe following provides BibTeX for all of the GLUE tasks, except QQP, for which we recommend adding a footnote to this page: URL", "### Contributions\n\n\nThanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #qa-nli #coreference-nli #paraphrase-identification #arxiv-1804.07461 #region-us \n" ]
0798affe9b3f88cfda4267b6fbc50fac67046ee5
# Dataset Card for 10k German News Articles Datasets

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [10k German News Article Dataset](https://tblock.github.io/10kGNAD/)
- **Repository:** [10k German News Article Dataset](https://github.com/tblock/10kGNAD)
- **Point of Contact:** [Steven Liu]([email protected])

### Dataset Summary

The 10k German News Article Dataset consists of 10273 German language news articles from the online Austrian newspaper website DER Standard. Each news article has been classified into one of 9 categories by professional forum moderators employed by the newspaper. This dataset is extended from the original [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/). The dataset was created to support topic classification in German because a classifier effective on an English dataset may not be as effective on a German dataset due to higher inflections and longer compound words.
Additionally, this dataset can be used as a benchmark dataset for German topic classification.

### Supported Tasks and Leaderboards

This dataset can be used to train a model, like [BERT](https://huggingface.co/bert-base-uncased), for `topic classification` on German news articles. There are 9 possible categories.

### Languages

The text is in German and it comes from an online Austrian newspaper website. The BCP-47 code for German is `de-DE`.

## Dataset Structure

### Data Instances

An example data instance contains a German news article (title and article are concatenated) and its corresponding topic category.

```
{'text': 'Die Gewerkschaft GPA-djp lanciert den "All-in-Rechner" und findet, dass die Vertragsform auf die Führungsebene beschränkt gehört. Wien – Die Gewerkschaft GPA-djp sieht Handlungsbedarf bei sogenannten All-in-Verträgen.',
 'label': 'Wirtschaft'}
```

### Data Fields

* `text`: contains the title and content of the article
* `label`: can be one of 9 possible topic categories (`Web`, `Panorama`, `International`, `Wirtschaft`, `Sport`, `Inland`, `Etat`, `Wissenschaft`, `Kultur`)

### Data Splits

The data is split into a training set consisting of 9245 articles and a test set consisting of 1028 articles.

## Dataset Creation

### Curation Rationale

The dataset was created to support topic classification in the German language. English text classification datasets are common ([AG News](https://huggingface.co/datasets/ag_news) and [20 Newsgroup](https://huggingface.co/datasets/newsgroup)), but German datasets are less common. A classifier trained on an English dataset may not work as well on a set of German text due to grammatical differences. Thus there is a need for a German dataset for effectively assessing model performance.

### Source Data

#### Initial Data Collection and Normalization

The 10k German News Article Dataset is extended from the One Million Posts Corpus. 10273 German news articles were collected from this larger corpus.
In the One Million Posts Corpus, each article has a topic path like `Newsroom/Wirtschaft/Wirtschaftpolitik/Finanzmaerkte/Griechenlandkrise`. The 10kGNAD uses the second part of the topic path as the topic label. Article title and texts are concatenated into one text and author names are removed to avoid keyword classification on authors who write frequently on a particular topic.

#### Who are the source language producers?

The language producers are the authors of the Austrian newspaper website DER Standard.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset was curated by Timo Block.

### Licensing Information

This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license.

### Citation Information

Please consider citing the authors of the "One Million Posts Corpus" if you use the dataset:

```
@InProceedings{Schabus2017,
  Author    = {Dietmar Schabus and Marcin Skowron and Martin Trapp},
  Title     = {One Million Posts: A Data Set of German Online Discussions},
  Booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},
  Pages     = {1241--1244},
  Year      = {2017},
  Address   = {Tokyo, Japan},
  Doi       = {10.1145/3077136.3080711},
  Month     = aug
}
```

### Contributions

Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
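The label-derivation rule described under Initial Data Collection above (the second component of each article's topic path becomes the topic label) can be sketched in one line. This is an illustrative sketch; the function name is not part of the dataset tooling.

```python
def label_from_topic_path(topic_path: str) -> str:
    """Take the second component of a One Million Posts topic path as the label,
    e.g. 'Newsroom/Wirtschaft/Wirtschaftpolitik/...' -> 'Wirtschaft'."""
    return topic_path.split("/")[1]

print(label_from_topic_path(
    "Newsroom/Wirtschaft/Wirtschaftpolitik/Finanzmaerkte/Griechenlandkrise"
))  # Wirtschaft
```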
gnad10
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-from-One-Million-Posts-Corpus", "language:de", "license:cc-by-nc-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["de"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-from-One-Million-Posts-Corpus"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "10k German News Articles Datasets", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Web", "1": "Panorama", "2": "International", "3": "Wirtschaft", "4": "Sport", "5": "Inland", "6": "Etat", "7": "Wissenschaft", "8": "Kultur"}}}}], "splits": [{"name": "train", "num_bytes": 24418224, "num_examples": 9245}, {"name": "test", "num_bytes": 2756405, "num_examples": 1028}], "download_size": 27160809, "dataset_size": 27174629}}
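The `dataset_info` metadata above can be checked for internal consistency: the `class_label` names decode integer labels back to category names, and the per-split byte and example counts sum to the reported totals. A minimal sketch (the helper name `id2label` is illustrative):

```python
# class_label names and split sizes as reported in the dataset_info metadata above.
GNAD10_LABELS = ["Web", "Panorama", "International", "Wirtschaft", "Sport",
                 "Inland", "Etat", "Wissenschaft", "Kultur"]

SPLITS = [
    {"name": "train", "num_bytes": 24418224, "num_examples": 9245},
    {"name": "test",  "num_bytes": 2756405,  "num_examples": 1028},
]

def id2label(label_id: int) -> str:
    """Decode an integer class id (0-8) to its topic category name."""
    return GNAD10_LABELS[label_id]

total_bytes = sum(s["num_bytes"] for s in SPLITS)
total_examples = sum(s["num_examples"] for s in SPLITS)

print(id2label(3))     # Wirtschaft
print(total_examples)  # 10273 -- matches the article count stated in the card
print(total_bytes)     # 27174629 -- matches the reported download/dataset size bookkeeping
```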
2024-01-18T11:04:20+00:00
[]
[ "de" ]
TAGS #task_categories-text-classification #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-from-One-Million-Posts-Corpus #language-German #license-cc-by-nc-sa-4.0 #region-us
# Dataset Card for 10k German News Articles Datasets ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: 10k German News Article Dataset - Repository: 10k German News Article Dataset - Point of Contact: Steven Liu ### Dataset Summary The 10k German News Article Dataset consists of 10273 German language news articles from the online Austrian newspaper website DER Standard. Each news article has been classified into one of 9 categories by professional forum moderators employed by the newspaper. This dataset is extended from the original One Million Posts Corpus. The dataset was created to support topic classification in German because a classifier effective on a English dataset may not be as effective on a German dataset due to higher inflections and longer compound words. Additionally, this dataset can be used as a benchmark dataset for German topic classification. ### Supported Tasks and Leaderboards This dataset can be used to train a model, like BERT for 'topic classification' on German news articles. There are 9 possible categories. ### Languages The text is in German and it comes from an online Austrian newspaper website. The BCP-47 code for German is 'de-DE'. ## Dataset Structure ### Data Instances An example data instance contains a German news article (title and article are concatenated) and it's corresponding topic category. 
### Data Fields * 'text': contains the title and content of the article * 'label': can be one of 9 possible topic categories ('Web', 'Panorama', 'International', 'Wirtschaft', 'Sport', 'Inland', 'Etat', 'Wissenschaft', 'Kultur') ### Data Splits The data is split into a training set consisting of 9245 articles and a test set consisting of 1028 articles. ## Dataset Creation ### Curation Rationale The dataset was created to support topic classification in the German language. English text classification datasets are common (AG News and 20 Newsgroups), but German datasets are less common. A classifier trained on an English dataset may not work as well on a set of German text due to grammatical differences. Thus there is a need for a German dataset for effectively assessing model performance. ### Source Data #### Initial Data Collection and Normalization The 10k German News Article Dataset is extended from the One Million Posts Corpus. 10273 German news articles were collected from this larger corpus. In the One Million Posts Corpus, each article has a topic path like 'Newsroom/Wirtschaft/Wirtschaftpolitik/Finanzmaerkte/Griechenlandkrise'. The 10kGNAD uses the second part of the topic path as the topic label. Article title and texts are concatenated into one text and author names are removed to avoid keyword classification on authors who write frequently on a particular topic. #### Who are the source language producers? The language producers are the authors of the Austrian newspaper website DER Standard. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was curated by Timo Block. ### Licensing Information This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license. 
Please consider citing the authors of the "One Million Posts Corpus" if you use the dataset. ### Contributions Thanks to @stevhliu for adding this dataset.
add492243ff905527e67aeb8b80c082af02207c3
# Dataset Card for GoEmotions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/google-research/google-research/tree/master/goemotions - **Repository:** https://github.com/google-research/google-research/tree/master/goemotions - **Paper:** https://arxiv.org/abs/2005.00547 - **Leaderboard:** - **Point of Contact:** [Dora Demszky](https://nlp.stanford.edu/~ddemszky/index.html) ### Dataset Summary The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral. The raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test splits. ### Supported Tasks and Leaderboards This dataset is intended for multi-class, multi-label emotion classification. ### Languages The data is in English. ## Dataset Structure ### Data Instances Each instance is a reddit comment with a corresponding ID and one or more emotion annotations (or neutral). 
### Data Fields The simplified configuration includes: - `text`: the reddit comment - `labels`: the emotion annotations - `comment_id`: unique identifier of the comment (can be used to look up the entry in the raw dataset) In addition to the above, the raw data includes: * `author`: The Reddit username of the comment's author. * `subreddit`: The subreddit that the comment belongs to. * `link_id`: The link id of the comment. * `parent_id`: The parent id of the comment. * `created_utc`: The timestamp of the comment. * `rater_id`: The unique id of the annotator. * `example_very_unclear`: Whether the annotator marked the example as being very unclear or difficult to label (in this case they did not choose any emotion labels). In the raw data, labels are listed as their own columns with binary 0/1 entries rather than a list of ids as in the simplified data. ### Data Splits The simplified data includes a set of train/val/test splits with 43,410, 5426, and 5427 examples respectively. ## Dataset Creation ### Curation Rationale From the paper abstract: > Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a fine-grained typology, adaptable to multiple downstream tasks. ### Source Data #### Initial Data Collection and Normalization Data was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper. #### Who are the source language producers? English-speaking Reddit users. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? Annotations were produced by 3 English-speaking crowdworkers in India. ### Personal and Sensitive Information This dataset includes the original usernames of the Reddit users who posted each comment. 
Although Reddit usernames are typically disassociated from personal real-world identities, this is not always the case. It may therefore be possible to discover the identities of the individuals who created this content in some cases. ## Considerations for Using the Data ### Social Impact of Dataset Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance pricing, and student attentiveness (see [this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)). ### Discussion of Biases From the authors' github page: > Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547). ### Licensing Information The GitHub repository which houses this dataset has an [Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE). 
### Citation Information @inproceedings{demszky2020goemotions, author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith}, booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)}, title = {{GoEmotions: A Dataset of Fine-Grained Emotions}}, year = {2020} } ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
go_emotions
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "emotion", "arxiv:2005.00547", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "paperswithcode_id": "goemotions", "pretty_name": "GoEmotions", "config_names": ["raw", "simplified"], "tags": ["emotion"], "dataset_info": [{"config_name": "raw", "features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "link_id", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "created_utc", "dtype": "float32"}, {"name": "rater_id", "dtype": "int32"}, {"name": "example_very_unclear", "dtype": "bool"}, {"name": "admiration", "dtype": "int32"}, {"name": "amusement", "dtype": "int32"}, {"name": "anger", "dtype": "int32"}, {"name": "annoyance", "dtype": "int32"}, {"name": "approval", "dtype": "int32"}, {"name": "caring", "dtype": "int32"}, {"name": "confusion", "dtype": "int32"}, {"name": "curiosity", "dtype": "int32"}, {"name": "desire", "dtype": "int32"}, {"name": "disappointment", "dtype": "int32"}, {"name": "disapproval", "dtype": "int32"}, {"name": "disgust", "dtype": "int32"}, {"name": "embarrassment", "dtype": "int32"}, {"name": "excitement", "dtype": "int32"}, {"name": "fear", "dtype": "int32"}, {"name": "gratitude", "dtype": "int32"}, {"name": "grief", "dtype": "int32"}, {"name": "joy", "dtype": "int32"}, {"name": "love", "dtype": "int32"}, {"name": "nervousness", "dtype": "int32"}, {"name": "optimism", "dtype": "int32"}, {"name": "pride", "dtype": "int32"}, {"name": "realization", "dtype": "int32"}, {"name": "relief", "dtype": "int32"}, {"name": "remorse", "dtype": "int32"}, {"name": "sadness", "dtype": "int32"}, {"name": "surprise", "dtype": "int32"}, {"name": 
"neutral", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 55343102, "num_examples": 211225}], "download_size": 24828322, "dataset_size": 55343102}, {"config_name": "simplified", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "admiration", "1": "amusement", "2": "anger", "3": "annoyance", "4": "approval", "5": "caring", "6": "confusion", "7": "curiosity", "8": "desire", "9": "disappointment", "10": "disapproval", "11": "disgust", "12": "embarrassment", "13": "excitement", "14": "fear", "15": "gratitude", "16": "grief", "17": "joy", "18": "love", "19": "nervousness", "20": "optimism", "21": "pride", "22": "realization", "23": "relief", "24": "remorse", "25": "sadness", "26": "surprise", "27": "neutral"}}}}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4224138, "num_examples": 43410}, {"name": "validation", "num_bytes": 527119, "num_examples": 5426}, {"name": "test", "num_bytes": 524443, "num_examples": 5427}], "download_size": 3464371, "dataset_size": 5275700}], "configs": [{"config_name": "raw", "data_files": [{"split": "train", "path": "raw/train-*"}]}, {"config_name": "simplified", "data_files": [{"split": "train", "path": "simplified/train-*"}, {"split": "validation", "path": "simplified/validation-*"}, {"split": "test", "path": "simplified/test-*"}], "default": true}]}
2024-01-04T11:56:51+00:00
[ "2005.00547" ]
[ "en" ]
# Dataset Card for GoEmotions ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: Dora Demszky ### Dataset Summary The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral. The raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test splits. ### Supported Tasks and Leaderboards This dataset is intended for multi-class, multi-label emotion classification. ### Languages The data is in English. ## Dataset Structure ### Data Instances Each instance is a reddit comment with a corresponding ID and one or more emotion annotations (or neutral). ### Data Fields The simplified configuration includes: - 'text': the reddit comment - 'labels': the emotion annotations - 'comment_id': unique identifier of the comment (can be used to look up the entry in the raw dataset) In addition to the above, the raw data includes: * 'author': The Reddit username of the comment's author. * 'subreddit': The subreddit that the comment belongs to. * 'link_id': The link id of the comment. * 'parent_id': The parent id of the comment. * 'created_utc': The timestamp of the comment. * 'rater_id': The unique id of the annotator. * 'example_very_unclear': Whether the annotator marked the example as being very unclear or difficult to label (in this case they did not choose any emotion labels). 
In the raw data, labels are listed as their own columns with binary 0/1 entries rather than a list of ids as in the simplified data. ### Data Splits The simplified data includes a set of train/val/test splits with 43,410, 5426, and 5427 examples respectively. ## Dataset Creation ### Curation Rationale From the paper abstract: > Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a fine-grained typology, adaptable to multiple downstream tasks. ### Source Data #### Initial Data Collection and Normalization Data was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper. #### Who are the source language producers? English-speaking Reddit users. ### Annotations #### Annotation process #### Who are the annotators? Annotations were produced by 3 English-speaking crowdworkers in India. ### Personal and Sensitive Information This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames are typically disasociated from personal real-world identities, this is not always the case. It may therefore be possible to discover the identities of the individuals who created this content in some cases. ## Considerations for Using the Data ### Social Impact of Dataset Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance pricing, and student attentiveness (see this article). 
### Discussion of Biases From the authors' github page: > Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset. ### Other Known Limitations ## Additional Information ### Dataset Curators Researchers at Amazon Alexa, Google Research, and Stanford. See the author list. ### Licensing Information The GitHub repository which houses this dataset has an Apache License 2.0. @inproceedings{demszky2020goemotions, author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith}, booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)}, title = {{GoEmotions: A Dataset of Fine-Grained Emotions}}, year = {2020} } ### Contributions Thanks to @joeddav for adding this dataset.
[ "# Dataset Card for GoEmotions", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Dora Demszky", "### Dataset Summary\n\nThe GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.\nThe raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test\nsplits.", "### Supported Tasks and Leaderboards\n\nThis dataset is intended for multi-class, multi-label emotion classification.", "### Languages\n\nThe data is in English.", "## Dataset Structure", "### Data Instances\n\nEach instance is a reddit comment with a corresponding ID and one or more emotion annotations (or neutral).", "### Data Fields\n\nThe simplified configuration includes:\n- 'text': the reddit comment\n- 'labels': the emotion annotations\n- 'comment_id': unique identifier of the comment (can be used to look up the entry in the raw dataset)\n\nIn addition to the above, the raw data includes:\n* 'author': The Reddit username of the comment's author.\n* 'subreddit': The subreddit that the comment belongs to.\n* 'link_id': The link id of the comment.\n* 'parent_id': The parent id of the comment.\n* 'created_utc': The timestamp of the comment.\n* 'rater_id': The unique id of the annotator.\n* 'example_very_unclear': Whether the annotator marked the example as being very unclear or difficult to label (in this\ncase 
they did not choose any emotion labels).\n\nIn the raw data, labels are listed as their own columns with binary 0/1 entries rather than a list of ids as in the\nsimplified data.", "### Data Splits\n\nThe simplified data includes a set of train/val/test splits with 43,410, 5426, and 5427 examples respectively.", "## Dataset Creation", "### Curation Rationale\n\nFrom the paper abstract:\n\n> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to\ndetecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a\nfine-grained typology, adaptable to multiple downstream tasks.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper.", "#### Who are the source language producers?\n\nEnglish-speaking Reddit users.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nAnnotations were produced by 3 English-speaking crowdworkers in India.", "### Personal and Sensitive Information\n\nThis dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames\nare typically disasociated from personal real-world identities, this is not always the case. It may therefore be\npossible to discover the identities of the individuals who created this content in some cases.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nEmotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer\ninteraction. 
However, emotion detection algorithms (particularly in computer vision) have been abused in some cases\nto make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance\npricing, and student attentiveness (see\nthis article).", "### Discussion of Biases\n\nFrom the authors' github page:\n\n> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nResearchers at Amazon Alexa, Google Research, and Stanford. See the author list.", "### Licensing Information\n\nThe GitHub repository which houses this dataset has an\nApache License 2.0.\n\n\n\n@inproceedings{demszky2020goemotions,\n author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},\n booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},\n title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},\n year = {2020}\n}", "### Contributions\n\nThanks to @joeddav for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #emotion #arxiv-2005.00547 #region-us \n", "# Dataset Card for GoEmotions", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Dora Demszky", "### Dataset Summary\n\nThe GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.\nThe raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test\nsplits.", "### Supported Tasks and Leaderboards\n\nThis dataset is intended for multi-class, multi-label emotion classification.", "### Languages\n\nThe data is in English.", "## Dataset Structure", "### Data Instances\n\nEach instance is a reddit comment with a corresponding ID and one or more emotion annotations (or neutral).", "### Data Fields\n\nThe simplified configuration includes:\n- 'text': the reddit comment\n- 'labels': the emotion annotations\n- 'comment_id': unique identifier of the comment (can be used to look up the entry in the raw dataset)\n\nIn addition to the above, the raw data includes:\n* 'author': The Reddit username of the comment's 
author.\n* 'subreddit': The subreddit that the comment belongs to.\n* 'link_id': The link id of the comment.\n* 'parent_id': The parent id of the comment.\n* 'created_utc': The timestamp of the comment.\n* 'rater_id': The unique id of the annotator.\n* 'example_very_unclear': Whether the annotator marked the example as being very unclear or difficult to label (in this\ncase they did not choose any emotion labels).\n\nIn the raw data, labels are listed as their own columns with binary 0/1 entries rather than a list of ids as in the\nsimplified data.", "### Data Splits\n\nThe simplified data includes a set of train/val/test splits with 43,410, 5426, and 5427 examples respectively.", "## Dataset Creation", "### Curation Rationale\n\nFrom the paper abstract:\n\n> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to\ndetecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a\nfine-grained typology, adaptable to multiple downstream tasks.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper.", "#### Who are the source language producers?\n\nEnglish-speaking Reddit users.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nAnnotations were produced by 3 English-speaking crowdworkers in India.", "### Personal and Sensitive Information\n\nThis dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames\nare typically disassociated from personal real-world identities, this is not always the case. 
It may therefore be\npossible to discover the identities of the individuals who created this content in some cases.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nEmotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer\ninteraction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases\nto make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance\npricing, and student attentiveness (see\nthis article).", "### Discussion of Biases\n\nFrom the authors' github page:\n\n> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nResearchers at Amazon Alexa, Google Research, and Stanford. See the author list.", "### Licensing Information\n\nThe GitHub repository which houses this dataset has an\nApache License 2.0.\n\n\n\n@inproceedings{demszky2020goemotions,\n author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},\n booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},\n title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},\n year = {2020}\n}", "### Contributions\n\nThanks to @joeddav for adding this dataset." ]
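The GoEmotions card above notes that the raw data lists each label as its own column with binary 0/1 entries, while the simplified data uses a list of label ids. A small illustrative sketch of that conversion (not official dataset tooling; the column names in the example are placeholders, not the dataset's full set of emotion names):

```python
def binary_columns_to_label_ids(row, label_columns):
    """Collect the ids of all label columns set to 1 in a raw-format row,
    yielding the simplified list-of-ids representation."""
    return [i for i, col in enumerate(label_columns) if row.get(col) == 1]

# Hypothetical columns for illustration only:
cols = ["admiration", "anger", "neutral"]
row = {"admiration": 1, "anger": 0, "neutral": 1}
labels = binary_columns_to_label_ids(row, cols)  # [0, 2]
```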
1ba0309027cdfec65262323dd1117d2b23d91cb7
# Dataset Card for GooAQ ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GooAQ 🥑: Google Answers to Google Questions!](https://github.com/allenai/gooaq) - **Repository:** [GooAQ 🥑: Google Answers to Google Questions!](https://github.com/allenai/gooaq) - **Paper:** [GOOAQ: Open Question Answering with Diverse Answer Types](https://arxiv.org/abs/2104.08727) - **Point of Contact:** [Daniel Khashabi]([email protected]) ### Dataset Summary GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 million answers collected from Google. GooAQ questions are collected semi-automatically from the Google search engine using its autocomplete feature. This results in naturalistic questions of practical interest that are nonetheless short and expressed using simple language. GooAQ answers are mined from Google's responses to our collected questions, specifically from the answer boxes in the search results. 
This yields a rich space of answer types, containing both textual answers (short and long) as well as more structured ones such as collections. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances Each row of the data file should look like this: ``` { "id": 3339543, "question": "what is the difference between collagen and whey protein?", "short_answer": None, "answer": "The main differences between the amino acid profiles of whey and collagen are that whey contains all 9 essential amino acids, while collagen only has 8. ... Collagen is a fibrous protein found in the skin, cartilage, and bones of animals whereas whey comes from milk.", "answer_type": "feat_snip" } ``` where the questions (`question`) are collected via Google auto-complete. The answer responses (`short_answer` and `answer`) were collected from Google's answer boxes. The answer types (`answer_type`) are inferred based on the HTML content of Google's response. Here are the dominant types in the current dataset: - `feat_snip`: explanatory responses; the majority of the question/responses are of this type. - `collection`: list responses (e.g., steps to accomplish something). - `knowledge`: typically short responses for knowledge-seeking questions. - `unit_conv`: questions about converting units. - `time_conv`: questions about converting times. - `curr_conv`: questions about converting currencies. Dataset instances which are not part of the dominant types are marked with a -1 label. ### Data Fields - `id`: an `int` feature. - `question`: a `string` feature. - `short_answer`: a `string` feature (could be None as well in some cases). - `answer`: a `string` feature (could be None as well in some cases). - `answer_type`: a `string` feature. 
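Given these fields, a hedged sketch (not part of any official GooAQ tooling) of decoding integer `answer_type` class-label ids back to their string names, following the label order listed in this card and treating `-1` as the marker for instances outside the dominant types:

```python
# Class-label order as listed in this card; -1 marks instances
# that are not part of the dominant types.
ANSWER_TYPES = ["feat_snip", "collection", "knowledge",
                "unit_conv", "time_conv", "curr_conv"]

def decode_answer_type(label: int) -> str:
    """Return the string name for an integer `answer_type` id."""
    return ANSWER_TYPES[label] if 0 <= label < len(ANSWER_TYPES) else "other"

decode_answer_type(0)   # 'feat_snip'
decode_answer_type(-1)  # 'other'
```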
### Data Splits The number of samples in the train/validation/test sets is given below: | Split | Number of samples | |------------|-------------------| | Train | 3112679 | | Validation | 2500 | | Test | 2500 | ## Dataset Creation ### Curation Rationale While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. Many of the everyday questions that humans deal with and pose to search engines have a more diverse set of responses. Their answer can be a multi-sentence description (a snippet) (e.g., ‘what is’ or ‘can you’ questions), a collection of items such as ingredients (‘what are’, ‘things to’) or of steps towards a goal such as unlocking a phone (‘how to’), etc. Even when the answer is short, it can have richer types, e.g., unit conversion, time zone conversion, or various kinds of knowledge look-up (‘how much’, ‘when is’, etc.). Such answer type diversity is not represented in any existing dataset. ### Source Data #### Initial Data Collection and Normalization Constructing this dataset involved two main steps: extracting questions from search auto-complete and extracting answers from answer boxes. 1) Query Extraction: To extract a rich yet natural set of questions they used Google auto-completion. They start with a seed set of question terms (e.g., “who”, “where”, etc.). They bootstrap based on this set, by repeatedly querying prefixes of previously extracted questions, in order to discover longer and richer sets of questions. Such questions extracted from the autocomplete algorithm are highly reflective of popular questions posed by users of Google. They filter out any questions shorter than 5 tokens as they are often incomplete questions. This process yields over ∼5M questions, which were collected over a span of 6 months. The average length of the questions is about 8 tokens. 
2) Answer Extraction: They rely on the Google answer boxes shown on top of the search results when the questions are issued to Google. There are a variety of answer boxes. The most common kind involves highlighted sentences (extracted from various websites) that contain the answer to a given question. These form the snippet and collection answers in GOOAQ. In some cases, the answer box shows the answer directly, possibly in addition to the textual snippet. These form the short answers in GOOAQ. They first scrape the search results for all questions. This is the main extraction bottleneck, which was done over a span of 2 months. Subsequently, they extract answer strings from the HTML content of the search results. Answer types are also inferred at this stage, based on the HTML tags around the answer. #### Who are the source language producers? Answered above. ### Annotations #### Annotation process Answered in the section above. #### Who are the annotators? Since their task is focused on English, they required workers to be based in a country with a population predominantly of native English speakers (e.g., USA, Canada, UK, and Australia) and have completed at least 5000 HITs with ≥ 99% assignment approval rate. Additionally, they have a qualification test with half-a-dozen questions all of which need to be answered correctly by the annotators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases To prevent biased judgements, they also ask the annotators to avoid using Google search (which is what they used when they mined GOOAQ) when annotating the quality of shown instances. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. 
### Licensing Information Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. ### Citation Information ``` @article{gooaq2021, title={GooAQ: Open Question Answering with Diverse Answer Types}, author={Khashabi, Daniel and Ng, Amos and Khot, Tushar and Sabharwal, Ashish and Hajishirzi, Hannaneh and Callison-Burch, Chris}, journal={arXiv preprint}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
gooaq
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2104.08727", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "gooaq", "pretty_name": "GooAQ: Open Question Answering with Diverse Answer Types", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "short_answer", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_type", "dtype": {"class_label": {"names": {"0": "feat_snip", "1": "collection", "2": "knowledge", "3": "unit_conv", "4": "time_conv", "5": "curr_conv"}}}}], "splits": [{"name": "train", "num_bytes": 974320061, "num_examples": 3112679}, {"name": "validation", "num_bytes": 444553, "num_examples": 2500}, {"name": "test", "num_bytes": 445810, "num_examples": 2500}], "download_size": 2111358901, "dataset_size": 975210424}}
2024-01-18T11:04:22+00:00
[ "2104.08727" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #arxiv-2104.08727 #region-us
Dataset Card for GooAQ ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: GooAQ : Google Answers to Google Questions! * Repository: GooAQ : Google Answers to Google Questions! * Paper: GOOAQ: Open Question Answering with Diverse Answer Types * Point of Contact: Daniel Khashabi ### Dataset Summary GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 million answers collected from Google. GooAQ questions are collected semi-automatically from the Google search engine using its autocomplete feature. This results in naturalistic questions of practical interest that are nonetheless short and expressed using simple language. GooAQ answers are mined from Google's responses to our collected questions, specifically from the answer boxes in the search results. This yields a rich space of answer types, containing both textual answers (short and long) as well as more structured ones such as collections. ### Supported Tasks and Leaderboards ### Languages The dataset contains samples in English only. Dataset Structure ----------------- ### Data Instances Each row of the data file should look like this: where the questions 'question' are collected via Google auto-complete. The answers responses ('short\_answer' and 'answer') were collected from Google's answer boxes. The answer types ('answer\_type') are inferred based on the html content of Google's response. 
Here are the dominant types in the current dataset: * 'feat\_snip': explanatory responses; the majority of the question/responses are of this type. * 'collection': list responses (e.g., steps to accomplish something). * 'knowledge': typically short responses for knowledge-seeking questions. * 'unit\_conv': questions about converting units. * 'time\_conv': questions about converting times. * 'curr\_conv': questions about converting currencies. Dataset instances which are not part of the dominant types are marked with a -1 label. ### Data Fields * 'id': an 'int' feature. * 'question': a 'string' feature. * 'short\_answer': a 'string' feature (could be None as well in some cases). * 'answer': a 'string' feature (could be None as well in some cases). * 'answer\_type': a 'string' feature. ### Data Splits The number of samples in the train/validation/test sets is given below: Dataset Creation ---------------- ### Curation Rationale While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. Many of the everyday questions that humans deal with and pose to search engines have a more diverse set of responses. Their answer can be a multi-sentence description (a snippet) (e.g., ‘what is’ or ‘can you’ questions), a collection of items such as ingredients (‘what are’, ‘things to’) or of steps towards a goal such as unlocking a phone (‘how to’), etc. Even when the answer is short, it can have richer types, e.g., unit conversion, time zone conversion, or various kinds of knowledge look-up (‘how much’, ‘when is’, etc.). Such answer type diversity is not represented in any existing dataset. ### Source Data #### Initial Data Collection and Normalization Constructing this dataset involved two main steps: extracting questions from search auto-complete and extracting answers from answer boxes. 1. 
Query Extraction: To extract a rich yet natural set of questions they used Google auto-completion. They start with a seed set of question terms (e.g., “who”, “where”, etc.). They bootstrap based on this set, by repeatedly querying prefixes of previously extracted questions, in order to discover longer and richer sets of questions. Such questions extracted from the autocomplete algorithm are highly reflective of popular questions posed by users of Google. They filter out any questions shorter than 5 tokens as they are often incomplete questions. This process yields over ∼5M questions, which were collected over a span of 6 months. The average length of the questions is about 8 tokens. 2. Answer Extraction: They rely on the Google answer boxes shown on top of the search results when the questions are issued to Google. There are a variety of answer boxes. The most common kind involves highlighted sentences (extracted from various websites) that contain the answer to a given question. These form the snippet and collection answers in GOOAQ. In some cases, the answer box shows the answer directly, possibly in addition to the textual snippet. These form the short answers in GOOAQ. They first scrape the search results for all questions. This is the main extraction bottleneck, which was done over a span of 2 months. Subsequently, they extract answer strings from the HTML content of the search results. Answer types are also inferred at this stage, based on the HTML tags around the answer. #### Who are the source language producers? Answered above. ### Annotations #### Annotation process Answered in the section above. #### Who are the annotators? Since their task is focused on English, they required workers to be based in a country with a population predominantly of native English speakers (e.g., USA, Canada, UK, and Australia) and have completed at least 5000 HITs with ≥ 99% assignment approval rate. 
Additionally, they have a qualification test with half-a-dozen questions all of which need to be answered correctly by the annotators. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases To prevent biased judgements, they also ask the annotators to avoid using Google search (which is what they used when they mined GOOAQ) when annotating the quality of shown instances. ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. ### Contributions Thanks to @bhavitvyamalik for adding this dataset.
[ "### Dataset Summary\n\n\nGooAQ is a large-scale dataset with a variety of answer types. This dataset contains over\n5 million questions and 3 million answers collected from Google. GooAQ questions are collected\nsemi-automatically from the Google search engine using its autocomplete feature. This results in\nnaturalistic questions of practical interest that are nonetheless short and expressed using simple\nlanguage. GooAQ answers are mined from Google's responses to our collected questions, specifically from\nthe answer boxes in the search results. This yields a rich space of answer types, containing both\ntextual answers (short and long) as well as more structured ones such as collections.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach row of the data file should look like this:\n\n\nwhere the questions 'question' are collected via Google auto-complete.\nThe answer responses ('short\\_answer' and 'answer') were collected from Google's answer boxes.\nThe answer types ('answer\\_type') are inferred based on the HTML content of Google's response.\nHere are the dominant types in the current dataset:\n\n\n* 'feat\\_snip': explanatory responses; the majority of the question/responses are of this type.\n* 'collection': list responses (e.g., steps to accomplish something).\n* 'knowledge': typically short responses for knowledge-seeking questions.\n* 'unit\\_conv': questions about converting units.\n* 'time\\_conv': questions about converting times.\n* 'curr\\_conv': questions about converting currencies.\n\n\nDataset instances which are not part of the dominant types are marked with a -1 label.", "### Data Fields\n\n\n* 'id': an 'int' feature.\n* 'question': a 'string' feature.\n* 'short\\_answer': a 'string' feature (could be None as well in some cases).\n* 'answer': a 'string' feature (could be None as well in some cases).\n* 'answer\\_type': a 
'string' feature.", "### Data Splits\n\n\nThe number of samples in the train/validation/test sets is given below:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nWhile day-to-day questions come with a variety of answer types, the current question-answering (QA)\nliterature has failed to adequately address the answer diversity of questions. Many of the everyday questions\nthat humans deal with and pose to search engines have a more diverse set of responses. Their answer can be a multi-sentence description (a snippet) (e.g., ‘what is’ or ‘can you’ questions), a collection of items such as ingredients (‘what are’, ‘things to’) or of steps towards a goal such as unlocking a phone (‘how to’), etc. Even when the answer is short, it can have richer types, e.g., unit conversion, time zone conversion, or various kinds of knowledge look-up (‘how much’, ‘when is’, etc.).\nSuch answer type diversity is not represented in any existing dataset.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nConstructing this dataset involved two main steps: extracting questions from search auto-complete and extracting answers from answer boxes.\n\n\n1. Query Extraction: To extract a rich yet natural set of questions they used Google auto-completion. They start with a seed set of question terms (e.g., “who”, “where”, etc.). They bootstrap based on this set, by repeatedly querying prefixes of previously extracted questions, in order to discover longer and richer sets of questions. Such questions extracted from the autocomplete algorithm are highly reflective of popular questions posed by users of Google. They filter out any questions shorter than 5 tokens as they are often incomplete questions. This process yields over ∼5M questions, which were collected over a span of 6 months. The average length of the questions is about 8 tokens.\n2. 
Answer Extraction: They rely on the Google answer boxes shown on top of the search results when the questions are issued to Google. There are a variety of answer boxes. The most common kind involves highlighted sentences (extracted from various websites) that contain the answer to a given question. These form the snippet and collection answers in GOOAQ. In some cases, the answer box shows the answer directly, possibly in addition to the textual snippet. These form the short answers in GOOAQ.\n\n\nThey first scrape the search results for all questions. This is the main extraction bottleneck, which was done over a span of 2 months. Subsequently, they extract answer strings from the HTML content of the search results. Answer types are also inferred at this stage, based on the HTML tags around the answer.", "#### Who are the source language producers?\n\n\nAnswered above.", "### Annotations", "#### Annotation process\n\n\nAnswered in the section above.", "#### Who are the annotators?\n\n\nSince their task is focused on English, they required workers to be based in a country with a population predominantly of native English speakers (e.g., USA, Canada, UK, and Australia) and have completed at least 5000 HITs with ≥ 99% assignment approval rate. Additionally, they have a qualification test with half-a-dozen questions all of which need to be answered correctly by the annotators.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nTo prevent biased judgements, they also ask the annotators to avoid using Google search (which is what they used when they mined GOOAQ) when annotating the quality of shown instances.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nList the people involved in collecting the dataset and their affiliation(s). 
If funding information is known, include it here.", "### Licensing Information\n\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License.", "### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #arxiv-2104.08727 #region-us \n", "### Dataset Summary\n\n\nGooAQ is a large-scale dataset with a variety of answer types. This dataset contains over\n5 million questions and 3 million answers collected from Google. GooAQ questions are collected\nsemi-automatically from the Google search engine using its autocomplete feature. This results in\nnaturalistic questions of practical interest that are nonetheless short and expressed using simple\nlanguage. GooAQ answers are mined from Google's responses to our collected questions, specifically from\nthe answer boxes in the search results. This yields a rich space of answer types, containing both\ntextual answers (short and long) as well as more structured ones such as collections.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach row of the data file should look like this:\n\n\nwhere the questions 'question' are collected via Google auto-complete.\nThe answer responses ('short\\_answer' and 'answer') were collected from Google's answer boxes.\nThe answer types ('answer\\_type') are inferred based on the HTML content of Google's response.\nHere are the dominant types in the current dataset:\n\n\n* 'feat\\_snip': explanatory responses; the majority of the question/responses are of this type.\n* 'collection': list responses (e.g., steps to accomplish something).\n* 'knowledge': typically short responses for knowledge-seeking questions.\n* 'unit\\_conv': questions about converting units.\n* 'time\\_conv': questions about converting times.\n* 'curr\\_conv': questions about converting currencies.\n\n\nDataset instances which are not part of 
the dominant types are marked with a -1 label.", "### Data Fields\n\n\n* 'id': an 'int' feature.\n* 'question': a 'string' feature.\n* 'short\\_answer': a 'string' feature (could be None as well in some cases).\n* 'answer': a 'string' feature (could be None as well in some cases).\n* 'answer\\_type': a 'string' feature.", "### Data Splits\n\n\nThe number of samples in the train/validation/test sets is given below:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nWhile day-to-day questions come with a variety of answer types, the current question-answering (QA)\nliterature has failed to adequately address the answer diversity of questions. Many of the everyday questions\nthat humans deal with and pose to search engines have a more diverse set of responses. Their answer can be a multi-sentence description (a snippet) (e.g., ‘what is’ or ‘can you’ questions), a collection of items such as ingredients (‘what are’, ‘things to’) or of steps towards a goal such as unlocking a phone (‘how to’), etc. Even when the answer is short, it can have richer types, e.g., unit conversion, time zone conversion, or various kinds of knowledge look-up (‘how much’, ‘when is’, etc.).\nSuch answer type diversity is not represented in any existing dataset.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nConstructing this dataset involved two main steps: extracting questions from search auto-complete and extracting answers from answer boxes.\n\n\n1. Query Extraction: To extract a rich yet natural set of questions they used Google auto-completion. They start with a seed set of question terms (e.g., “who”, “where”, etc.). They bootstrap based on this set, by repeatedly querying prefixes of previously extracted questions, in order to discover longer and richer sets of questions. Such questions extracted from the autocomplete algorithm are highly reflective of popular questions posed by users of Google. 
They filter out any questions shorter than 5 tokens as they are often incomplete questions. This process yields over ∼5M questions, which were collected over a span of 6 months. The average length of the questions is about 8 tokens.\n2. Answer Extraction: They rely on the Google answer boxes shown on top of the search results when the questions are issued to Google. There are a variety of answer boxes. The most common kind involves highlighted sentences (extracted from various websites) that contain the answer to a given question. These form the snippet and collection answers in GOOAQ. In some cases, the answer box shows the answer directly, possibly in addition to the textual snippet. These form the short answers in GOOAQ.\n\n\nThey first scrape the search results for all questions. This is the main extraction bottleneck, which was done over a span of 2 months. Subsequently, they extract answer strings from the HTML content of the search results. Answer types are also inferred at this stage, based on the HTML tags around the answer.", "#### Who are the source language producers?\n\n\nAnswered above.", "### Annotations", "#### Annotation process\n\n\nAnswered in the section above.", "#### Who are the annotators?\n\n\nSince their task is focused on English, they required workers to be based in a country with a population predominantly of native English speakers (e.g., USA, Canada, UK, and Australia) and have completed at least 5000 HITs with ≥ 99% assignment approval rate. 
Additionally, they have a qualification test with half-a-dozen questions all of which need to be answered correctly by the annotators.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nTo prevent biased judgements, they also ask the annotators to avoid using Google search (which is what they used when they mined GOOAQ) when annotating the quality of shown instances.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nList the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.", "### Licensing Information\n\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License.", "### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset." ]
287f895e19da0c028407dd0b6356efffc9401ecf
# Dataset Card for Google Query-wellformedness Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/google-research-datasets/query-wellformedness) - **Repository:** [GitHub](https://github.com/google-research-datasets/query-wellformedness) - **Paper:** [ARXIV](https://arxiv.org/abs/1808.09419) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Google's query wellformedness dataset was created by crowdsourcing well-formedness annotations for 25,100 queries from the Paralex corpus. Every query was annotated by five raters each with 1/0 rating of whether or not the query is well-formed. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances ``` {'rating': 0.2, 'content': 'The European Union includes how many ?'} ``` ### Data Fields - `rating`: a `float` between 0-1 - `content`: query which you want to rate ### Data Splits | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Input Sentences | 17500 | 3750 | 3850 | ## Dataset Creation ### Curation Rationale Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. This dataset introduces a new task of identifying a well-formed natural language question. ### Source Data Used the Paralex corpus (Fader et al., 2013) that contains pairs of noisy paraphrase questions. These questions were issued by users in WikiAnswers (a Question-Answer forum) and consist of both web-search query like constructs (“5 parts of chloroplast?”) and well-formed questions (“What is the punishment for grand theft?”). #### Initial Data Collection and Normalization Selected 25,100 queries from the unique list of queries extracted from the corpus such that no two queries in the selected set are paraphrases. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process The queries are annotated into well-formed or non-wellformed questions if they satisfy the following: 1. Query is grammatical. 2. Query is an explicit question. 3. Query does not contain spelling errors. #### Who are the annotators? Every query was labeled by five different crowdworkers with a binary label indicating whether a query is well-formed or not. 
The average of the ratings of the five annotators was reported to get the probability of a query being well-formed. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Query-wellformedness dataset is licensed under CC BY-SA 4.0. Any third party content or data is provided “As Is” without any warranty, express or implied. ### Citation Information ``` @InProceedings{FaruquiDas2018, title = {{Identifying Well-formed Natural Language Questions}}, author = {Faruqui, Manaal and Das, Dipanjan}, booktitle = {Proc. of EMNLP}, year = {2018} } ``` ### Contributions Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
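The `rating` field is the average of the five binary crowdworker labels described above. A minimal sketch of that aggregation (the per-annotator labels themselves are not released, so the input list here is illustrative):

```python
def wellformedness_score(labels):
    """Average binary crowdworker labels (1 = well-formed) into the
    probability reported in the dataset's `rating` field."""
    if not all(label in (0, 1) for label in labels):
        raise ValueError("labels must be binary (0 or 1)")
    return sum(labels) / len(labels)


# One of five raters judged the query well-formed, which would yield the
# 0.2 rating seen in the card's example instance.
print(wellformedness_score([1, 0, 0, 0, 0]))  # 0.2
```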
google_wellformed_query
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended", "language:en", "license:cc-by-sa-4.0", "arxiv:1808.09419", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": ["text-scoring"], "pretty_name": "GoogleWellformedQuery", "dataset_info": {"features": [{"name": "rating", "dtype": "float32"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 857391, "num_examples": 17500}, {"name": "test", "num_bytes": 189503, "num_examples": 3850}, {"name": "validation", "num_bytes": 184110, "num_examples": 3750}], "download_size": 1157019, "dataset_size": 1231004}}
2024-01-18T11:04:23+00:00
[ "1808.09419" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended #language-English #license-cc-by-sa-4.0 #arxiv-1808.09419 #region-us
Dataset Card for Google Query-wellformedness Dataset ==================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: GitHub * Repository: GitHub * Paper: ARXIV * Leaderboard: * Point of Contact: ### Dataset Summary Google's query wellformedness dataset was created by crowdsourcing well-formedness annotations for 25,100 queries from the Paralex corpus. Every query was annotated by five raters each with 1/0 rating of whether or not the query is well-formed. ### Supported Tasks and Leaderboards ### Languages English Dataset Structure ----------------- ### Data Instances ### Data Fields * 'rating': a 'float' between 0-1 * 'content': query which you want to rate ### Data Splits Dataset Creation ---------------- ### Curation Rationale Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. This dataset introduces a new task of identifying a well-formed natural language question. ### Source Data Used the Paralex corpus (Fader et al., 2013) that contains pairs of noisy paraphrase questions. 
These questions were issued by users in WikiAnswers (a Question-Answer forum) and consist of both web-search query like constructs (“5 parts of chloroplast?”) and well-formed questions (“What is the punishment for grand theft?”). #### Initial Data Collection and Normalization Selected 25,100 queries from the unique list of queries extracted from the corpus such that no two queries in the selected set are paraphrases. #### Who are the source language producers? ### Annotations #### Annotation process The queries are annotated into well-formed or non-wellformed questions if they satisfy the following: 1. Query is grammatical. 2. Query is an explicit question. 3. Query does not contain spelling errors. #### Who are the annotators? Every query was labeled by five different crowdworkers with a binary label indicating whether a query is well-formed or not. The average of the ratings of the five annotators was reported to get the probability of a query being well-formed. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Query-wellformedness dataset is licensed under CC BY-SA 4.0. Any third party content or data is provided “As Is” without any warranty, express or implied. ### Contributions Thanks to @vasudevgupta7 for adding this dataset.
[ "### Dataset Summary\n\n\nGoogle's query wellformedness dataset was created by crowdsourcing well-formedness annotations for 25,100 queries from the Paralex corpus. Every query was annotated by five raters each with 1/0 rating of whether or not the query is well-formed.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'rating': a 'float' between 0-1\n* 'sentence': query which you want to rate", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nUnderstanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. This dataset introduce a new task of identifying a well-formed natural language question.", "### Source Data\n\n\nUsed the Paralex corpus (Fader et al., 2013) that contains pairs of noisy paraphrase questions. These questions were issued by users in WikiAnswers (a Question-Answer forum) and consist of both web-search query like constructs (“5 parts of chloroplast?”) and well-formed questions (“What is the punishment for grand theft?”).", "#### Initial Data Collection and Normalization\n\n\nSelected 25,100 queries from the unique list of queries extracted from the corpus such that no two queries in the selected set are paraphrases.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe queries are annotated into well-formed or non-wellformed questions if it satisfies the following:\n\n\n1. Query is grammatical.\n2. Query is an explicit question.\n3. 
Query does not contain spelling errors.", "#### Who are the annotators?\n\n\nEvery query was labeled by five different crowdworkers with a binary label indicating whether a query is well-formed or not. The average of the ratings of the five annotators was reported to get the probability of a query being well-formed.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nQuery-wellformedness dataset is licensed under CC BY-SA 4.0. Any third party content or data is provided “As Is” without any warranty, express or implied.", "### Contributions\n\n\nThanks to @vasudevgupta7 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended #language-English #license-cc-by-sa-4.0 #arxiv-1808.09419 #region-us \n", "### Dataset Summary\n\n\nGoogle's query wellformedness dataset was created by crowdsourcing well-formedness annotations for 25,100 queries from the Paralex corpus. Every query was annotated by five raters each with 1/0 rating of whether or not the query is well-formed.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'rating': a 'float' between 0-1\n* 'sentence': query which you want to rate", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nUnderstanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. This dataset introduce a new task of identifying a well-formed natural language question.", "### Source Data\n\n\nUsed the Paralex corpus (Fader et al., 2013) that contains pairs of noisy paraphrase questions. 
These questions were issued by users in WikiAnswers (a Question-Answer forum) and consist of both web-search query like constructs (“5 parts of chloroplast?”) and well-formed questions (“What is the punishment for grand theft?”).", "#### Initial Data Collection and Normalization\n\n\nSelected 25,100 queries from the unique list of queries extracted from the corpus such that no two queries in the selected set are paraphrases.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe queries are annotated into well-formed or non-wellformed questions if they satisfy the following:\n\n\n1. Query is grammatical.\n2. Query is an explicit question.\n3. Query does not contain spelling errors.", "#### Who are the annotators?\n\n\nEvery query was labeled by five different crowdworkers with a binary label indicating whether a query is well-formed or not. The average of the ratings of the five annotators was reported to get the probability of a query being well-formed.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nQuery-wellformedness dataset is licensed under CC BY-SA 4.0. Any third party content or data is provided “As Is” without any warranty, express or implied.", "### Contributions\n\n\nThanks to @vasudevgupta7 for adding this dataset." ]
a568afcef1a08d5fdc687e65b8e201841bd6a3eb
# Dataset Card for Grail QA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Grail QA](https://dki-lab.github.io/GrailQA/) - **Repository:** - **Paper:** [GrailQA paper (Gu et al. '20)](https://arxiv.org/abs/2011.07743) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary #### What is GrailQA? Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot. #### Why GrailQA? GrailQA is by far the largest crowdsourced KBQA dataset with questions of high diversity (i.e., questions in GrailQA can have up to 4 relations and optionally have a function from counting, superlatives and comparatives). 
It also has the highest coverage over Freebase; it widely covers 3,720 relations and 86 domains from Freebase. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English and Graph query ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `qid` (`str`) - `question` (`str`) - `answer` (`List`): Defaults to `[]` in test split. - `answer_type` (`str`) - `answer_argument` (`str`) - `entity_name` (`str`): Defaults to `""` if `answer_type` is not `Entity`. - `function` (`string`): Defaults to `""` in test split. - `num_node` (`int`): Defaults to `-1` in test split. - `num_edge` (`int`): Defaults to `-1` in test split. - `graph_query` (`Dict`) - `nodes` (`List`): Defaults to `[]` in test split. - `nid` (`int`) - `node_type` (`str`) - `id` (`str`) - `class` (`str`) - `friendly_name` (`str`) - `question_node` (`int`) - `function` (`str`) - `edges` (`List`): Defaults to `[]` in test split. - `start` (`int`) - `end` (`int`) - `relation` (`str`) - `friendly_name` (`str`) - `sparql_query` (`str`): Defaults to `""` in test split. - `domains` (`List[str]`): Defaults to `[]` in test split. - `level` (`str`): Only available in validation split. Defaults to `""` in others. - `s_expression` (`str`): Defaults to `""` in test split. **Notes:** Only `qid` and `question` available in test split. ### Data Splits Dataset Split | Number of Instances in Split --------------|-------------------------------------------- Train | 44,337 Validation | 6,763 Test | 13,231 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
grail_qa
[ "task_categories:question-answering", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "knowledge-base-qa", "arxiv:2011.07743", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "Grail QA", "tags": ["knowledge-base-qa"], "dataset_info": {"features": [{"name": "qid", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "sequence": [{"name": "answer_type", "dtype": "string"}, {"name": "answer_argument", "dtype": "string"}, {"name": "entity_name", "dtype": "string"}]}, {"name": "function", "dtype": "string"}, {"name": "num_node", "dtype": "int32"}, {"name": "num_edge", "dtype": "int32"}, {"name": "graph_query", "struct": [{"name": "nodes", "sequence": [{"name": "nid", "dtype": "int32"}, {"name": "node_type", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "class", "dtype": "string"}, {"name": "friendly_name", "dtype": "string"}, {"name": "question_node", "dtype": "int32"}, {"name": "function", "dtype": "string"}]}, {"name": "edges", "sequence": [{"name": "start", "dtype": "int32"}, {"name": "end", "dtype": "int32"}, {"name": "relation", "dtype": "string"}, {"name": "friendly_name", "dtype": "string"}]}]}, {"name": "sparql_query", "dtype": "string"}, {"name": "domains", "sequence": "string"}, {"name": "level", "dtype": "string"}, {"name": "s_expression", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 69433121, "num_examples": 44337}, {"name": "validation", "num_bytes": 9800544, "num_examples": 6763}, {"name": "test", "num_bytes": 2167256, "num_examples": 13231}], "download_size": 17636773, "dataset_size": 81400921}}
2024-01-18T11:04:25+00:00
[ "2011.07743" ]
[ "en" ]
TAGS #task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #knowledge-base-qa #arxiv-2011.07743 #region-us
Dataset Card for Grail QA ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Grail QA * Repository: * Paper: GrailQA paper (Gu et al. '20) * Leaderboard: * Point of Contact: ### Dataset Summary #### What is GrailQA? Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot. #### Why GrailQA? GrailQA is by far the largest crowdsourced KBQA dataset with questions of high diversity (i.e., questions in GrailQA can have up to 4 relations and optionally have a function from counting, superlatives and comparatives). It also has the highest coverage over Freebase; it widely covers 3,720 relations and 86 domains from Freebase. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems. ### Supported Tasks and Leaderboards ### Languages English and Graph query Dataset Structure ----------------- ### Data Instances ### Data Fields * 'qid' ('str') * 'question' ('str') * 'answer' ('List'): Defaults to '[]' in test split. 
+ 'answer\_type' ('str') + 'answer\_argument' ('str') + 'entity\_name' ('str'): Defaults to '""' if 'answer\_type' is not 'Entity'. * 'function' ('string'): Defaults to '""' in test split. * 'num\_node' ('int'): Defaults to '-1' in test split. * 'num\_edge' ('int'): Defaults to '-1' in test split. * 'graph\_query' ('Dict') + 'nodes' ('List'): Defaults to '[]' in test split. - 'nid' ('int') - 'node\_type' ('str') - 'id' ('str') - 'class' ('str') - 'friendly\_name' ('str') - 'question\_node' ('int') - 'function' ('str') + 'edges' ('List'): Defaults to '[]' in test split. - 'start' ('int') - 'end' ('int') - 'relation' ('str') - 'friendly\_name' ('str') * 'sparql\_query' ('str'): Defaults to '""' in test split. * 'domains' ('List[str]'): Defaults to '[]' in test split. * 'level' ('str'): Only available in validation split. Defaults to '""' in others. * 's\_expression' ('str'): Defaults to '""' in test split. Notes: Only 'qid' and 'question' available in test split. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @mattbui for adding this dataset.
[ "### Dataset Summary", "#### What is GrailQA?\n\n\nStrongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.", "#### Why GrailQA?\n\n\nGrailQA is by far the largest crowdsourced KBQA dataset with questions of high diversity (i.e., questions in GrailQA can have up to 4 relations and optionally have a function from counting, superlatives and comparatives). It also has the highest coverage over Freebase; it widely covers 3,720 relations and 86 domains from Freebase. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish and Graph query\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'qid' ('str')\n* 'question' ('str')\n* 'answer' ('List'): Defaults to '[]' in test split.\n\t+ 'answer\\_type' ('str')\n\t+ 'answer\\_argument' ('str')\n\t+ 'entity\\_name' ('str'): Defauts to '\"\"' if 'answer\\_type' is not 'Entity'.\n* 'function' ('string'): Defaults to '\"\"' in test split.\n* 'num\\_node' ('int'): Defaults to '-1' in test split.\n* 'num\\_edge' ('int'): Defaults to '-1' in test split.\n* 'graph\\_query' ('Dict')\n\t+ 'nodes' ('List'): Defaults to '[]' in test split.\n\t\t- 'nid' ('int')\n\t\t- 'node\\_type' ('str')\n\t\t- 'id' ('str')\n\t\t- 'class' ('str')\n\t\t- 'friendly\\_name' ('str')\n\t\t- 'question\\_node' ('int')\n\t\t- 'function' ('str')\n\t+ 'edges' ('List'): Defaults to '[]' in test split.\n\t\t- 'start' ('int')\n\t\t- 'end' ('int')\n\t\t- 'relation' 
('str')\n\t\t- 'friendly\\_name' ('str')\n* 'sparql\\_query' ('str'): Defaults to '\"\"' in test split.\n* 'domains' ('List[str]'): Defaults to '[]' in test split.\n* 'level' ('str'): Only available in validation split. Defaults to '\"\"' in others.\n* 's\\_expression' ('str'): Defaults to '\"\"' in test split.\n\n\nNotes: Only 'qid' and 'question' available in test split.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mattbui for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #knowledge-base-qa #arxiv-2011.07743 #region-us \n", "### Dataset Summary", "#### What is GrailQA?\n\n\nStrongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.", "#### Why GrailQA?\n\n\nGrailQA is by far the largest crowdsourced KBQA dataset with questions of high diversity (i.e., questions in GrailQA can have up to 4 relations and optionally have a function from counting, superlatives and comparatives). It also has the highest coverage over Freebase; it widely covers 3,720 relations and 86 domains from Freebase. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. 
generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish and Graph query\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'qid' ('str')\n* 'question' ('str')\n* 'answer' ('List'): Defaults to '[]' in test split.\n\t+ 'answer\\_type' ('str')\n\t+ 'answer\\_argument' ('str')\n\t+ 'entity\\_name' ('str'): Defaults to '\"\"' if 'answer\\_type' is not 'Entity'.\n* 'function' ('string'): Defaults to '\"\"' in test split.\n* 'num\\_node' ('int'): Defaults to '-1' in test split.\n* 'num\\_edge' ('int'): Defaults to '-1' in test split.\n* 'graph\\_query' ('Dict')\n\t+ 'nodes' ('List'): Defaults to '[]' in test split.\n\t\t- 'nid' ('int')\n\t\t- 'node\\_type' ('str')\n\t\t- 'id' ('str')\n\t\t- 'class' ('str')\n\t\t- 'friendly\\_name' ('str')\n\t\t- 'question\\_node' ('int')\n\t\t- 'function' ('str')\n\t+ 'edges' ('List'): Defaults to '[]' in test split.\n\t\t- 'start' ('int')\n\t\t- 'end' ('int')\n\t\t- 'relation' ('str')\n\t\t- 'friendly\\_name' ('str')\n* 'sparql\\_query' ('str'): Defaults to '\"\"' in test split.\n* 'domains' ('List[str]'): Defaults to '[]' in test split.\n* 'level' ('str'): Only available in validation split. 
Defaults to '\"\"' in others.\n* 's\\_expression' ('str'): Defaults to '\"\"' in test split.\n\n\nNotes: Only 'qid' and 'question' available in test split.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mattbui for adding this dataset." ]
8e2d62aa66b071b147de52c47cc0e2e91ae4e049
# Dataset Card for GREAT ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/google-research-datasets/great - **Paper:** https://openreview.net/forum?id=B1lnbRNtwr - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
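Since the card itself is sparse, here is a minimal sketch of what a record could look like under the feature schema declared in this dataset's metadata (`source_tokens`, `has_bug`, `error_location`, `repair_candidates`, `repair_targets`). The tokens and the variable-misuse bug are invented for illustration and are not drawn from the actual data:

```python
# Hypothetical record shaped after the declared features; the bug is an
# invented variable misuse (the second "a" should have been "b").
example = {
    "id": 0,
    "source_tokens": ["def", "f", "(", "a", ",", "b", ")", ":",
                      "return", "a", "+", "a"],
    "has_bug": True,
    "error_location": 11,        # index of the misused token
    "repair_candidates": ["a", "b"],
    "repair_targets": [5],       # token position(s) of the correct variable
}

if example["has_bug"]:
    # The error location must index a real token...
    assert 0 <= example["error_location"] < len(example["source_tokens"])
    # ...and each repair target should name one of the candidate variables.
    for t in example["repair_targets"]:
        assert example["source_tokens"][t] in example["repair_candidates"]
```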
great_code
[ "task_categories:table-to-text", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["table-to-text"], "task_ids": [], "pretty_name": "GREAT", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_tokens", "sequence": "string"}, {"name": "has_bug", "dtype": "bool"}, {"name": "error_location", "dtype": "int32"}, {"name": "repair_candidates", "sequence": "string"}, {"name": "bug_kind", "dtype": "int32"}, {"name": "bug_kind_name", "dtype": "string"}, {"name": "repair_targets", "sequence": "int32"}, {"name": "edges", "list": {"list": [{"name": "before_index", "dtype": "int32"}, {"name": "after_index", "dtype": "int32"}, {"name": "edge_type", "dtype": "int32"}, {"name": "edge_type_name", "dtype": "string"}]}}, {"name": "provenances", "list": [{"name": "datasetProvenance", "struct": [{"name": "datasetName", "dtype": "string"}, {"name": "filepath", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "note", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 14705534822, "num_examples": 1798742}, {"name": "validation", "num_bytes": 1502956919, "num_examples": 185656}, {"name": "test", "num_bytes": 7880762248, "num_examples": 968592}], "download_size": 23310374002, "dataset_size": 24089253989}}
2024-01-18T11:04:27+00:00
[]
[ "en" ]
TAGS #task_categories-table-to-text #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us
# Dataset Card for GREAT ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: None - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for GREAT", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-table-to-text #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us \n", "# Dataset Card for GREAT", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
de0fdb34424f07d1ac6f0ede23ee0ed44bd9f5d1
# Dataset Card for Greek Legal Code ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/christospi/glc-nllp-21 - **Paper:** https://arxiv.org/abs/2109.15298 - **Data:** https://doi.org/10.5281/zenodo.5528002 - **Leaderboard:** N/A - **Point of Contact:** [Christos Papaloukas](mailto:[email protected]) ### Dataset Summary Greek_Legal_Code (GLC) is a dataset consisting of approx. 47k legal resources from Greek legislation. The origin of GLC is “Permanent Greek Legislation Code - Raptarchis”, a collection of Greek legislative documents classified into multi-level (from broader to more specialized) categories. **Topics** GLC consists of 47 legislative volumes, and each volume corresponds to a main thematic topic. Each volume is divided into thematic subcategories called chapters, and each chapter in turn breaks down into subjects, which contain the legal resources. 
The total number of chapters is 389 while the total number of subjects is 2285, creating an interlinked thematic hierarchy. So, for the upper thematic level (volume) GLC has 47 classes. For the next thematic level (chapter) GLC offers 389 classes and for the inner and last thematic level (subject), GLC has 2285 classes. GLC classes are divided into three categories for each thematic level: frequent classes, which occur in more than 10 training documents and can be found in all three subsets (training, development and test); few-shot classes which appear in 1 to 10 training documents and also appear in the documents of the development and test sets, and zero-shot classes which appear in the development and/or test, but not in the training documents. ### Supported Tasks and Leaderboards The dataset supports: **Multi-class Text Classification:** Given the text of a document, a model predicts the corresponding class. **Few-shot and Zero-shot learning:** As already noted, the classes can be divided into three groups: frequent, few-shot, and zero- shot, depending on whether they were assigned to more than 10, fewer than 10 but at least one, or no training documents, respectively. | Level | Total | Frequent | Few-Shot (<10) | Zero-Shot | |---|---|---|---|---| |Volume|47|47|0|0| |Chapter|389|333|53|3| |Subject|2285|712|1431|142| ### Languages All documents are written in Greek. ## Dataset Structure ### Data Instances ```json { "text": "179. ΑΠΟΦΑΣΗ ΥΠΟΥΡΓΟΥ ΜΕΤΑΦΟΡΩΝ ΚΑΙ ΕΠΙΚΟΙΝΩΝΙΩΝ Αριθ. Β-οικ. 68425/4765 της 2/17 Νοεμ. 2000 (ΦΕΚ Β΄ 1404) Τροποποίηση της 42000/2030/81 κοιν. απόφασης του Υπουργού Συγκοινωνιών «Κωδικοποίηση και συμπλήρωση καν. Αποφάσεων» που εκδόθηκαν κατ’ εξουσιοδότηση του Ν.Δ. 102/73 «περί οργανώσεως των δια λεωφορείων αυτοκινήτων εκτελουμένων επιβατικών συγκοινωνιών». 
", "volume": 24, # "ΣΥΓΚΟΙΝΩΝΙΕΣ" } ``` ### Data Fields The following data fields are provided for documents (`train`, `dev`, `test`): `text`: (**str**) The full content of each document, which is represented by its `header` and `articles` (i.e., the `main_body`).\ `label`: (**class label**): Depending on the configurarion, the volume/chapter/subject of the document. For volume-level class it belongs to specifically: ["ΚΟΙΝΩΝΙΚΗ ΠΡΟΝΟΙΑ", "ΓΕΩΡΓΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΡΑΔΙΟΦΩΝΙΑ ΚΑΙ ΤΥΠΟΣ", "ΒΙΟΜΗΧΑΝΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΥΓΕΙΟΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΠΟΛΕΜΙΚΟ ΝΑΥΤΙΚΟ", "ΤΑΧΥΔΡΟΜΕΙΑ - ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ", "ΔΑΣΗ ΚΑΙ ΚΤΗΝΟΤΡΟΦΙΑ", "ΕΛΕΓΚΤΙΚΟ ΣΥΝΕΔΡΙΟ ΚΑΙ ΣΥΝΤΑΞΕΙΣ", "ΠΟΛΕΜΙΚΗ ΑΕΡΟΠΟΡΙΑ", "ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ", "ΝΟΜΟΘΕΣΙΑ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ ΤΡΑΠΕΖΩΝ ΚΑΙ ΧΡΗΜΑΤΙΣΤΗΡΙΩΝ", "ΠΟΛΙΤΙΚΗ ΑΕΡΟΠΟΡΙΑ", "ΕΜΜΕΣΗ ΦΟΡΟΛΟΓΙΑ", "ΚΟΙΝΩΝΙΚΕΣ ΑΣΦΑΛΙΣΕΙΣ", "ΝΟΜΟΘΕΣΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ", "ΝΟΜΟΘΕΣΙΑ ΕΠΙΜΕΛΗΤΗΡΙΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΚΑΙ ΣΩΜΑΤΕΙΩΝ", "ΔΗΜΟΣΙΑ ΕΡΓΑ", "ΔΙΟΙΚΗΣΗ ΔΙΚΑΙΟΣΥΝΗΣ", "ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ", "ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΚΠΑΙΔΕΥΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΔΗΜΟΣΙΟ ΛΟΓΙΣΤΙΚΟ", "ΤΕΛΩΝΕΙΑΚΗ ΝΟΜΟΘΕΣΙΑ", "ΣΥΓΚΟΙΝΩΝΙΕΣ", "ΕΘΝΙΚΗ ΑΜΥΝΑ", "ΣΤΡΑΤΟΣ ΞΗΡΑΣ", "ΑΓΟΡΑΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΔΗΜΟΣΙΟΙ ΥΠΑΛΛΗΛΟΙ", "ΠΕΡΙΟΥΣΙΑ ΔΗΜΟΣΙΟΥ ΚΑΙ ΝΟΜΙΣΜΑ", "ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ", "ΛΙΜΕΝΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΠΟΛΙΤΙΚΗ ΔΙΚΟΝΟΜΙΑ", "ΔΙΠΛΩΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΔΙΟΙΚΗΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΑΜΕΣΗ ΦΟΡΟΛΟΓΙΑ", "ΤΥΠΟΣ ΚΑΙ ΤΟΥΡΙΣΜΟΣ", "ΕΘΝΙΚΗ ΟΙΚΟΝΟΜΙΑ", "ΑΣΤΥΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΑΓΡΟΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΡΓΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΜΠΟΡΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΠΙΣΤΗΜΕΣ ΚΑΙ ΤΕΧΝΕΣ", "ΕΜΠΟΡΙΚΗ ΝΑΥΤΙΛΙΑ", "ΣΥΝΤΑΓΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ" ] \ The labels can also be a the chapter-level or subject-level class it belongs to. Some chapter labels are omitted due to size (389 classes). Some subject labels are also omitted due to size (2285 classes). ### Data Splits | Split | No of Documents | Avg. 
words | | ------------------- | ------------------------------------ | --- | | Train | 28,536 | 600 | | Development | 9,511 | 574 | | Test | 9,516 | 595 | ## Dataset Creation ### Curation Rationale The dataset was curated by Papaloukas et al. (2021) with the hope of supporting and encouraging further research in NLP for the Greek language. ### Source Data #### Initial Data Collection and Normalization The ``Permanent Greek Legislation Code - Raptarchis`` is a thorough catalogue of Greek legislation from the creation of the Greek state in 1834 until 2015. It includes Laws, Royal and Presidential Decrees, Regulations and Decisions, retrieved from the Official Government Gazette, where Greek legislation is published. This collection is one of the official, publicly available sources of classified Greek legislation suitable for classification tasks. Currently, the original catalogue is publicly offered in MS Word (.doc) format through the portal e-Themis, the legal database and management service under the administration of the Ministry of the Interior (Affairs). E-Themis is primarily focused on providing legislation on a multitude of predefined thematic categories, as described in the catalogue. The main goal is to help users find legislation of interest using the thematic index. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset does not include personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Papaloukas et al. 
(2021) ### Licensing Information [More Information Needed] ### Citation Information *Christos Papaloukas, Ilias Chalkidis, Konstantinos Athinaios, Despina-Athanasia Pantazi and Manolis Koubarakis.* *Multi-granular Legal Topic Classification on Greek Legislation.* *Proceedings of the 3rd Natural Legal Language Processing (NLLP) Workshop, Punta Cana, Dominican Republic, 2021* ``` @inproceedings{papaloukas-etal-2021-glc, title = "Multi-granular Legal Topic Classification on Greek Legislation", author = "Papaloukas, Christos and Chalkidis, Ilias and Athinaios, Konstantinos and Pantazi, Despina-Athanasia and Koubarakis, Manolis", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2021", year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2109.15298", doi = "10.48550/arXiv.2109.15298", pages = "63--75" } ``` ### Contributions Thanks to [@christospi](https://github.com/christospi) for adding this dataset.
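The frequent / few-shot / zero-shot grouping defined in the Supported Tasks section (more than 10 training documents, 1 to 10, or none) can be sketched as a small helper; the function name is ours, not part of the dataset:

```python
# Class-frequency grouping as described in the card: "frequent" classes
# occur in more than 10 training documents, "few-shot" in 1 to 10, and
# "zero-shot" classes never appear in the training split.
def frequency_group(n_train_docs: int) -> str:
    if n_train_docs > 10:
        return "frequent"
    if n_train_docs >= 1:
        return "few-shot"
    return "zero-shot"

assert frequency_group(11) == "frequent"
assert frequency_group(10) == "few-shot"
assert frequency_group(0) == "zero-shot"
```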
AI-team-UoA/greek_legal_code
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:el", "license:cc-by-4.0", "arxiv:2109.15298", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["el"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "topic-classification"], "pretty_name": "Greek Legal Code", "dataset_info": [{"config_name": "chapter", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u0399\u0391 \u039a\u0391\u0399 \u039f\u03a1\u03a5\u03a7\u0395\u0399\u0391", "1": "\u03a3\u03a4\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3", "2": "\u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u0391\u039d\u0395\u03a1\u0393\u0399\u0391\u03a3", "3": "\u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u0399\u039a\u0391 \u0394\u0399\u039a\u03a4\u03a5\u0391", "4": "\u0395\u0399\u0394\u0399\u039a\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391 \u0391\u0394\u0399\u039a\u0397\u039c\u0391\u03a4\u0391", "5": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0395\u03a3 \u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0395\u03a3", "6": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397 \u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397", "7": "\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "8": "\u03a3\u03a7\u0395\u0394\u0399\u0391 \u03a0\u039f\u039b\u0395\u03a9\u039d", "9": "\u03a3\u03a5\u039a\u0391", "10": "\u03a0\u03a1\u039f\u039b\u0397\u03a8\u0399\u03a3 \u039a\u0391\u0399 \u0394\u0399\u03a9\u039e\u0399\u03a3 \u03a4\u039f\u03a5 \u0395\u0393\u039a\u039b\u0397\u039c\u0391\u03a4\u039f\u03a3", "11": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0395\u03a3", "12": 
"\u0393\u0395\u039d\u0399\u039a\u0397 \u03a3\u03a5\u0393\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0391 \u039a\u0391\u0399 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "13": "\u039a\u039b\u0397\u03a1\u039f\u039d\u039f\u039c\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "14": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0397 \u0391\u039d\u03a4\u0399\u039b\u0397\u03a8\u0397", "15": "\u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u039a\u0395\u03a3 \u03a3\u0397\u039c\u0391\u039d\u03a3\u0395\u0399\u03a3", "16": "\u0394\u0399\u0395\u0398\u039d\u0395\u03a3 \u03a0\u039f\u0399\u039d\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "17": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399 \u0395.\u039d", "18": "\u03a3\u03a9\u039c\u0391\u03a4\u0399\u039a\u0397 \u0391\u0393\u03a9\u0393\u0397", "19": "\u03a3\u03a0\u039f\u03a1\u039f\u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397", "20": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u0399 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "21": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a4\u03a1\u0391\u03a0\u0395\u0396\u03a9\u039d", "22": "\u03a0\u03a5\u03a1\u039f\u03a3\u0392\u0395\u03a3\u03a4\u0399\u039a\u039f \u03a3\u03a9\u039c\u0391", "23": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0395\u03a3", "24": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u039a\u0391\u0399 \u03a3\u03a5\u039d\u0395\u03a0\u0395\u0399\u0395\u03a3 \u03a4\u0397\u03a3 \u03a0\u039f\u0399\u039d\u0397\u03a3", "25": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "26": "\u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "27": 
"\u0392\u0391\u039c\u0392\u0391\u039a\u0399", "28": "\u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d", "29": "\u039d\u039f\u039c\u0399\u03a3\u039c\u0391", "30": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u039d\u0391\u03a5\u03a4\u0399\u039a\u0397\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "31": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u038a \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0389\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u038a\u03a3\u0395\u03a9\u03a3", "32": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391", "33": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u03a3 \u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0395\u0399\u03a3\u0395\u03a1\u03a7\u039f\u039c\u0395\u039d\u03a9\u039d", "34": "\u039c\u039f\u03a5\u03a3\u0395\u0399\u0391 \u039a\u0391\u0399 \u03a3\u03a5\u039b\u039b\u039f\u0393\u0395\u03a3", "35": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u0399.\u039a.\u0391", "36": "\u039e\u0395\u039d\u039f\u0394\u039f\u03a7\u0395\u0399\u0391", "37": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0397 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391", "38": "\u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399", "39": "\u03a0\u039f\u039b\u03a5\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u0395\u03a3", "40": "\u0395\u03a4\u0395\u03a1\u039f\u0394\u039f\u039e\u039f\u0399", "41": "\u039c\u0395\u03a3\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0399\u03a3", "42": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u039f\u0399 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399", "43": "\u0393\u0395\u039d\u0399\u039a\u039f \u039b\u039f\u0393\u0399\u03a3\u03a4\u0397\u03a1\u0399\u039f", "44": "\u03a1\u03a5\u0398\u039c\u0399\u03a3\u0397 \u03a4\u0397\u03a3 \u0391\u0393\u039f\u03a1\u0391\u03a3 
\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "45": "\u03a0\u0391\u03a1\u039f\u03a7\u039f\u0399 \u039a\u0399\u039d\u0397\u03a4\u03a9\u039d \u03a4\u0397\u039b\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u03a9\u039d", "46": "\u0395\u039c\u03a0\u03a1\u0391\u0393\u039c\u0391\u03a4\u039f\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391", "47": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0391\u039a\u0391\u0398\u0391\u03a1\u0399\u03a3\u03a4\u039f\u03a5 \u03a0\u03a1\u039f\u03a3\u039f\u0394\u039f\u03a5", "48": "\u039a\u03a4\u0397\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0395\u03a3", "49": "\u03a3\u03a4\u0391\u03a4\u0399\u03a3\u03a4\u0399\u039a\u0397", "50": "\u039a\u0395\u03a1\u0391\u0399\u0395\u03a3 \u2013 \u03a3\u03a4\u0391\u0398\u039c\u039f\u0399 \u039a\u0395\u03a1\u0391\u0399\u03a9\u039d", "51": "\u03a0\u039f\u0399\u039d\u0399\u039a\u039f\u03a3 \u039d\u039f\u039c\u039f\u03a3", "52": "\u039c\u0395\u03a3\u0391 \u0394\u0399\u0394\u0391\u03a3\u039a\u0391\u039b\u0399\u0391\u03a3", "53": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039f \u03a6\u0391\u03a1\u039c\u0391\u039a\u03a9\u039d", "54": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391", "55": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0391 \u039a\u03a4\u0397\u039c\u0391\u03a4\u0391", "56": "\u0395\u0399\u03a3\u03a6\u039f\u03a1\u0395\u03a3 \u0399.\u039a.\u0391", "57": "\u039a\u0391\u03a4\u0391\u0393\u0393\u0395\u039b\u0399\u0391 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "58": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "59": "\u0394\u0397\u039c\u039f\u03a3\u0399\u039f \u03a7\u03a1\u0395\u039f\u03a3", "60": "\u0391\u03a0\u039f\u03a4\u0391\u039c\u0399\u0395\u03a5\u03a3\u0397", "61": "\u0391\u039b\u039b\u039f\u0398\u03a1\u0397\u03a3\u039a\u039f\u0399", "62": 
"\u03a0\u039b\u039f\u0397\u0393\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "63": "\u03a4\u03a5\u03a0\u039f\u03a3 \u039a\u0391\u0399 \u03a0\u039b\u0397\u03a1\u039f\u03a6\u039f\u03a1\u0399\u0395\u03a3", "64": "\u03a4\u03a1\u039f\u03a0\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397 \u039a\u0391\u0399 \u039a\u0391\u03a4\u0391\u03a1\u0393\u0397\u03a3\u0397 \u03a4\u0397\u03a3 \u03a0\u039f\u0399\u039d\u0397\u03a3", "65": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391 \u03a4\u03a5\u03a0\u039f\u03a5", "66": "\u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u0391\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "67": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u0395\u0398\u039d\u0399\u039a\u0397\u03a3 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391\u03a3", "68": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u0398\u039d\u0399\u039a\u0397\u03a3 \u0391\u039c\u03a5\u039d\u0391\u03a3", "69": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3", "70": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a4\u03a9\u039d \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d", "71": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u03a9\u039d \u0395\u0399\u0394\u0399\u039a\u03a9\u039d \u039a\u0391\u03a4\u0397\u0393\u039f\u03a1\u0399\u03a9\u039d", "72": "\u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u0391\u03a3\u0398\u0395\u039d\u0395\u0399\u0391\u03a3", "73": "\u039c\u0395\u03a4\u0391\u039d\u0391\u03a3\u03a4\u0395\u03a5\u03a3\u0397", "74": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a0\u0391\u0399\u0394\u0395\u0399\u0391\u03a3", "75": "\u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 \u039d\u0391\u03a5\u03a3\u0399\u03a0\u039b\u039f\u03aa\u0391\u03a3", "76": 
"\u039f\u0394\u039f\u03a0\u039f\u0399\u03aa\u0391", "77": "\u03a3\u03a4\u03a1\u0391\u03a4\u039f\u0394\u0399\u039a\u0395\u0399\u0391", "78": "\u039c\u0399\u03a3\u0398\u03a9\u03a3\u0397", "79": "\u0395\u0399\u03a3\u03a0\u03a1\u0391\u039e\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0395\u03a3\u039f\u0394\u03a9\u039d", "80": "\u039f\u03a0\u039b\u0399\u03a4\u0395\u03a3 \u039a\u0391\u0399 \u0391\u039d\u0398\u03a5\u03a0\u0391\u03a3\u03a0\u0399\u03a3\u03a4\u0395\u03a3", "81": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a4\u0397\u039b\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u03a9\u039d \u0395\u039b\u039b\u0391\u0394\u0391\u03a3 (\u039f.\u03a4.\u0395.)", "82": "\u038c\u03a1\u0393\u0391\u039d\u0391 \u0386\u03a3\u039a\u0397\u03a3\u0397\u03a3 \u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u03a4\u0399\u039a\u039f\u038e \u0395\u039b\u0388\u0393\u03a7\u039f\u03a5 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u038f\u039d \u039a\u0391\u0399 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0389\u03a3\u0395\u03a9\u039d", "83": "\u03a0\u039f\u0399\u039d\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391 \u03a4\u03a5\u03a0\u039f\u03a5", "84": "\u0395\u039e\u0391\u0393\u03a9\u0393\u0399\u039a\u039f \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f", "85": "\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "86": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399 \u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0395\u0399\u03a3", "87": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0395\u03a3", "88": "\u039f\u03a7\u03a5\u03a1\u03a9\u03a3\u0395\u0399\u03a3", 
"89": "\u0395\u039a\u03a4\u0391\u039a\u03a4\u039f\u0399 \u03a0\u039f\u0399\u039d\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "90": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397", "91": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u039f\u0399 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399", "92": "\u03a5\u0394\u03a1\u0391\u03a5\u039b\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "93": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "94": "\u0395\u039a\u039a\u0391\u0398\u0391\u03a1\u0399\u03a3\u0395\u0399\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "95": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "96": "\u0391\u039d\u03a9\u03a4\u0391\u03a4\u039f \u0395\u0399\u0394\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u039f", "97": "\u0391\u03a1\u03a4\u039f\u03a3", "98": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0399\u039a\u039f \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f", "99": "\u0391\u039b\u0399\u0395\u0399\u0391", "100": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0397 \u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391", "101": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0394\u0397\u039c\u039f\u03a3\u0399\u0391 \u0395\u03a1\u0393\u0391", "102": "\u039c\u039f\u039d\u0395\u03a3", "103": "\u03a0\u03a1\u039f\u0395\u0394\u03a1\u039f\u03a3 \u03a4\u0397\u03a3 \u0394\u0397\u039c\u039f\u039a\u03a1\u0391\u03a4\u0399\u0391\u03a3 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u0395\u0394\u03a1\u0399\u0391 \u03a4\u0397\u03a3 \u0394\u0397\u039c\u039f\u039a\u03a1\u0391\u03a4\u0399\u0391\u03a3", "104": 
"\u03a0\u039f\u039b\u03a5\u0395\u0398\u039d\u0395\u0399\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399", "105": "\u0391\u03a1\u03a7\u0391\u0399\u039f\u03a4\u0397\u03a4\u0395\u03a3", "106": "\u039d\u0391\u039f\u0399 \u039a\u0391\u0399 \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u039f\u0399 \u0391\u03a5\u03a4\u03a9\u039d", "107": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "108": "\u0395\u039d\u0399\u03a3\u03a7\u03a5\u03a3\u0399\u03a3 \u03a4\u0397\u03a3 \u0393\u0395\u03a9\u03a1\u0393\u0399\u0391\u03a3", "109": "\u0395\u039a\u0398\u0395\u03a3\u0395\u0399\u03a3", "110": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a4\u03a9\u039d \u03a3\u03a5\u039d\u0391\u039b\u039b\u0391\u0393\u03a9\u039d", "111": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397", "112": "\u039a\u03a4\u0397\u039d\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391", "113": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0399\u039a\u0391 \u03a4\u0395\u039b\u0397", "114": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0395\u03a9\u03a3", "115": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u0391\u03a1\u0391\u039a\u0391\u03a4\u0391\u0398\u0397\u039a\u03a9\u039d \u039a\u0391\u0399 \u0394\u0391\u039d\u0395\u0399\u03a9\u039d", "116": "\u0391\u0393\u0391\u0398\u039f\u0395\u03a1\u0393\u0391 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u0391", "117": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "118": "\u03a6\u039f\u03a1\u039f\u0399 \u039a\u0391\u03a4\u0391\u039d\u0391\u039b\u03a9\u03a3\u0395\u03a9\u03a3", "119": "\u0392\u0399\u0392\u039b\u0399\u039f\u0398\u0397\u039a\u0395\u03a3-\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0392\u0399\u0392\u039b\u0399\u039f\u03a5-\u0394\u0399\u0391\u0394\u039f\u03a3\u0397 
\u039b\u039f\u0393\u039f\u03a4\u0395\u03a7\u039d\u0399\u0391\u03a3", "120": "\u03a4\u0397\u039b\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0391\u039a\u0395\u03a3 \u039a\u0391\u0399 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "121": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "122": "\u03a4\u0397\u039b\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0395\u03a3", "123": "\u0391\u03a3\u03a5\u03a1\u039c\u0391\u03a4\u039f\u03a3", "124": "\u0391\u03a0\u039f\u0394\u039f\u03a7\u0395\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u03a9\u039d", "125": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5", "126": "\u03a6\u0391\u03a1\u039c\u0391\u039a\u0395\u0399\u0391", "127": "\u0394\u0397\u039c\u039f\u03a3\u0399\u039f \u039b\u039f\u0393\u0399\u03a3\u03a4\u0399\u039a\u039f", "128": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "129": "\u0395\u039e\u03a5\u03a0\u0397\u03a1\u0395\u03a4\u0397\u03a3\u0397 \u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "130": "\u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u0399.\u039a.\u0391", "131": "\u0393\u0395\u039d\u0399\u039a\u0391 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0391 \u039c\u0395\u03a4\u03a1\u0391", "132": "\u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u0398\u0391\u039b\u0391\u03a3\u03a3\u0399\u03a9\u039d \u03a3\u03a5\u0393\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u03a9\u039d", "133": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0395\u0399\u03a9\u039d", "134": 
"\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u03a4\u0399\u039a\u0397 \u0395\u039e\u039f\u03a5\u03a3\u0399\u0391", "135": "\u03a3\u03a5\u03a3\u03a4\u0391\u03a3\u0397 \u039a\u0391\u0399 \u0395\u0394\u03a1\u0391 \u03a4\u039f\u03a5 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u03a3", "136": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0394\u0399\u0391\u03a3\u039a\u0395\u0394\u0391\u03a3\u0395\u03a9\u039d", "137": "\u03a4\u0397\u039b\u0395\u03a6\u03a9\u039d\u0391", "138": "\u03a3\u03a4\u03a1\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u0391", "139": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397 \u0395\u03a1\u0393\u0391\u03a4\u03a9\u039d", "140": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a0\u039f\u039b\u0399\u03a4\u0399\u03a3\u039c\u039f\u03a5", "141": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039f\u0399\u039d\u039f\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u03a9\u0394\u03a9\u039d \u03a0\u039f\u03a4\u03a9\u039d", "142": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0393\u0395\u03a9\u03a1\u0393\u0399\u0391\u03a3", "143": "\u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u0391", "144": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u039c\u039f\u03a1\u03a6\u0395\u03a3 \u0391\u03a0\u0391\u03a3\u03a7\u039f\u039b\u0397\u03a3\u0397\u03a3", "145": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a5\u039d\u0397\u03a3", "146": "\u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u039a\u039f\u0399 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399", "147": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u039c\u039f\u03a3", "148": "\u039a\u0391\u03a0\u039d\u039f\u03a3", "149": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0397\u0398\u03a9\u039d", "150": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0395\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "151": "\u0391\u03a0\u039f\u0394\u039f\u03a7\u0395\u03a3 
\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d", "152": "\u03a0\u03a1\u039f\u039d\u039f\u0399\u0391 \u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u03a9\u039d \u0395.\u039d", "153": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u03a0\u0395\u03a1\u0399 \u0391\u039d\u03a9\u039d.\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u03a9\u039d", "154": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0391 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397", "155": "\u03a4\u039f\u03a0\u0399\u039a\u0391 \u03a3\u03a7\u0395\u0394\u0399\u0391 \u03a0\u039f\u039b\u0395\u03a9\u039d", "156": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a0\u0391\u0399\u0394\u0399\u039a\u0397\u03a3 \u0397\u039b\u0399\u039a\u0399\u0391\u03a3", "157": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391", "158": "\u039b\u0399\u039c\u0395\u039d\u0399\u039a\u039f \u03a3\u03a9\u039c\u0391", "159": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0397 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391", "160": "\u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391", "161": "\u03a3\u03a7\u039f\u039b\u0395\u03a3 \u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f\u03a5 \u0391\u0398\u0397\u039d\u03a9\u039d", "162": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5", "163": "\u0391\u039b\u03a5\u039a\u0395\u03a3", "164": "\u0395\u03a3\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f", "165": "\u0395\u0398\u039d\u0399\u039a\u039f \u03a3\u03a5\u03a3\u03a4\u0397\u039c\u0391 \u03a5\u0393\u0395\u0399\u0391\u03a3", "166": "\u039d\u039f\u039c\u039f\u0398\u0395\u03a4\u0399\u039a\u0397 \u0395\u039e\u039f\u03a5\u03a3\u0399\u0391", "167": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3H 
\u039a\u039f\u0399\u039d\u03a9\u039dIK\u0397\u03a3 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3", "168": "\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u0391", "169": "\u039c\u0391\u0398\u0397\u03a4\u0399\u039a\u0397 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391", "170": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u03a4\u03a5\u03a0\u039f\u03a5 \u039a\u0391\u0399 \u03a4\u039f\u03a5\u03a1\u0399\u03a3\u039c\u039f\u03a5", "171": "\u0395\u03a0\u039f\u0399\u039a\u0399\u03a3\u039c\u039f\u03a3", "172": "\u03a4\u03a1\u039f\u03a7\u0399\u039f\u0394\u03a1\u039f\u039c\u039f\u0399", "173": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "174": "\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "175": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u0398\u039d\u0399\u039a\u0397\u03a3 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391\u03a3", "176": "\u0398\u0395\u0391\u03a4\u03a1\u039f", "177": "\u03a5\u0394\u03a1\u0395\u03a5\u03a3\u0397", "178": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "179": "\u0395\u0398\u039d\u0399\u039a\u039f \u039c\u0395\u03a4\u03a3\u039f\u0392\u0399\u039f \u03a0\u039f\u039b\u03a5\u03a4\u0395\u03a7\u039d\u0395\u0399\u039f", "180": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u03a9\u039d", "181": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u03aa\u039a\u039f\u0399 \u03a0\u039f\u039b\u03a5\u0395\u0398\u039d\u0395\u0399\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399", "182": "\u0395\u039b\u0395\u03a5\u0398\u0395\u03a1\u0399\u0391 \u03a4\u0397\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "183": 
"\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u03a3\u03a9\u03a4\u0395\u03a1\u0399\u039a\u03a9\u039d \u0394\u0397\u039c.\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397\u03a3 \u039a\u0391\u0399 \u0391\u03a0\u039f\u039a\u0395\u039d\u03a4\u03a1\u03a9\u03a3\u0397\u03a3", "184": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0395\u039d\u039f\u03a7\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u0395\u03a3\u0395\u0399\u03a3", "185": "\u039b\u0397\u039e\u0399\u0391\u03a1\u03a7\u0395\u0399\u0391", "186": "\u0395\u0399\u0394\u0399\u039a\u039f\u0399 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399", "187": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "188": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f \u03a0\u039f\u0399\u039d\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "189": "\u03a3\u03a4\u0395\u0393\u0391\u03a3\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d", "190": "\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u0391 \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "191": "\u03a3\u03a5\u039d\u03a4\u0391\u0393\u039c\u0391\u03a4\u0399\u039a\u039f\u03a3 \u03a7\u0391\u03a1\u03a4\u0397\u03a3", "192": "\u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u03a3\u039c\u039f\u03a3", "193": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "194": "\u039b\u0395\u03a3\u03a7\u0395\u03a3 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391\u03a3", "195": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0394\u0397\u039c\u039f\u03a3\u0399\u0391\u03a3 TA\u039e\u0397\u03a3", "196": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "197": 
"\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f \u0398\u0395\u03a3\u03a3\u0391\u039b\u039f\u039d\u0399\u039a\u0397\u03a3", "198": "\u0394\u0391\u03a3\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "199": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0391\u039d\u03a9\u03a4\u0391\u03a4\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3", "200": "\u0395\u0394\u0391\u03a6\u039f\u03a3 \u03a4\u039f\u03a5 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u039f\u03a5 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u03a3", "201": "\u0394\u0399\u039a\u0397\u0393\u039f\u03a1\u039f\u0399", "202": "\u0394\u0399\u039a\u0391\u0399\u039f \u03a4\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u03a9\u039d", "203": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0397\u03a3, \u03a4\u0397\u039b\u0395\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397\u03a3", "204": "\u03a3\u03a7\u039f\u039b\u0399\u039a\u0391 \u039a\u03a4\u0399\u03a1\u0399\u0391 \u039a\u0391\u0399 \u03a4\u0391\u039c\u0395\u0399\u0391", "205": "\u0391\u0395\u03a1\u039f\u039b\u0399\u039c\u0395\u039d\u0395\u03a3", "206": "\u03a5\u03a0\u039f\u0398\u0397\u039a\u039f\u03a6\u03a5\u039b\u0391\u039a\u0395\u0399\u0391", "207": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u0391\u03a3 \u03a4\u0391\u039e\u0397\u03a3", "208": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0395\u0399\u03a3 \u03a4\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5", "209": "\u0395\u039c\u03a0\u03a1\u0391\u0393\u039c\u0391\u03a4\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "210": "\u03a6\u039f\u03a1\u03a4\u039f\u0395\u039a\u03a6\u039f\u03a1\u03a4\u03a9\u03a3\u0395\u0399\u03a3", "211": 
"\u0391\u039d\u03a9\u039d\u03a5\u039c\u0395\u03a3 \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0395\u03a3", "212": "\u0395\u0399\u0394\u0399\u039a\u039f\u0399 \u0395\u03a0\u0399\u03a3\u0399\u03a4\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "213": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0395\u03a3 \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u0397\u03a3", "214": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "215": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397\u03a3 \u0395\u039b\u0395\u03a5\u0398\u0395\u03a1\u03a9\u039d \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u03a9\u039d", "216": "\u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 \u0391\u0395\u03a1\u039f\u03a0\u039b\u039f\u03aa\u0391\u03a3", "217": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039a\u0391\u0399 \u0391\u03a1\u03a9\u0393\u0397\u03a3", "218": "\u0391\u039d\u03a9\u03a4\u0391\u03a4\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "219": "\u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0397 \u0394\u0399\u0391\u0398\u0395\u03a3\u0399\u039c\u039f\u03a4\u0397\u03a4\u0391", "220": "\u03a0\u039f\u0399\u039d\u0399\u039a\u039f \u039a\u0391\u0399 \u03a0\u0395\u0399\u0398\u0391\u03a1\u03a7\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "221": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0395\u03a0\u0399\u03a4\u0397\u0394\u0395\u03a5\u039c\u0391\u03a4\u039f\u03a3", "222": "\u0395\u039a\u03a4\u0391\u039a\u03a4\u0395\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0395\u03a3", "223": "\u03a0\u039f\u0399\u039d\u0399\u039a\u0397 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391", "224": "\u03a3\u03a4\u039f\u0399\u03a7\u0395\u0399\u03a9\u0394\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "225": 
"\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u0395\u03a0\u0399\u039a\u03a1\u0391\u03a4\u0395\u0399\u0391\u03a3 \u039a\u0391\u0399 \u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "226": "\u039d\u039f\u039c\u0399\u039a\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0391 \u039a\u0391\u0399 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0395\u0399\u03a3", "227": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "228": "\u03a4\u03a5\u03a0\u039f\u03a3", "229": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u03a9\u039d", "230": "\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f \u0399\u03a9\u0391\u039d\u039d\u0399\u039d\u03a9\u039d", "231": "\u03a7\u03a1\u0395\u03a9\u0393\u03a1\u0391\u03a6\u0391", "232": "\u03a0\u03a1\u039f\u03aa\u039f\u039d\u03a4\u0391 \u0395\u039b\u0391\u0399\u0391\u03a3", "233": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391 \u0399\u039f\u039d\u0399\u03a9\u039d \u039d\u0397\u03a3\u03a9\u039d", "234": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3H \u03a5\u0393\u0399\u0395\u0399\u039d\u0397\u03a3", "235": "\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u039f \u03a0\u039f\u0399\u039d\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "236": "\u039a\u0391\u03a4\u0391\u03a0\u039f\u039b\u0395\u039c\u0397\u03a3\u0397 \u039d\u039f\u03a3\u03a9\u039d \u039a\u0391\u03a4\u2019 \u0399\u0394\u0399\u0391\u039d", "237": "\u0395\u0399\u0394\u0399\u039a\u039f\u0399 \u03a0\u039f\u0399\u039d\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "238": "\u0398\u0397\u03a1\u0391", "239": "\u03a5\u0393\u0399\u0395\u0399\u039d\u0397 \u039a\u0391\u0399 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 
\u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u03a9\u039d", "240": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u03a3\u03a5\u0393\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u03a9\u039d", "241": "\u0391\u03a0\u039f\u03a3\u03a4\u039f\u039b\u0399\u039a\u0397 \u0394\u0399\u0391\u039a\u039f\u039d\u0399\u0391 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "242": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a1\u0399\u039d\u0395\u03a3 \u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3", "243": "\u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0391 \u03a4\u0391\u039c\u0399\u0395\u03a5\u03a4\u0397\u03a1\u0399\u0391", "244": "\u0391\u039d\u03a9\u03a4\u0391\u03a4\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u039a\u0391\u039b\u03a9\u039d \u03a4\u0395\u03a7\u039d\u03a9\u039d", "245": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "246": "\u0391\u0393\u0399\u039f\u039d \u039f\u03a1\u039f\u03a3", "247": "\u03a3\u03a7\u039f\u039b\u0395\u03a3 \u03a0. 
\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "248": "\u03a4\u03a1\u0391\u03a0\u0395\u0396\u0395\u03a3", "249": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039a\u0399\u039d\u0397\u03a3\u0395\u03a9\u03a3 \u039c\u0395 \u03a4\u039f \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f", "250": "\u0395\u0399\u0394\u0399\u039a\u0391\u0399 \u039a\u0391\u03a4\u0397\u0393\u039f\u03a1\u0399\u0391\u0399 \u03a0\u039b\u039f\u0399\u03a9\u039d", "251": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0397 \u03a5\u0393\u0399\u0395\u0399\u039d\u0397", "252": "\u0395\u039e\u039f\u0394\u0391 \u03a0\u039f\u0399\u039d\u0399\u039a\u0397\u03a3 \u0394\u0399\u0391\u0394\u0399\u039a\u0391\u03a3\u0399\u0391\u03a3", "253": "\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391 \u0393\u03a5\u039d\u0391\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u0391\u039d\u0397\u039b\u0399\u039a\u03a9\u039d", "254": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u0395\u03a6\u039f\u0394\u0399\u0391\u03a3\u039c\u039f\u03a5", "255": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0391 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0391", "256": "\u0395\u039a\u03a4\u0395\u039b\u03a9\u039d\u0399\u03a3\u03a4\u0395\u03a3", "257": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u039b\u0397\u03a1\u039f\u039d\u039f\u039c\u0399\u03a9\u039d, \u0394\u03a9\u03a1\u0395\u03a9\u039d \u039a\u039b\u03a0", "258": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "259": "\u0395\u039d\u0399\u03a3\u03a7\u03a5\u03a3\u0397 \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u03a4\u0395\u03a7\u039d\u03a9\u039d", "260": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "261": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0395\u03a3 
\u03a0\u03a1\u039f\u0394\u0399\u0391\u0393\u03a1\u0391\u03a6\u0395\u03a3", "262": "\u039c\u0397\u03a4\u03a1\u03a9\u0391 \u0394\u0397\u039c\u039f\u03a4\u03a9\u039d", "263": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "264": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u039d \u0394\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "265": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0391\u039d\u03a4\u0399\u039b\u0397\u03a8\u0397", "266": "\u03a4\u0395\u039b\u0397 \u03a7\u0391\u03a1\u03a4\u039f\u03a3\u0397\u039c\u039f\u03a5", "267": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u039f\u0399 \u0393\u0395\u039d\u0399\u039a\u0391", "268": "\u039b\u0399\u039c\u0395\u039d\u0399\u039a\u0395\u03a3 \u0391\u03a1\u03a7\u0395\u03a3", "269": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039a\u03a5\u039a\u039b\u039f\u03a6\u039f\u03a1\u0399\u0391\u03a3", "270": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u03a3 \u039a\u0391\u0399 \u0391\u03a5\u03a4\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u03a9\u039d", "271": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397 \u039a\u0391\u0399 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0395\u03a0\u0399\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u03a3\u0397", "272": "\u03a4\u0397\u039b\u0395\u0393\u03a1\u0391\u03a6\u039f\u0399", "273": "\u03a3\u0395\u0399\u03a3\u039c\u039f\u03a0\u039b\u0397\u039a\u03a4\u039f\u0399", "274": "\u0399\u0391\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u03a0\u0397\u0393\u0395\u03a3", "275": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u039f \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f", "276": 
"\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "277": "\u039d\u039f\u039c\u0399\u039a\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0391 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "278": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391 \u039a\u03a1\u0397\u03a4\u0397\u03a3", "279": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u039d\u039f\u039c\u0399\u03a3\u039c\u0391\u03a4\u039f\u03a3", "280": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a0\u03a1\u039f\u03aa\u039f\u039d\u03a4\u03a9\u039d \u0391\u039c\u03a0\u0395\u039b\u039f\u03a5", "281": "\u0391\u039d\u0391\u03a0\u0397\u03a1\u039f\u0399 \u039a\u0391\u0399 \u0398\u03a5\u039c\u0391\u03a4\u0391 \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5", "282": "\u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3", "283": "\u03a4\u039f\u03a0\u0399\u039a\u0397 \u0391\u03a5\u03a4\u039f\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397", "284": "O\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5 \u039e\u0397\u03a1\u0391\u03a3", "285": "\u0394\u0399\u0391\u039a\u039f\u03a0\u0395\u03a3 \u03a4\u0397\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "286": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0397\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "287": "\u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u0391", "288": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "289": "\u039d\u0391\u03a1\u039a\u03a9\u03a4\u0399\u039a\u0391", "290": "\u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0395\u0399\u03a9\u039d", 
"291": "\u039c\u039f\u03a5\u03a3\u0399\u039a\u0397", "292": "\u039d\u039f\u039c\u0391\u03a1\u03a7\u0399\u0395\u03a3", "293": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "294": "\u0393\u0395\u039d\u0399\u039a\u039f \u03a7\u0397\u039c\u0395\u0399\u039f \u03a4\u039f\u03a5 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u03a3", "295": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0397", "296": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "297": "\u03a0\u0391\u03a1\u039f\u03a7\u039f\u0399 \u03a3\u03a4\u0391\u0398\u0395\u03a1\u03a9\u039d \u0397\u039b\u0395\u039a\u03a4\u03a1\u039f\u039d\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u03a9\u039d", "298": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u039f\u03a3 \u039a\u0399\u039d\u0394\u03a5\u039d\u039f\u03a3", "299": "\u0395\u039d\u039f\u03a7\u0395\u03a3 \u03a3\u0395 \u03a7\u03a1\u03a5\u03a3\u039f \u039a\u0391\u0399 \u03a3\u03a5\u039d\u0391\u039b\u039b\u0391\u0393\u039c\u0391", "300": "\u0399\u03a0\u03a0\u039f\u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397", "301": "\u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u0391", "302": "\u0391\u0393\u039f\u03a1\u0391\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "303": "\u03a0\u03a1\u039f\u03a3\u03a6\u03a5\u0393\u0395\u03a3", "304": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391 \u0398\u0395\u039c\u0391\u03a4\u0391", "305": "\u0393\u0395\u039d. \u0393\u03a1\u0391\u039c\u039c. \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391\u03a3 - \u0393\u0395\u039d. \u0393\u03a1\u0391\u039c\u039c. 
\u0395\u03a1\u0395\u03a5\u039d\u0391\u03a3 \u039a\u0391\u0399 \u03a4\u0395\u03a7\u039d\u039f\u039b\u039f\u0393\u0399\u0391\u03a3", "306": "\u0394\u0399\u0391\u039c\u0395\u03a4\u0391\u039a\u039f\u039c\u0399\u03a3\u0397", "307": "\u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a4\u0391\u03a3\u0399\u039f", "308": "\u03a5\u0394\u0391\u03a4\u0391", "309": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u0394\u0399\u0395\u03a5\u039a\u039f\u039b\u03a5\u039d\u03a3\u0395\u0399\u03a3 \u039a\u0391\u0399 \u0391\u03a0\u0391\u039b\u039b\u0391\u0393\u0395\u03a3", "310": "\u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u0391", "311": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u0394\u0399\u039a\u0391\u03a3\u0399\u0395\u03a3", "312": "\u03a0\u03a1\u039f\u039d\u039f\u0399\u0391 \u0393\u0399\u0391 \u03a4\u039f\u03a5\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u039f\u03a5\u03a3", "313": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391", "314": "\u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 \u03a7\u03a1\u039f\u039d\u039f\u03a5 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "315": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a4\u03a5\u03a0\u039f\u03a5", "316": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f\u0399 \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0395\u03a3", "317": "\u039b\u039f\u03a5\u03a4\u03a1\u039f\u03a0\u039f\u039b\u0395\u0399\u03a3", "318": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "319": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u039d\u039f\u039c\u0399\u039a\u03a9\u039d", "320": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "321": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 
\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "322": "\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0395\u03a3 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0395\u0399\u03a3", "323": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0395\u03a3 \u03a0\u03a1\u0391\u039e\u0395\u0399\u03a3", "324": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "325": "\u0392\u0391\u03a3\u0399\u039b\u0395\u0399\u0391 \u039a\u0391\u0399 \u0391\u039d\u03a4\u0399\u0392\u0391\u03a3\u0399\u039b\u0395\u0399\u0391", "326": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0397\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "327": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u039a\u0391\u0399 \u039a\u0399\u039d\u0397\u03a4\u03a1\u0391 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0395\u039d\u0394\u03a5\u03a3\u0395\u03a9\u039d", "328": "\u0392\u0391\u03a3\u0399\u039b\u0399\u039a\u0391 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u0391", "329": "\u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u039f\u0399 \u0393\u0395\u039d\u0399\u039a\u0391", "330": "\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u0399\u039a\u0397 \u0399\u0394\u0399\u039f\u039a\u03a4\u0397\u03a3\u0399\u0391", "331": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391", "332": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0391 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0391", "333": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0391\u03a0\u039d\u039f\u03a5", "334": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397", "335": "\u03a7\u03a9\u03a1\u039f\u03a6\u03a5\u039b\u0391\u039a\u0397", "336": 
"\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "337": "\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f \u03a0\u0391\u03a4\u03a1\u03a9\u039d", "338": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u03a9\u039d", "339": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399", "340": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0395\u03a3", "341": "\u03a5\u03a0\u039f\u039d\u039f\u039c\u039f\u0399", "342": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0395\u03a6\u0391\u039b\u0391\u0399\u039f\u03a5", "343": "\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0395\u03a3 \u03a0\u0395\u03a1\u0399\u03a9\u03a1\u0399\u03a3\u039c\u0395\u039d\u0397\u03a3 \u0395\u03a5\u0398\u03a5\u039d\u0397\u03a3", "344": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u038a\u039f \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u038f\u039d \u0391\u03a3\u03a6\u0391\u039b\u038a\u03a3\u0395\u03a9\u039d", "345": "\u03a3\u03a5\u039c\u0392\u039f\u039b\u0391\u0399\u039f\u0393\u03a1\u0391\u03a6\u039f\u0399", "346": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0391\u03a1\u03a4\u0395\u03a1\u0393\u0391\u03a4\u03a9\u039d", "347": "\u0395\u03a1\u0393\u0391 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0395\u03a3 \u0394\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "348": "\u0395\u039b\u0395\u0393\u039a\u03a4\u0399\u039a\u039f \u03a3\u03a5\u039d\u0395\u0394\u03a1\u0399\u039f", "349": 
"\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u039f\u039d\u0399\u039a\u0391 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u0391", "350": "\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u0395\u039d\u039f\u03a0\u039b\u03a9\u039d \u0394\u03a5\u039d\u0391\u039c\u0395\u03a9\u039d", "351": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u039c\u03a0\u039f\u03a1\u03a9\u039d (\u03a4.\u0391.\u0395)", "352": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0397 \u03a0\u039f\u0399\u039d\u0399\u039a\u0397", "353": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039f\u0399\u039d\u039f\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u039f\u03a3", "354": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u039d", "355": "\u03a3\u03a5\u039b\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "356": "\u03a7\u03a1\u0397\u039c\u0391\u03a4\u0399\u03a3\u03a4\u0397\u03a1\u0399\u0391", "357": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0391\u0399 \u039a\u0391\u0399 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391\u0399 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3", "358": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0397 \u03a3\u03a4\u0395\u0393\u0391\u03a3\u03a4\u0399\u039a\u0397 \u03a3\u03a5\u039d\u0394\u03a1\u039f\u039c\u0397", "359": "\u039a\u0391\u03a4\u039f\u03a7\u03a5\u03a1\u03a9\u03a3\u0397 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u03a9\u039d", "360": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0391\u0398\u0391\u03a1\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u039f\u0394\u039f\u03a5", "361": "\u03a0\u0395\u03a1\u0399\u03a6\u0395\u03a1\u0395\u0399\u0395\u03a3", "362": 
"\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0397 \u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a5\u039d\u0397", "363": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u03a9\u039d", "364": "\u0395\u0398\u039d\u0399\u039a\u0391 \u039a\u039b\u0397\u03a1\u039f\u0394\u039f\u03a4\u0397\u039c\u0391\u03a4\u0391", "365": "\u0395\u0393\u0393\u0395\u0399\u039f\u0392\u0395\u039b\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "366": "\u039b\u0399\u039c\u0395\u039d\u0395\u03a3", "367": "\u03a6\u03a5\u039b\u0391\u039a\u0395\u03a3", "368": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0397 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "369": "\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0397 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "370": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u039f\u03a3 \u039d\u039f\u039c\u039f\u03a3", "371": "\u0399\u0394\u03a1\u03a5\u039c\u0391 \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u039d", "372": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u03a9\u039d", "373": "\u0395\u0399\u0394\u0399\u039a\u039f\u0399 \u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "374": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "375": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u0391 \u039c\u0391\u039a\u0395\u0394\u039f\u039d\u0399\u0391\u03a3\u2013\u0398\u03a1\u0391\u039a\u0397\u03a3, \u0391\u0399\u0393\u0391\u0399\u039f\u03a5 \u039a.\u039b.\u03a0", "376": "\u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u039a\u039f\u038a \u03a3\u039a\u038e\u039b\u039f\u0399", "377": 
"\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0398\u0395\u039c\u0391\u03a4\u0391", "378": "\u0395\u039a\u0394\u039f\u03a3\u0397 \u0395\u0393\u039a\u039b\u0397\u039c\u0391\u03a4\u0399\u03a9\u039d", "379": "\u0391\u0393\u039f\u03a1\u0391\u039d\u039f\u039c\u0399\u0391", "380": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f \u03a4\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5", "381": "\u0391\u03a3\u03a4\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "382": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0395\u03a3 \u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3", "383": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0395\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a3\u0395\u0399\u03a3", "384": "\u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u0391", "385": "\u0393\u0395\u039d\u0399\u039a\u039f\u0399 \u0395\u03a0\u0399\u03a3\u0399\u03a4\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "386": "\u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391 \u03a0\u039f\u039b\u0395\u03a9\u039d", "387": "\u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u039f\u0399 \u039a\u0391\u0399 \u0395\u03a1\u0393\u039f\u039b\u0391\u0392\u039f\u0399", "388": "\u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3"}}}}], "splits": [{"name": "train", "num_bytes": 216757887, "num_examples": 28536}, {"name": "test", "num_bytes": 71533786, "num_examples": 9516}, {"name": "validation", "num_bytes": 68824457, "num_examples": 9511}], "download_size": 145510070, "dataset_size": 357116130}, {"config_name": "subject", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "\u039c\u0395\u03a4\u039f\u03a7\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f \u03a0.\u039d", "1": "\u039c\u0395\u03a4\u0391\u039d\u0391\u03a3\u03a4\u0395\u03a5\u03a3\u0397 \u03a3\u03a4\u039f \u0392\u0395\u039b\u0393\u0399\u039f", "2": 
"\u039d\u0391\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u03a6\u03a5\u039b\u0391\u039a\u0395\u03a3", "3": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0395\u03a9\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "4": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u0397 \u039a\u0391\u0399 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "5": "\u0391\u03a3\u039a\u0397\u03a3\u0397 \u03a0\u039f\u0399\u039d\u0399\u039a\u0397\u03a3 \u0391\u0393\u03a9\u0393\u0397\u03a3", "6": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u03a3\u03a9\u03a4\u0395\u03a1\u0399\u039a\u0397\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u03a3 \u0395\u03a0\u0399\u0392\u0391\u03a4\u0397\u0393\u03a9\u039d \u03a0\u039b\u039f\u0399\u03a9\u039d", "7": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397\u03a3 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391\u03a3 - \u03a0\u0391\u039b\u0391\u0399\u039f\u03a3", "8": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a4\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f\u03a5 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u039c\u03a0\u039f\u03a1\u03a9\u039d (\u03a4.\u0391.\u0395)", "9": "\u039c\u0397\u03a7\u0391\u039d\u039f\u039b\u039f\u0393\u039f\u0399, \u0397\u039b\u0395\u039a\u03a4\u03a1\u039f\u039b\u039f\u0393\u039f\u0399, \u039d\u0391\u03a5\u03a0\u0397\u0393\u039f\u0399 \u039a\u0391\u0399 \u039c\u0397\u03a7\u0391\u039d\u039f\u0394\u0397\u0393\u039f\u0399", "10": "\u03a3\u03a4\u0395\u0393\u0391\u03a3\u0397 \u03a0\u0391\u03a1\u0391\u03a0\u0397\u0393\u039c\u0391\u03a4\u039f\u03a5\u03a7\u03a9\u039d", "11": "\u039d\u039f\u039c\u0399\u03a3\u039c\u0391\u03a4\u0399\u039a\u0397 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397", "12": 
"\u03a0\u0395\u03a1\u0399\u03a6\u0395\u03a1\u0395\u0399\u0391\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391", "13": "\u039c\u0397\u03a4\u03a1\u03a9\u0391 \u0391\u03a1\u03a1\u0395\u039d\u03a9\u039d", "14": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u039a\u039f\u03a0\u0395\u03a3", "15": "\u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u0391 \u03a0\u0395\u03a1\u0399 \u03a0\u03a1\u039f\u039e\u0395\u039d\u0399\u039a\u03a9\u039d \u03a3\u03a7\u0395\u03a3\u0395\u03a9\u039d", "16": "\u03a0\u0391\u039b\u0391\u0399\u039f\u0399 \u0391\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039a\u03a9\u0394\u0399\u039a\u0395\u03a3", "17": "\u039a\u039b\u0391\u0394\u039f\u03a3 \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0394\u0399\u039a\u0397\u0393\u039f\u03a1\u03a9\u039d (\u039a.\u0395.\u0391.\u0394.)", "18": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u0391\u03a1\u039c\u039f\u0394\u0399\u039f\u03a4\u0397\u03a4\u0395\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0391\u03a1\u03a7\u03a9\u039d", "19": "\u03a5\u03a0\u039f\u039d\u039f\u039c\u039f\u0399 \u0398\u0395\u03a3\u03a3\u0391\u039b\u039f\u039d\u0399\u039a\u0397\u03a3", "20": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u03a5\u0394\u03a1\u0391\u03a5\u039b\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391", "21": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0398\u0395\u0391\u03a4\u03a1\u0399\u039a\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d \u039a\u0391\u0399 \u0394\u0399\u03a3\u039a\u03a9\u039d", "22": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0399\u03a0\u03a0\u039f\u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3", "23": "\u03a3\u03a9\u039c\u0391\u03a4\u0399\u039a\u0397 \u0391\u0393\u03a9\u0393\u0397", "24": "\u0395\u039a\u0394\u0399\u039a\u0391\u03a3\u0397 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u03a9\u039d 
\u03a0\u0391\u03a1\u0391\u0392\u0391\u03a3\u0395\u03a9\u039d", "25": "\u039a\u0399\u039d\u0397\u03a4\u03a1\u0391 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0395\u039d\u0394\u03a5\u03a3\u0395\u03a9\u039d \u03a3\u03a4\u0397\u039d \u03a0\u0395\u03a1\u0399\u03a6\u0395\u03a1\u0395\u0399\u0391", "26": "\u039c\u0395\u039b\u0397 \u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u0391\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u039c\u0395\u039d\u03a9\u039d", "27": "\u039a\u0395\u03a1\u039c\u0391\u03a4\u0391", "28": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391 \u0391\u039d\u0391\u03a0\u03a1\u039f\u03a3\u0391\u03a1\u039c\u039f\u0393\u0397\u03a3", "29": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u0394\u0391\u03a3\u0399\u039a\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "30": "\u039b\u0399\u03a0\u0391\u03a3\u039c\u0391\u03a4\u0391", "31": "\u0395\u03a0\u0399\u03a7\u039f\u03a1\u0397\u0393\u0397\u03a3\u0397 \u03a3\u03a0\u039f\u03a5\u0394\u0391\u03a3\u03a4\u03a9\u039d \u03a4\u0395\u039a\u039d\u03a9\u039d \u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u03a9\u039d", "32": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u039f\u0399\u039d\u039f\u03a5", "33": "\u03a0\u03a4\u0397\u03a4\u0399\u039a\u039f \u039a\u0391\u0399 \u039a\u0391\u03a4\u0391\u0394\u03a5\u03a4\u0399\u039a\u039f \u0395\u03a0\u0399\u0394\u039f\u039c\u0391", "34": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u03a9\u039d \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d (\u03a4.\u0395.\u0391.\u03a5.\u0395.\u039a.)", "35": "\u0395\u039a\u039a\u039f\u039a\u039a\u0399\u03a3\u0397 \u0392\u0391\u039c\u0392\u0391\u039a\u039f\u03a3", "36": "\u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u039f 
\u039a\u0399\u039d\u0399\u039d\u039f\u03a5", "37": "\u0399\u039d\u03a3\u03a4\u0399\u03a4\u039f\u03a5\u03a4\u0391 \u0394\u0399\u0395\u0398\u039d\u039f\u03a5\u03a3 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "38": "\u0399\u0391\u03a0\u03a9\u039d\u0399\u0391 \u2013 \u0399\u039d\u0394\u0399\u0391 \u2013\u0399\u039f\u03a1\u0394\u0391\u039d\u0399\u0391 \u039a.\u039b\u03a0", "39": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391 \u03a3\u03a4\u039f\u039b\u0397\u03a3", "40": "\u0391\u039d\u0391\u0393\u039d\u03a9\u03a1\u0399\u03a3\u0395\u0399\u03a3", "41": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u0395\u03a1\u0393\u039f\u039b\u0397\u03a0\u03a4\u03a9\u039d", "42": "\u0391\u039d\u0391\u03a3\u03a4\u039f\u039b\u0397 \u03a4\u0397\u03a3 \u03a0\u039f\u0399\u039d\u0397\u03a3", "43": "\u03a0\u039f\u03a4\u0391\u039c\u039f\u03a0\u039b\u039f\u0399\u0391", "44": "\u0395\u0399\u0394\u0399\u039a\u0397 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0397 \u03a0\u0391\u03a1\u0391\u039a\u039f\u039b\u039f\u03a5\u0398\u0397\u03a3\u0397", "45": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a6\u0391\u03a1\u039c\u0391\u039a\u0395\u0399\u03a9\u039d", "46": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0398\u03a5\u039c\u0391\u03a4\u03a9\u039d \u0395\u0398\u039d\u0399\u039a\u03a9\u039d", "47": "\u0391\u03a0\u039b\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u03a9\u039d \u0394\u0399\u0391\u03a4\u03a5\u03a0\u03a9\u03a3\u0395\u03a9\u039d", "48": "\u039a\u039b\u0391\u0394\u039f\u03a3 \u0391\u03a3\u0398\u0395\u039d\u0395\u0399\u0391\u03a3 \u03a4.\u0391.\u039a.\u0395", "49": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a5\u03a0\u039f\u0394\u039f\u03a7\u0397\u03a3 \u03a0\u039b\u039f\u0399\u03a9\u039d \u039a\u0391\u0399 \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0397 \u03a7\u03a1\u0397\u03a3\u0397 \u039b\u0399\u039c\u0395\u039d\u03a9\u039d", "50": 
"\u03a6\u0391\u03a1\u039c\u0391\u039a\u0395\u0399\u039f \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "51": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a6\u03a5\u0393\u03a9\u039d \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f\u03a5 \u03a4\u0397\u03a3 \u0395\u03a5\u03a1\u03a9\u03a0\u0397\u03a3", "52": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0395\u03a3", "53": "\u0399\u03a3\u03a1\u0391\u0397\u039b\u0399\u03a4\u0399\u039a\u0395\u03a3 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u0395\u03a3", "54": "\u03a3\u0395\u0399\u03a3\u039c\u039f\u03a0\u039b\u0397\u039a\u03a4\u039f\u0399 \u03a3\u03a4\u0395\u03a1\u0395\u0391\u03a3 \u0395\u039b\u039b\u0391\u0394\u0391\u03a3 (\u0391\u03a4\u03a4\u0399\u039a\u0397\u03a3, \u0392\u039f\u0399\u03a9\u03a4\u0399\u0391\u03a3 \u039a.\u039b.\u03a0.)", "55": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3 \u03a0.\u039d", "56": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u039c\u03a0\u039f\u03a1.\u039a\u0391\u0399 \u0392\u0399\u039f\u039c.- \u0395\u03a0\u0391\u0393\u0393\u0395\u039b. \u039a\u0391\u0399 \u0392\u0399\u039f\u03a4\u0395\u03a7\u039d. 
\u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u03a9\u039d \u03a4\u039f\u03a5 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u03a3", "57": "\u0395\u0398\u039d\u0399\u039a\u0397 \u039a\u03a4\u0397\u039c\u0391\u03a4\u0399\u039a\u0397 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391", "58": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u0399 \u0391\u039a\u039f\u039b\u039f\u03a5\u0398\u039f\u0399", "59": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0395\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3", "60": "\u039c\u0399\u039a\u03a1\u039f\u03a6\u03a9\u03a4\u039f\u0393\u03a1\u0391\u03a6\u0399\u0395\u03a3", "61": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399-\u03a4.\u03a3.\u0391.\u03a5", "62": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0.\u039d", "63": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0391 \u03a3\u03a7\u039f\u039b\u0395\u0399\u0391 \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u0397\u03a3", "64": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397\u03a3", "65": "\u0395\u0398\u039d\u0399\u039a\u0397 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "66": "\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u039d.\u03a0.\u0394.\u0394", "67": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u039c\u0395 \u03a3\u03a7\u0395\u03a3\u0397 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "68": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 
\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0391\u03a3 \u03a5\u0394\u03a1\u0395\u03a5\u03a3\u0397\u03a3 \u039a\u0391\u0399 \u0391\u03a0\u039f\u03a7\u0395\u03a4\u0395\u03a5\u03a3\u0397\u03a3 \u03a0\u03a1\u03a9\u03a4\u0395\u03a5\u039f\u03a5\u03a3\u0397\u03a3 (\u03a4.\u0395.\u0391.\u03a0.\u0395.\u03a5.\u0391.\u03a0.)", "69": "\u03a3\u03a9\u039c\u0391 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u03a5 \u0395\u039b\u0395\u0393\u03a7\u039f\u03a5", "70": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u03a0\u0395\u03a1\u0399 \u0394\u0399\u0395\u039a\u0394\u0399\u039a\u0397\u03a3\u0395\u03a9\u03a3 \u0394\u0399\u0391\u03a4\u03a1\u039f\u03a6\u0397\u03a3", "71": "\u0399\u03a3\u039f\u03a4\u0397\u03a4\u0391 \u03a4\u03a9\u039d \u0394\u03a5\u039f \u03a6\u03a5\u039b\u03a9\u039d", "72": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a1\u03a9\u0393\u0397\u03a3 \u039a\u0391\u0399 \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f", "73": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u039f \u0394\u0395\u039b\u03a4\u0399\u039f", "74": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "75": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u039b\u0399\u039c\u0395\u039d\u039f\u03a3 \u03a0\u0395\u0399\u03a1\u0391\u0399\u03a9\u03a3 \u0391\u039d\u03a9\u039d\u03a5\u039c\u0397 \u0395\u03a4\u0391\u0399\u03a1\u0399\u0391", "76": "\u0395\u039a\u039a\u0391\u0398\u0391\u03a1\u0399\u03a3\u0399\u03a3 \u0394\u0399\u039f\u03a1\u0399\u03a3\u039c\u03a9\u039d \u039a\u0391\u0399 \u03a0\u03a1\u039f\u0391\u0393\u03a9\u0393\u03a9\u039d \u039a\u0391\u03a4\u039f\u03a7\u0397\u03a3", "77": "\u03a4\u0391\u039e\u0399\u039d\u039f\u039c\u0397\u03a3\u0397 \u0392\u0391\u039c\u0392\u0391\u039a\u039f\u03a3", "78": "\u03a0\u03a1\u03a5\u03a4\u0391\u039d\u0395\u0399\u03a3 \u039a\u0391\u0399 \u039a\u039f\u03a3\u039c\u0397\u03a4\u039f\u03a1\u0395\u03a3", "79": 
"\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u039a\u039f \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3", "80": "\u03a9\u03a1\u0395\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u03a3\u03a4\u0397\u039d \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391 \u039a\u0391\u0399 \u0392\u0399\u039f\u03a4\u0395\u03a7\u039d\u0399\u0391", "81": "\u03a7\u0391\u03a1\u03a4\u0397\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a5 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397\u03a3 \u03a3\u03a5\u039d\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "82": "\u0393\u03a5\u039c\u039d\u0391\u03a3\u0399\u039f \u0391\u03a0\u039f\u0394\u0397\u039c\u03a9\u039d \u0395\u039b\u039b\u0397\u039d\u039f\u03a0\u0391\u0399\u0394\u03a9\u039d", "83": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u03a3\u0398\u0395\u039d\u0395\u0399\u0391\u03a3", "84": "\u0395\u039a\u0394\u039f\u03a3\u0395\u0399\u03a3 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0397\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u03a3", "85": "\u03a0\u039b\u0397\u03a4\u03a4\u039f\u039c\u0395\u039d\u039f\u0399 \u0391\u03a0\u039f \u0398\u0395\u039f\u039c\u0397\u039d\u0399\u0395\u03a3 \u039a\u0391\u0399 \u0391\u039b\u039b\u0391 \u0395\u039a\u03a4\u0391\u039a\u03a4\u0391 \u0393\u0395\u0393\u039f\u039d\u039f\u03a4\u0391", "86": "\u03a9\u03a1\u0395\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5", "87": "\u0393\u0395\u03a9\u039c\u0397\u039b\u0391", "88": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0391\u039d\u0391\u03a4\u0399\u039c\u0397\u03a3\u0397\u03a3 \u0391\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "89": "\u03a0\u0391\u039d\u03a9\u039b\u0397\u03a3", 
"90": "\u03a3\u03a7\u039f\u039b\u0395\u03a3 \u039d\u0397\u03a0\u0399\u0391\u0393\u03a9\u0393\u03a9\u039d", "91": "\u03a6\u0391\u03a1\u039c\u0391\u039a\u0391\u03a0\u039f\u0398\u0397\u039a\u0395\u03a3", "92": "\u03a6\u03a1\u039f\u039d\u03a4\u0399\u03a3\u03a4\u0397\u03a1\u0399\u0391 \u039d\u039f\u039c\u0399\u039a\u03a9\u039d \u03a3\u03a0\u039f\u03a5\u0394\u03a9\u039d", "93": "\u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u0391\u039a\u0391 \u0395\u03a0\u0399\u0394\u039f\u039c\u0391\u03a4\u0391 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d", "94": "\u0397\u039b\u0395\u039a\u03a4\u03a1\u039f\u039a\u0399\u039d\u0397\u03a4\u0391 \u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u0391 \u0391\u0398\u0397\u039d\u03a9\u039d \u2013 \u03a0\u0395\u0399\u03a1\u0391\u0399\u03a9\u03a3 (\u0397.\u039b.\u03a0.\u0391.\u03a0.)", "95": "\u0391\u03a3\u03a4\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u0399\u03a9\u039c\u0391\u03a4\u0391 \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u03a9\u039d", "96": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "97": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u0397 \u0395\u039a\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0397\u03a3\u0397 \u0399.\u039a.\u0391", "98": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a0.\u03a3", "99": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u0399 \u03a3\u03a4\u0391\u0398\u039c\u039f\u0399", "100": "\u0399\u0395\u03a1\u0391\u03a1\u03a7\u0399\u0391 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u0391\u0393\u03a9\u0393\u0395\u03a3 \u039c\u039f\u039d\u0399\u039c\u03a9\u039d \u03a5\u03a0\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u0391\u039d\u0398\u03a5\u03a0\u0391\u03a3\u03a0\u0399\u03a3\u03a4\u03a9\u039d", "101": "\u03a4\u0391\u039c\u0395\u0399\u039f 
\u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u03a4\u039f\u03a4\u0395\u03a7\u039d\u0399\u03a4\u03a9\u039d \u039a\u0391\u0399 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u0394\u0395\u03a1\u039c\u0391\u03a4\u039f\u03a3 \u0395\u039b\u039b\u0391\u0394\u0391\u03a3 (\u03a4.\u0395.\u0391.\u0395.\u03a5.\u0394.\u0395.)", "102": "\u03a0\u03a1\u0391\u03a4\u0397\u03a1\u0399\u0391 \u0391\u03a1\u03a4\u039f\u03a5", "103": "\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0397 \u039c\u0395 \u0395\u03a0\u0399\u03a4\u0391\u0393\u0397", "104": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0397 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u0395\u039b\u0399\u039a\u039f\u03a0\u03a4\u0395\u03a1\u03a9\u039d", "105": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "106": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f\u0399 \u0391\u039d\u03a4\u0399\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u039f\u0399 \u03a4\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5", "107": "\u03a9\u03a1\u0395\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u03a3\u0395 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0391", "108": "\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u039a\u03a4\u0397\u039d\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391\u03a3", "109": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a3\u03a6\u0391\u0393\u0399\u03a9\u039d", "110": "\u03a0\u039b\u03a9\u0399\u039c\u039f\u03a4\u0397\u03a4\u0391 \u0391\u0395\u03a1\u039f\u03a3\u039a\u0391\u03a6\u03a9\u039d", "111": "\u0391\u0393\u039f\u03a1\u0391\u039d\u039f\u039c\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "112": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 
\u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0395\u03a0\u0399\u0392\u0391\u03a4\u03a9\u039d \u039a\u0391\u0399 \u0395\u039c\u03a0\u039f\u03a1\u0395\u03a5\u039c\u0391\u03a4\u03a9\u039d", "113": "\u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0395\u03a3", "114": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "115": "\u0394\u0399\u0391\u0399\u03a4\u0397\u03a3\u0399\u0391 \u03a3\u03a5\u039b\u039b\u039f\u0393\u0399\u039a\u03a9\u039d \u0394\u0399\u0391\u03a6\u039f\u03a1\u03a9\u039d - \u039c\u0395\u03a3\u039f\u039b\u0391\u0392\u0397\u03a4\u0395\u03a3 \u0394\u0399\u0391\u0399\u03a4\u0397\u03a4\u0395\u03a3", "116": "\u03a3\u039f\u03a5\u039b\u03a4\u0391\u039d\u0399\u039d\u0391", "117": "\u039c\u0395\u03a4\u0391\u0393\u03a1\u0391\u03a6\u0397", "118": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0397 \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u039f\u039d\u0399\u039a\u039f\u03a5 \u03a5\u039b\u0399\u039a\u039f\u03a5", "119": "\u0394\u0399\u0391\u03a1\u0398\u03a1\u03a9\u03a3\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d \u039f.\u0393.\u0391", "120": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u039f\u0399 - \u0395\u0398\u039d\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u0394\u0399\u039a\u0391\u03a3\u03a4\u03a9\u039d", "121": "\u03a0\u0399\u03a3\u03a4\u039f\u03a0\u039f\u0399\u0397\u03a4\u0399\u039a\u0391 \u039a\u0391\u0399 \u0394\u0399\u039a\u0391\u0399\u039f\u039b\u039f\u0393\u0397\u03a4\u0399\u039a\u0391", "122": "\u0391\u03a3\u039a\u0397\u03a3\u0397 \u0399\u0391\u03a4\u03a1\u0399\u039a\u039f\u03a5 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u039f\u03a3", "123": "\u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "124": "\u03a3\u03a7\u039f\u039b\u0397 
\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d \u03a5\u0393\u0395\u0399\u0391\u03a3 \u03a0\u0391\u039d\u039c\u0399\u039f\u03a5 \u03a0\u0391\u03a4\u03a1\u03a9\u039d", "125": "\u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u0395\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u0399\u03a3", "126": "\u039b\u0391\u03a4\u039f\u039c\u0395\u0399\u0391", "127": "\u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u0399\u0391\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0\u0397\u0393\u03a9\u039d", "128": "\u03a0\u03a9\u039b\u0397\u03a3\u0397 \u03a7\u03a1\u0395\u03a9\u0393\u03a1\u0391\u03a6\u03a9\u039d \u039c\u0395 \u0394\u039f\u03a3\u0395\u0399\u03a3", "129": "\u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391 \u03a0\u0395\u03a1\u0399 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u03a9\u039d (\u0393\u0395\u039d\u0399\u039a\u0391)", "130": "\u0395\u0399\u0394\u0399\u039a\u0391 \u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u0399\u0391", "131": "Y\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a5\u0393\u0399\u0395\u0399\u039d\u0397\u03a3", "132": "\u039b\u0397\u039e\u0399\u0391\u03a1\u03a7\u0399\u039a\u0395\u03a3 \u03a0\u03a1\u0391\u039e\u0395\u0399\u03a3", "133": "\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0393\u0399\u0391 \u03a4\u039f\u039d \u03a4\u03a5\u03a0\u039f", "134": "\u0395\u0398\u039d\u0399\u039a\u039f \u03a3\u03a5\u03a3\u03a4\u0397\u039c\u0391 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3-\u039a\u0391\u03a4\u0391\u03a1\u03a4\u0399\u03a3\u0397\u03a3", "135": "\u0391\u03a1\u039f\u03a5\u03a1\u0391\u0399\u039f\u0399 \u039a\u0391\u0399 \u0391\u039a\u03a1\u0399\u0394\u0395\u03a3", "136": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a6\u03a5\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d 
\u039d\u0391\u03a5\u03a4\u0399\u039a\u03a9\u039d", "137": "\u0391\u03a0\u039f\u03a1\u03a1\u0397\u03a4\u039f \u0395\u03a0\u0399\u03a3\u03a4\u039f\u039b\u03a9\u039d \u039a\u0391\u0399 \u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u03a9\u039d", "138": "\u03a0\u039f\u03a1\u0398\u039c\u0395\u0399\u0391 \u039a\u0391\u0399 \u039f\u03a7\u0397\u039c\u0391\u03a4\u0391\u0393\u03a9\u0393\u0391", "139": "\u039c\u0395\u03a4\u03a1\u0391 \u0395\u039e\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0397\u03a3\u0397\u03a3 \u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391\u03a3", "140": "\u03a3\u03a4\u039f\u0399\u03a7\u0395\u0399\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d \u039a\u0391\u0399 \u039d.\u03a0.\u0394.\u0394", "141": "\u03a0\u0391\u0393\u0399\u0395\u03a3 \u0391\u039c\u039f\u0399\u0392\u0395\u03a3 \u0394\u0399\u039a\u0397\u0393\u039f\u03a1\u03a9\u039d", "142": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a3\u03a7\u039f\u039b\u0397\u03a3 \u0395\u03a5\u0395\u039b\u03a0\u0399\u0394\u03a9\u039d", "143": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u039f \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u039f \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u0391\u03a3", "144": "\u0393\u03a1\u0391\u03a6\u0395\u0399\u0391 \u0395\u03a5\u03a1\u0395\u03a3\u0395\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "145": "\u0394\u0399\u0391\u03a6\u0397\u039c\u0399\u03a3\u0395\u0399\u03a3", "146": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u03a5\u03a0\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0395\u03a3", "147": "\u03a6\u039f\u03a1\u03a4\u0397\u0393\u0391 \u0391\u039a\u03a4\u039f\u03a0\u039b\u039f\u0399\u039a\u0391 \u03a0\u039b\u039f\u0399\u0391 (\u039cS) \u039c\u0395\u03a7\u03a1\u0399 500 \u039a.\u039f.\u03a7", "148": "\u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397 
\u03a3\u03a5\u039d\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 UNICEF", "149": "\u03a5\u0393\u0399\u0395\u0399\u039d\u0397 \u0398\u0395\u03a1\u0395\u03a4\u03a1\u03a9\u039d", "150": "\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u039f\u039d\u0399\u039a\u0397 \u0395\u03a1\u0395\u03a5\u039d\u0391 \u039a\u0391\u0399 \u03a4\u0395\u03a7\u039d\u039f\u039b\u039f\u0393\u0399\u0391", "151": "\u0391\u03a0\u0391\u0393\u039f\u03a1\u0395\u03a5\u03a3\u0395\u0399\u03a3 \u0395\u039e\u0391\u0393\u03a9\u0393\u0397\u03a3", "152": "\u0391\u039c\u03a0\u0395\u039b\u039f\u03a5\u03a1\u0393\u0399\u039a\u039f \u039a\u03a4\u0397\u039c\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u039f", "153": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a5\u0393\u0395\u0399\u0391\u03a3 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3", "154": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3", "155": "\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u039f\u03a5 \u0395\u039b\u0395\u0393\u03a7\u039f\u03a5", "156": "\u0394\u0395\u039b\u03a4\u0399\u0391 \u03a4\u0391\u03a5\u03a4\u039f\u03a4\u0397\u03a4\u039f\u03a3 \u03a0. \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "157": "\u0391\u039d\u03a9\u03a4\u0391\u03a4\u0397 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397", "158": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0395\u03a6\u0395\u0394\u03a1\u03a9\u039d \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d, \u0391\u039d\u0391\u03a0\u0397\u03a1\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5 \u039a\u0391\u0399 \u0391\u0393\u03a9\u039d\u0399\u03a3\u03a4\u03a9\u039d \u0395\u0398\u039d. 
\u0391\u039d\u03a4\u0399\u03a3\u03a4\u0391\u03a3\u0397\u03a3", "159": "\u03a6\u039f\u03a1\u039f\u0399 \u03a5\u03a0\u0395\u03a1 \u03a4\u03a1\u0399\u03a4\u03a9\u039d", "160": "\u0391\u0393\u03a1\u039f\u039b\u0397\u03a8\u0399\u0395\u03a3 \u0399\u039f\u039d\u0399\u03a9\u039d \u039d\u0397\u03a3\u0399\u03a9\u039d", "161": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f\u03a5 \u03a4\u03a1\u039f\u03a6\u0399\u039c\u03a9\u039d (\u03a4.\u0395.\u0391.\u03a5.\u0395.\u03a4)", "162": "\u0391\u039d\u03a9\u03a4\u0391\u03a4\u039f \u0395\u0399\u0394\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u039f", "163": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0397 \u0393\u03a5\u039d\u0391\u0399\u039a\u03a9\u039d \u03a3\u03a4\u0399\u03a3 \u0391\u039d\u03a9\u03a4\u0391\u03a4\u0395\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3", "164": "\u03a3\u03a7\u039f\u039b\u0397 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u039d\u039f\u03a3\u0397\u039b\u0395\u03a5\u03a4\u0399\u039a\u0397\u03a3 (\u03a3.\u0391.\u039d.)", "165": "\u0394\u0399\u0391\u0394\u0399\u039a\u0391\u03a3\u0399\u0391 \u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u03a9\u039d \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d", "166": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u039f\u03a5 \u03a0\u0391\u0399\u0394\u0399\u039f\u03a5", "167": "\u0391\u039c\u039d\u0397\u03a3\u03a4\u0399\u0391", "168": "\u03a3\u03a7\u039f\u039b\u0395\u03a3 \u039a\u0391\u039b\u039b\u0399\u03a4\u0395\u03a7\u039d\u0399\u039a\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3", "169": "\u03a7\u0391\u03a1\u0397 
\u039a\u0391\u0399 \u039c\u0395\u03a4\u03a1\u0399\u0391\u03a3\u039c\u039f\u03a3", "170": "\u03a4\u03a5\u03a6\u039b\u039f\u0399", "171": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u03a4\u0397\u03a3 \u0395\u03a5\u03a1\u03a9\u03a0\u0397\u03a3", "172": "\u0395\u03a1\u0393\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0395\u039a\u03a1\u0397\u039a\u03a4\u0399\u039a\u03a9\u039d \u03a5\u039b\u03a9\u039d", "173": "\u039c\u0397\u03a4\u03a1\u03a9\u0391 \u03a0. \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "174": "\u03a5\u0393\u03a1\u0397 \u0391\u039c\u039c\u03a9\u039d\u0399\u0391", "175": "\u03a0\u0395\u0399\u03a1\u0391\u039c\u0391\u03a4\u0399\u039a\u0391 \u03a3\u03a7\u039f\u039b\u0395\u0399\u0391", "176": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u0395.\u039d", "177": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u039f\u03a3 \u03a0\u03a1\u039f\u03a3\u0391\u039d\u0391\u03a4\u039f\u039b\u0399\u03a3\u039c\u039f\u03a3 \u039a\u0391\u0399 \u039a\u0391\u03a4\u0391\u03a1\u03a4\u0399\u03a3\u0397", "178": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0397 \u0395\u03a0\u0399\u0392\u039b\u0395\u03a8\u0397", "179": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a1\u0399\u039d\u0395\u03a3 \u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u0395\u03a3", "180": "\u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u039f \u03a0\u0391\u0399\u0393\u039d\u0399\u039f\u03a7\u0391\u03a1\u03a4\u03a9\u039d", "181": "\u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u0399\u0391 \u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391\u03a3", "182": "\u0395\u039a\u03a0\u039f\u0399\u0397\u03a3\u0397 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u03a9\u039d \u039a\u0399\u039d\u0397\u03a4\u03a9\u039d 
\u039a\u0391\u0399 \u0391\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "183": "\u03a3\u03a5\u039b\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 (\u0393\u0395\u039d\u0399\u039a\u0391)", "184": "\u039f\u0394\u039f\u0399\u03a0\u039f\u03a1\u0399\u039a\u0391 \u039a\u0391\u0399 \u0391\u03a0\u039f\u0396\u0397\u039c\u0399\u03a9\u03a3\u0395\u0399\u03a3 \u0395\u039a\u03a4\u039f\u03a3 \u0395\u0394\u03a1\u0391\u03a3", "185": "\u03a3\u03a4\u0395\u0393\u0391\u03a3\u03a4\u0399\u039a\u0397 \u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u03a0\u03a1\u039f\u03a3\u03a6\u03a5\u0393\u03a9\u039d", "186": "\u0391\u039d\u03a9\u03a4\u0391\u03a4\u0391 \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u0391 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0395\u03a9\u03a3", "187": "\u0391\u03a1\u03a7\u0395\u0399\u0391 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u03a9\u039d", "188": "\u0393\u0395\u039d\u0399\u039a\u0397 \u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u0395\u0399\u0391 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0399\u039a\u039f\u03a5 \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f\u03a5", "189": "\u03a0\u0395\u03a1\u0399\u03a0\u03a4\u0395\u03a1\u0391 \u0391\u039d\u0391\u03a0\u0397\u03a1\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5", "190": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0395\u0399\u03a3 \u0395\u039c\u03a0\u039f\u03a1\u03a9\u039d, \u0392\u0399\u039f\u03a4\u0395\u03a7\u039d\u03a9\u039d \u039a\u0391\u0399 \u039b\u039f\u0399\u03a0\u03a9\u039d \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u03a9\u039d", "191": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u039f\u0399 \u03a3\u03a4\u0391\u0398\u039c\u039f\u0399 \u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3 
\u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u039a\u0397\u03a3 \u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391\u03a3", "192": "\u0398\u0395\u0391\u03a4\u03a1\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "193": "\u039c\u0395 \u03a4\u0397 \u039d\u0395\u0391 \u0396\u0397\u039b\u0391\u039d\u0394\u0399\u0391", "194": "\u03a6\u039f\u03a1\u039f\u03a3 \u039a\u0391\u03a4\u0391\u039d\u0391\u039b\u03a9\u03a3\u0395\u03a9\u03a3 \u03a3\u0391\u039a\u03a7\u0391\u03a1\u0395\u03a9\u03a3", "195": "\u039d\u039f\u039c\u0391\u03a1\u03a7\u0399\u0391\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391", "196": "\u0391\u0393\u03a9\u0393\u0395\u03a3 \u039a\u0391\u039a\u039f\u0394\u0399\u039a\u0399\u0391\u03a3", "197": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0397\u03a3 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391\u03a3", "198": "\u0391\u03a4\u039f\u039c\u0391 \u0392\u0391\u03a1\u0399\u0391 \u039d\u039f\u0397\u03a4\u0399\u039a\u0391 \u039a\u0391\u0398\u03a5\u03a3\u03a4\u0395\u03a1\u0397\u039c\u0395\u039d\u0391", "199": "\u039c\u0395 \u03a4\u0397 \u03a3\u039f\u03a5\u0397\u0394\u0399\u0391", "200": "\u0391\u0395\u03a1\u039f\u039d\u0391\u03a5\u03a4\u0399\u039a\u0397 \u039c\u0395\u03a4\u0395\u03a9\u03a1\u039f\u039b\u039f\u0393\u0399\u0391", "201": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3 \u0393\u03a5\u039c\u039d\u0391\u03a3\u03a4\u0399\u039a\u0397\u03a3", "202": "\u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391 \u0394\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "203": "\u0391\u0393\u039f\u03a1\u0391\u03a0\u03a9\u039b\u0397\u03a3\u0399\u0395\u03a3 \u039a\u0391\u03a4\u039f\u03a7\u0397\u03a3", "204": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391 \u03a0\u0391\u03a1\u0399\u03a3\u0399\u03a9\u039d", "205": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 
\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391\u03a3 \u03a6\u03a5\u03a4\u03a9\u039d", "206": "\u039a\u0391\u03a4\u039f\u03a7\u03a5\u03a1\u03a9\u03a3\u0397 \u0398\u03a1\u0397\u03a3\u039a\u0395\u03a5\u03a4\u0399\u039a\u0397\u03a3 \u0395\u039b\u0395\u03a5\u0398\u0395\u03a1\u0399\u0391\u03a3", "207": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0395\u039e\u0395\u03a4\u0391\u03a3\u0397 \u039c\u0397 \u0399\u03a0\u03a4\u0391\u039c\u0395\u039d\u039f\u03a5 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5", "208": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0398\u03a5\u039c\u0391\u03a4\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5 1940", "209": "\u03a5\u0394\u03a1\u0391\u03a5\u039b\u0399\u039a\u0395\u03a3 \u0395\u0393\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0395\u0399\u03a3", "210": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u039f\u0399 \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u039f\u0399 - \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u039f\u0399", "211": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a1\u0399\u039d\u0395\u03a3 \u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3", "212": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u039a\u0391\u0399 \u039b\u039f\u0393\u0399\u03a3\u03a4\u0399\u039a\u039f", "213": "\u0395\u039e\u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u03a3\u039c\u039f\u03a3 \u039d\u0397\u03a3\u03a9\u039d", "214": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397 \u03a3\u03a4\u0395\u039b\u0395\u03a7\u03a9\u039d", "215": "\u03a9\u03a1\u0395\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d \u039a\u0391\u0399 \u0393\u03a1\u0391\u03a6\u0395\u0399\u03a9\u039d", "216": 
"\u0397\u039c\u0395\u03a1\u039f\u039b\u039f\u0393\u0399\u039f \u0393\u0395\u03a6\u03a5\u03a1\u0391\u03a3", "217": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a4\u0397\u03a3 \u03a3\u03a4\u0391\u03a6\u0399\u0394\u0391\u03a3", "218": "\u03a0\u0391\u039b\u0391\u0399\u039f\u0399 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "219": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a. \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u03a9\u039d \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 (\u03a4.\u0395.\u0391.\u03a0.\u039f.\u039a.\u0391.)", "220": "\u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u03a5\u0393\u0395\u0399\u0391\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u03a9\u039d \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u03a9\u039d", "221": "\u03a0\u039b\u0391\u039d\u039f\u0394\u0399\u039f\u0399 \u0399\u03a7\u0398\u03a5\u039f\u03a0\u03a9\u039b\u0395\u03a3", "222": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u039d\u039f\u039c\u039f\u0399 \u03a0\u0395\u03a1\u0399 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0.\u039d", "223": "\u03a5\u03a0\u039f\u03a7\u03a1\u0395\u03a9\u03a3\u0395\u0399\u03a3 \u0395\u03a6\u039f\u03a0\u039b\u0399\u03a3\u03a4\u03a9\u039d \u03a3\u0395 \u0391\u03a3\u0398\u0395\u039d\u0395\u0399\u0391 \u0397 \u0398\u0391\u039d\u0391\u03a4\u039f \u039d\u0391\u03a5\u03a4\u0399\u039a\u03a9\u039d", "224": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u039a\u0391\u03a4\u0391 \u03a4\u0397\u03a3 \u0391\u03a3\u0398\u0395\u039d\u0395\u0399\u0391\u03a3", "225": "\u0393\u0395\u039d\u0399\u039a\u0391 \u03a0\u0395\u03a1\u0399 \u03a3\u03a7\u0395\u0394\u0399\u03a9\u039d \u03a0\u039f\u039b\u0395\u03a9\u039d", "226": 
"\u0395\u039e\u0391\u0399\u03a1\u0395\u03a3\u0395\u0399\u03a3 \u0391\u03a0\u039f \u03a4\u0397\u039d \u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "227": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u039f \u039a\u03a4\u0397\u039c\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u039f", "228": "\u03a3\u03a5\u039d\u03a4\u0391\u0393\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "229": "\u03a0\u0391\u039d\u0391\u0393\u0399\u039f\u03a3 \u03a4\u0391\u03a6\u039f\u03a3", "230": "\u03a3\u03a5\u039d\u0395\u03a1\u0393\u0395\u0399\u0391 \u03a0. \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "231": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0399\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5", "232": "\u03a3\u03a5\u039d\u0398\u0395\u03a3\u0397 \u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u03a9\u039d", "233": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u0397\u03a3 \u0395\u03a3\u03a4\u0399\u0391\u03a3", "234": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u03a5\u0394\u03a1\u0391\u03a5\u039b\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "235": "\u0394\u0399\u039a\u0391\u0399\u03a9\u039c\u0391 \u03a4\u039f\u03a5 \u03a3\u03a5\u039d\u0395\u03a1\u03a7\u0395\u03a3\u0398\u0391\u0399", "236": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397 - \u0391\u03a0\u039f\u039a\u03a1\u0391\u03a4\u0399\u039a\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u03a7\u0391\u03a1\u0391\u039a\u03a4\u0397\u03a1\u0391", "237": "\u039b\u0391\u0399\u039a\u0397 \u039a\u0391\u03a4\u039f\u0399\u039a\u0399\u0391", "238": 
"\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0395\u03a1\u0394\u03a9\u039d", "239": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "240": "\u039c\u0395\u03a4\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397 \u0394\u0397\u039c\u039f\u0394\u0399\u0394\u0391\u03a3\u039a\u0391\u039b\u03a9\u039d", "241": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u03a9\u039d \u039a\u0391\u0399 \u0392\u039f\u03a5\u039b\u0395\u03a5\u03a4\u03a9\u039d", "242": "\u039f\u03a1\u0399\u039f \u0397\u039b\u0399\u039a\u0399\u0391\u03a3", "243": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0395\u03a3", "244": "\u0391\u03a0\u039f\u03a3\u03a4\u039f\u039b\u0391\u0399 \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u03a5", "245": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0391\u039a\u0399\u039d\u0397\u03a4\u0397\u03a3 \u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391\u03a3", "246": "\u03a7\u03a1\u039f\u039d\u039f\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 - \u0391\u0394\u0395\u0399\u0395\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391\u03a3", "247": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0395\u03a3", "248": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u039a\u0391\u0399 \u039b\u039f\u0393\u0399\u03a3\u03a4\u0399\u039a\u039f", "249": "\u0394\u0391\u03a3\u039c\u039f\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "250": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 
\u03a7\u03a1\u0397\u039c\u0391\u03a4\u0399\u03a3\u03a4\u03a9\u039d ,\u039c\u0395\u03a3\u0399\u03a4\u03a9\u039d,\u0391\u039d\u03a4\u0399\u039a\u03a1\u03a5\u03a3\u03a4\u03a9\u039d \u039a\u0391\u0399 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u03a7\u03a1\u0397\u039c\u0391\u03a4\u0399\u03a3\u03a4\u0397\u03a1\u0399\u039f\u03a5 \u0391\u0398\u0397\u039d\u03a9\u039d (\u03a4.\u0391.\u03a7.\u039c.\u0391.)", "251": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u039f\u03a1\u03a7\u0397\u03a3\u03a4\u0399\u039a\u0397\u03a3 \u03a4\u0395\u03a7\u039d\u0397\u03a3", "252": "\u0395\u0398\u039d\u0399\u039a\u0397 \u039b\u03a5\u03a1\u0399\u039a\u0397 \u03a3\u039a\u0397\u039d\u0397", "253": "\u0391\u0395\u03a1\u039f\u039d\u0391\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u03a4\u0397\u039b\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0395\u03a3", "254": "\u039a\u0395\u039d\u03a4\u03a1\u039f \u0392\u0399\u039f\u03a4\u0395\u03a7\u039d\u0399\u039a\u0397\u03a3 \u0391\u039d\u0391\u03a0\u03a4\u03a5\u039e\u0397\u03a3", "255": "\u0391\u03a1\u03a7\u0391\u0399\u039f\u039b\u039f\u0393\u0399\u039a\u039f \u039c\u039f\u03a5\u03a3\u0395\u0399\u039f", "256": "\u03a5\u03a0\u0395\u03a1\u03a9\u039a\u0395\u0391\u039d\u0395\u0399\u0391", "257": "\u0394\u0391\u03a3\u0397", "258": "\u0391\u03a3\u039a\u0397\u03a3\u0397 \u039a\u03a4\u0397\u039d\u0399\u0391\u03a4\u03a1\u0399\u039a\u039f\u03a5 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u039f\u03a3", "259": "\u039a\u03a4\u0397\u03a3\u0397 \u039a\u0391\u0399 \u0391\u03a0\u03a9\u039b\u0395\u0399\u0391", "260": "\u03a1\u0391\u0394\u0399\u039f\u03a4\u0397\u039b\u0395\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "261": "\u0391\u0395\u03a1\u039f\u039b\u0399\u039c\u0395\u039d\u0391\u03a3 \u0391\u0398\u0397\u039d\u03a9\u039d", "262": "\u03a0\u03a1\u03a9\u03a4\u039f\u0392\u0391\u0398\u039c\u0399\u0391 
\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397", "263": "\u03a3\u03a4\u0395\u039b\u0395\u03a7\u039f\u03a3 \u0395\u03a6\u0395\u0394\u03a1\u03a9\u039d \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "264": "\u03a0\u03a4\u03a9\u03a7\u0395\u03a5\u03a3\u0397 \u039a\u0391\u0399 \u03a3\u03a5\u039c\u0392\u0399\u0392\u0391\u03a3\u039c\u039f\u03a3", "265": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u039f\u03a3 \u0393\u0391\u039c\u039f\u03a3", "266": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0397 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0397 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3", "267": "\u03a0\u039b\u039f\u0399\u0391 \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "268": "\u0399\u0391\u03a4\u03a1\u0399\u039a\u0395\u03a3 \u0391\u039c\u039f\u0399\u0392\u0395\u03a3", "269": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u039f\u03a3 \u0395\u03a1\u03a5\u0398\u03a1\u039f\u03a3 \u03a3\u03a4\u0391\u03a5\u03a1\u039f\u03a3", "270": "\u0391\u039d\u03a9\u039c\u0391\u039b\u0395\u03a3 \u039a\u0391\u03a4\u0391\u0398\u0395\u03a3\u0395\u0399\u03a3 \u03a3\u0395 \u03a7\u03a1\u03a5\u03a3\u039f", "271": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u03a4\u0399\u039c\u0397\u03a3 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0.\u039d", "272": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u0391\u03a1\u0394\u0395\u03a5\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "273": "\u039a\u03a5\u0392\u0395\u03a1\u039d\u0397\u03a4\u0399\u039a\u039f\u03a3 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u039f\u03a3", "274": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u03a3\u03a5\u0393\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0391\u039a\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "275": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u039a\u0391\u0399 
\u0391\u03a1\u03a9\u0393\u0397\u03a3", "276": "\u0394\u0391\u03a3\u0399\u039a\u0395\u03a3 \u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0395\u03a3", "277": "\u039c\u0395 \u03a4\u0397 \u0394\u0397\u039c\u039f\u039a\u03a1\u0391\u03a4\u0399\u0391 \u03a4\u039f\u03a5 \u039a\u0395\u039c\u03a0\u0395\u039a", "278": "\u0395\u03a0\u0391\u039d\u0395\u039e\u0391\u0393\u039f\u039c\u0395\u039d\u0391 \u039c\u0395 \u0395\u0393\u0393\u03a5\u0397\u03a3\u0397", "279": "\u0394\u0399\u0391\u039d\u039f\u039c\u0397 \u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u039a\u0397\u03a3 \u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391\u03a3", "280": "\u0391\u03a1\u03a3\u0397 \u03a3\u03a5\u0393\u039a\u03a1\u039f\u03a5\u03a3\u0395\u03a9\u03a3 \u039a\u0391\u0398\u0397\u039a\u039f\u039d\u03a4\u03a9\u039d", "281": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0399\u039a\u0391 \u03a0\u039b\u039f\u0399\u0391", "282": "\u039a\u0395\u039d\u03a4\u03a1\u039f \u039c\u0395\u03a4\u0391\u03a6\u03a1\u0391\u03a3\u0397\u03a3", "283": "\u0395\u0399\u03a3\u03a6\u039f\u03a1\u0395\u03a3 \u039a\u0391\u0399 \u039d\u0391\u03a5\u039b\u03a9\u03a3\u0395\u0399\u03a3", "284": "\u039c\u0395\u03a4\u0395\u0393\u0393\u03a1\u0391\u03a6\u0395\u03a3 \u03a6\u039f\u0399\u03a4\u0397\u03a4\u03a9\u039d \u0391\u039d\u03a9\u03a4. \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0399\u039a\u03a9\u039d \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u03a9\u039d", "285": "\u03a4\u039c\u0397\u039c\u0391\u03a4\u0391 \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0397\u03a3 \u03a6\u03a5\u03a3\u0399\u039a\u0397\u03a3 \u0391\u0393\u03a9\u0393\u0397\u03a3 - \u0391\u0398\u039b\u0397\u03a4\u0399\u03a3\u039c\u039f\u03a5", "286": "\u03a8\u03a5\u03a7\u0399\u0391\u03a4\u03a1\u0395\u0399\u0391", "287": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0395\u03a6\u0391\u039b\u0391\u0399\u039f\u03a5 \u0391\u039d\u03a9\u039d. 
\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u03a9\u039d", "288": "\u03a4\u03a5\u03a0\u039f\u0399 \u03a3\u03a5\u039c\u0392\u039f\u039b\u0391\u0399\u03a9\u039d", "289": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0395\u03a9\u03a3", "290": "\u039c\u039f\u03a5\u03a3\u0395\u0399\u039f \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397\u03a3 \u039b\u0391\u0399\u039a\u0397\u03a3 \u03a4\u0395\u03a7\u039d\u0397\u03a3", "291": "\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f \u03a0\u0395\u039b\u039f\u03a0\u039f\u039d\u039d\u0397\u03a3\u039f\u03a5", "292": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u0397\u03a3 \u039a\u0391\u03a4\u039f\u0399\u039a\u0399\u0391\u03a3", "293": "\u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 \u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u03a9\u039d \u03a3\u0395 \u039f\u0399\u039a\u039f\u0394\u039f\u039c\u0395\u03a3", "294": "\u03a3\u03a4\u0395\u0393\u0391\u039d\u0397 \u03a5\u03a0\u039f\u0394\u0399\u0391\u0399\u03a1\u0395\u03a3\u0397 \u03a0\u039b\u039f\u0399\u03a9\u039d", "295": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u03a0\u03a1\u03a9\u03a4\u0395\u03a5\u039f\u03a5\u03a3\u0397\u03a3", "296": "\u0394\u0399\u0394\u0391\u039a\u03a4\u039f\u03a1\u0399\u039a\u0395\u03a3 - \u039c\u0395\u03a4\u0391\u03a0\u03a4\u03a5\u03a7\u0399\u0391\u039a\u0395\u03a3 \u03a3\u03a0\u039f\u03a5\u0394\u0395\u03a3 \u0395\u0398\u039d\u0399\u039a\u039f\u03a5 \u039c\u0395\u03a4\u03a3\u039f\u0392\u0399\u039f\u03a5", "297": "\u0395\u0399\u03a3\u03a6\u039f\u03a1\u0391 \u039a\u0391\u03a4\u039f\u03a7\u03a9\u039d \u0395\u0399\u0394\u03a9\u039d \u03a0\u03a1\u03a9\u03a4\u0397\u03a3 \u0391\u039d\u0391\u0393\u039a\u0397\u03a3", "298": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "299": 
"\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u039b\u0399\u039c\u0395\u039d\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "300": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0395\u039b.\u0391\u03a3", "301": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0391 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0395\u0399\u0391 (\u0395\u039b.\u03a4\u0391)", "302": "\u039c\u0399\u03a3\u0398\u039f\u0399 \u039a\u0391\u0399 \u0395\u03a0\u0399\u0394\u039f\u039c\u0391\u03a4\u0391 \u03a0. \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "303": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391", "304": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u03a5\u03a0\u0395\u03a1 \u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d", "305": "\u0391\u03a0\u039f\u0392\u0391\u03a1\u039f", "306": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u039a\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u03a9\u039d \u039a\u0391\u0399 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "307": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a0\u0395\u03a1\u0399 \u0394\u0399\u039a\u0397\u0393\u039f\u03a1\u03a9\u039d", "308": "\u0399\u0395\u03a1\u0391\u03a1\u03a7\u0399\u0391 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u0392\u0399\u0392\u0391\u03a3\u039c\u039f\u0399", "309": "\u0399\u03a3\u03a1\u0391\u0397\u039b\u0399\u03a4\u0395\u03a3", "310": "\u03a3\u03a9\u039c\u0391 \u039a\u03a4\u0397\u039d\u0399\u0391\u03a4\u03a1\u0399\u039a\u039f", "311": "\u039d\u039f\u03a1\u0392\u0397\u0393\u0399\u0391 - \u039d\u0395\u0391 \u0396\u0397\u039b\u0391\u039d\u0394\u0399\u0391 \u2013 \u039d\u0399\u0393\u0397\u03a1\u0399\u0391 \u039a.\u039b\u03a0", "312": 
"\u0395\u039d\u03a4\u03a5\u03a0\u0391 \u039a\u0391\u0399 \u0392\u0399\u0392\u039b\u0399\u039f\u0398\u0397\u039a\u0395\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "313": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a4\u03a5\u03a0\u039f\u03a5 \u039a\u0391\u0399 \u039c\u0395\u03a3\u03a9\u039d \u039c\u0391\u0396\u0399\u039a\u0397\u03a3 \u0395\u039d\u0397\u039c\u0395\u03a1\u03a9\u03a3\u0397\u03a3", "314": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u03a0\u0395\u0399\u0398\u0391\u03a1\u03a7\u0399\u039a\u0395\u03a3 \u03a0\u039f\u0399\u039d\u0395\u03a3", "315": "\u039c\u0399\u03a3\u0398\u03a9\u03a3\u0395\u0399\u03a3 \u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u03a9\u039d \u0391\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "316": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399", "317": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397 \u03a0\u0399\u03a3\u03a4\u0397", "318": "\u039b\u0391\u0399\u039a\u0395\u03a3 \u0391\u0393\u039f\u03a1\u0395\u03a3-\u03a4\u0391\u039c\u0395\u0399\u039f \u039b\u0391\u0399\u039a\u03a9\u039d \u0391\u0393\u039f\u03a1\u03a9\u039d", "319": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u0395\u0399\u0398\u0391\u03a1\u03a7\u0399\u0391\u03a3 \u03a7\u03a9\u03a1\u039f\u03a6\u03a5\u039b\u0391\u039a\u0397\u03a3", "320": "\u0391\u0394\u0399\u039a\u0397\u039c\u0391\u03a4\u0391 \u039a\u0391\u03a4\u0391 \u03a4\u0397\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u0391\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "321": "\u0395\u039d\u039f\u0399\u039a\u0399\u0391\u03a3\u0397 \u03a6\u039f\u03a1\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0398\u0395\u0391\u039c\u0391\u03a4\u03a9\u039d", "322": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0397 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0397\u03a3 \u039a\u0391\u0399 
\u0399\u0391\u03a4\u03a1\u0399\u039a\u0397\u03a3 \u0391\u039d\u03a4\u0399\u039b\u0397\u03a8\u0395\u03a9\u03a3", "323": "\u0395\u03a0\u0399\u0392\u0391\u03a4\u0397\u0393\u0391 \u0391\u0395\u03a1\u039f\u03a3\u03a4\u03a1\u03a9\u039c\u039d\u0391 \u039f\u03a7\u0397\u039c\u0391\u03a4\u0391", "324": "\u0395\u03a6\u0395\u0394\u03a1\u039f\u0399", "325": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u039b\u0395\u03a3\u03a7\u0395\u03a3", "326": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a6\u03a5\u039b\u0391\u039a\u03a9\u039d", "327": "\u0391\u039d\u0391\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a4\u0399\u039c\u03a9\u039d", "328": "\u039c\u0391\u039b\u0391\u039a\u0399\u0391 \u039a\u0391\u0399 \u039c\u0391\u039b\u0391\u039a\u039f\u03a3\u03a4\u03a1\u0391\u039a\u0391", "329": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "330": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u0391", "331": "\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "332": "\u039a\u03a9\u0394\u0399\u039a\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397 \u0391\u0393\u039f\u03a1\u0391\u039d\u039f\u039c\u0399\u039a\u03a9\u039d \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u03a9\u039d", "333": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397 \u03a3\u03a4\u0397\u039d \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u0397", "334": "\u0394\u0399\u0394\u0391\u039a\u03a4\u0399\u039a\u0391 \u0392\u0399\u0392\u039b\u0399\u0391", "335": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0399\u039f\u0394\u039f\u03a4\u0399\u039a\u0391 \u039a\u0391\u0399 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u0398\u0395\u039c\u0391\u03a4\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u039d.\u03a0.\u0394.\u0394", "336": 
"\u0395\u03a0\u0399\u0394\u039f\u039c\u0391 \u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u03a9\u039d \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0395\u039e\u0391\u03a6\u0391\u039d\u0399\u03a3\u0398\u0395\u039d\u03a4\u03a9\u039d \u039a\u0391\u0399 \u0391\u0399\u03a7\u039c\u0391\u039b\u03a9\u03a4\u03a9\u039d", "337": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "338": "\u039a\u0395\u039d\u03a4\u03a1\u039f \u0394\u0399\u03a0\u039b\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a3\u03a0\u039f\u03a5\u0394\u03a9\u039d", "339": "\u0393\u0395\u039d. \u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u03a4\u03a5\u03a0\u039f\u03a5 \u039a\u0391\u0399 \u03a0\u039b\u0397\u03a1\u039f\u03a6\u039f\u03a1\u0399\u03a9\u039d", "340": "\u0391\u03a1\u03a7\u0395\u0399\u0391 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u03a9\u039d \u0391\u03a1\u03a7\u03a9\u039d", "341": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u03a4\u0399\u039c\u0395\u03a3 \u039a\u0391\u03a5\u03a3\u0399\u039c\u03a9\u039d", "342": "\u03a3\u03a4\u0395\u0393\u0397 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u03a9\u039d", "343": "\u0393\u0395\u039d\u0399\u039a\u0391 \u03a0\u0395\u03a1\u0399 \u03a3\u03a5\u039c\u0392\u039f\u039b\u0391\u0399\u039f\u0393\u03a1\u0391\u03a6\u03a9\u039d", "344": "\u0392\u039f\u03a5\u039b\u0397", "345": "\u0395\u03a0\u0399\u039b\u039f\u0393\u0397 & \u0391\u039e\u0399\u039f\u039b\u039f\u0393\u0397\u03a3\u0397 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u039a\u039f\u03a5 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u039b.\u0391\u03a3", "346": "\u03a7\u039f\u0399\u03a1\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391", "347": "\u03a6\u039f\u03a1\u039f\u03a3 \u039a\u0391\u03a4\u0391\u039d\u0391\u039b\u03a9\u03a3\u0395\u03a9\u03a3 \u03a0\u0395\u03a4\u03a1\u0395\u039b\u0391\u0399\u039f\u0395\u0399\u0394\u03a9\u039d", "348": 
"ΕΠΙΒΟΛΗ ΤΕΛΩΝΙΑΚΩΝ ΔΑΣΜΩΝ", "349": "ΑΕΡΟΠΟΡΙΚΗ ΣΤΡΑΤΟΛΟΓΙΑ", "350": "ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΑ ΝΑΡΚΩΤΙΚΑ", "351": "ΔΙΑΦΟΡΕΣ ΤΡΑΠΕΖΕΣ", "352": "ΟΙΝΟΛΟΓΟΙ", "353": "ΤΕΛΩΝΟΦΥΛΑΚΗ",
"354": "ΤΑΜΕΙΟ ΕΘΝΙΚΗΣ ΑΜΥΝΑΣ (T.EΘ.A.) - ΕΘΝΙΚΗ ΕΠΙΤΡΟΠΗ ΕΞΟΠΛΙΣΜΟΥ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ (Ε.Ε.Ε.Ε.Δ.)", "355": "ΕΚΤΕΛΕΣΗ ΤΗΣ ΠΟΙΝΗΣ", "356": "ΙΣΟΛΟΓΙΣΜΟΙ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ", "357": "ΑΡΧΙΤΕΚΤΟΝΙΚΟΙ ΔΙΑΓΩΝΙΣΜΟΙ", "358": "ΚΑΤΑΡΓΗΣΗ ΦΥΛΕΤΙΚΩΝ ΔΙΑΚΡΙΣΕΩΝ", "359": "ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΔΙΚΑΙΩΜΑΤΑ ΑΠΟΦΟΙΤΩΝ",
"360": "ΜΟΝΑΣΤΗΡΙΑΚΗ ΠΕΡΙΟΥΣΙΑ ΣΑΜΟΥ", "361": "ΣΥΝΤΑΞΗ ΔΗΜΟΤΙΚΩΝ ΚΑΙ ΚΟΙΝΟΤΙΚΩΝ ΥΠΑΛΛΗΛΩΝ", "362": "ΟΙΚΟΝΟΜΙΚΕΣ ΕΦΟΡΙΕΣ", "363": "ΦΡΟΝΤΙΣΤΗΡΙΑ ΕΦΑΡΜΟΓΩΝ", "364": "ΝΟΜΑΡΧΙΕΣ ΑΤΤΙΚΗΣ", "365": "ΦΥΜΑΤΙΩΣΗ", "366": "ΕΛΕΓΧΟΣ ΑΝΑΤΙΜΗΣΕΩΝ", "367": "ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΤΗΣ ΝΑΥΤΙΛΙΑΣ", "368": "ΚΩΦΑΛΑΛΟΙ", "369": "ΙΑΤΡΙΚΗ ΔΕΟΝΤΟΛΟΓΙΑ", "370": "ΕΞΟΔΑ ΔΗΜΟΣΙΑΣ ΑΣΦΑΛΕΙΑΣ", "371": "ΜΕ ΤΗΝ ΑΡΓΕΝΤΙΝΗ", "372": "ΚΛΑΔΟΣ ΥΓΕΙΟΝΟΜΙΚΗΣ ΠΕΡΙΘΑΛΨΗΣ Τ.Α.Ε", "373": "ΥΠΗΡΕΣΙΑ ΕΚΚΑΘΑΡΙΣΕΩΣ ΝΑΡΚΟΠΕΔΙΩΝ",
"374": "ΤΑΜΕΙΟ ΑΡΩΓΗΣ ΥΠΑΛΛΗΛΩΝ ΑΣΤΥΝΟΜΙΑΣ ΠΟΛΕΩΝ Τ.Α.Υ.Α.Π", "375": "ΠΡΟΣΤΑΣΙΑ ΔΗΜΟΣΙΩΝ ΚΤΗΜΑΤΩΝ", "376": "ΒΙΒΛΙΑ ΕΝΔΙΚΩΝ ΜΕΣΩΝ", "377": "ΕΛΛΗΝΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΜΙΚΡΟΜΕΣΑΙΩΝ ΜΕΤΑΠΟΙΗΤΙΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ ΚΑΙ ΧΕΙΡΟΤΕΧΝΙΑΣ", "378": "ΔΗΜΟΣΙΟΓΡΑΦΙΚΟΣ ΧΑΡΤΗΣ", "379": "ΦΟΡΟΣ ΓΑΜΙΚΩΝ ΣΥΜΦΩΝΩΝ ΙΣΡΑΗΛΙΤΩΝ", "380": "ΥΠΟΤΡΟΦΙΑΙ ΚΤΗΝΙΑΤΡΙΚΗΣ", "381": "ΑΠΟΔΟΧΕΣ ΠΡΟΣΩΠΙΚΟΥ ΙΔΙΩΤΙΚΟΥ ΔΙΚΑΙΟΥ", "382": "ΕΠΙΒΑΤΗΓΑ ΑΚΤΟΠΛΟΙΚΑ ΠΛΟΙΑ", "383": "ΠΑΛΑΙΟΙ ΔΗΜΟΣΙΟΥΠΑΛΛΗΛΙΚΟΙ ΝΟΜΟΙ",
"384": "ΚΩΔΙΚΑΣ ΠΕΡΙ ΚΛΗΡΟΔΟΤΗΜΑΤΩΝ", "385": "ΟΙΚΟΝΟΜΙΚΗ ΕΠΙΘΕΩΡΗΣΗ", "386": "ΚΤΗΜΑΤΟΓΡΑΦΗΣΗ ΔΑΣΩΝ", "387": "ΟΡΓΑΝΙΚΕΣ ΘΕΣΕΙΣ", "388": "ΠΕΡΙΟΡΙΣΜΟΣ ΧΡΗΣΗΣ ΟΡΙΣΜΕΝΩΝ ΣΥΜΒΑΤΙΚΩΝ ΟΠΛΩΝ", "389": "ΑΓΙΟΝ ΟΡΟΣ", "390": "ΚΥΡΩΣΕΙΣ ΦΟΡΟΛΟΓΙΚΩΝ ΠΑΡΑΒΑΣΕΩΝ", "391": "ΚΑΤΑΣΤΑΣΗ ΠΡΟΣΩΠΙΚΟΥ Ο.Γ.Α", "392": "ΕΠΑΝΑΠΑΤΡΙΣΜΟΣ ΚΕΦΑΛΑΙΩΝ", "393": "ΜΑΘΗΤΕΣ ΤΕΧΝΙΤΕΣ", "394": "ΔΙΑΒΙΒΑΣΕΙΣ", "395": "ΕΜΜΙΣΘΟΙ ΚΑΙ ΠΟΙΝΙΚΟΙ ΔΙΚ. ΕΠΙΜΕΛΗΤΕΣ",
"396": "ΣΥΜΒΑΣΕΙΣ ΔΙΚΑΣΤΙΚΗΣ ΣΥΝΔΡΟΜΗΣ", "397": "ΔΗΜΟΣΙΑ ΕΠΙΧΕΙΡΗΣΗ ΠΕΤΡΕΛΑΙΟΥ", "398": "ΕΛΛΗΝΙΚΗ ΤΡΑΠΕΖΑ ΒΙΟΜΗΧΑΝΙΚΗΣ ΑΝΑΠΤΥΞΕΩΣ ΑΝΩΝΥΜΟΣ ΕΤΑΙΡΕΙΑ (Ε.Τ.Β.Α. Α.Ε.)", "399": "ΕΙΔΙΚΟΤΗΤΕΣ ΚΑΙ ΤΡΟΠΟΣ ΕΙΣΟΔΟΥ ΣΤΕΛΕΧΩΝ", "400": "ΠΡΟΣΤΑΣΙΑ ΕΡΓΑΖΟΜΕΝΩΝ ΣΤΗΝ ΗΜΕΔΑΠΗ - ΣΩΜΑ ΕΠΙΘΕΩΡΗΣΗΣ ΕΡΓΑΣΙΑΣ", "401": "ΙΝΣΤΙΤΟΥΤΟ ΩΚΕΑΝΟΓΡΑΦΙΚΩΝ ΚΑΙ ΑΛΙΕΥΤΙΚΩΝ ΕΡΕΥΝΩΝ", "402": "ΕΛΕΓΧΟΣ ΑΠΟΛΥΣΕΩΝ ΜΙΣΘΩΤΩΝ", "403": "ΠΑΝΕΛΛΗΝΙΑ ΕΚΘΕΣΗ ΛΑΜΙΑΣ", "404": "ΚΥΡΙΑΚΗ ΑΡΓΙΑ ΚΑΙ ΑΛΛΕΣ ΥΠΟΧΡΕΩΤΙΚΕΣ ΑΡΓΙΕΣ",
"405": "ΚΛΑΔΟΣ ΥΓΕΙΑΣ Ο.Α.Ε.Ε", "406": "ΟΡΚΟΣ ΣΤΡΑΤΙΩΤΙΚΩΝ", "407": "ΕΜΠΟΡΙΚΑ ΒΙΒΛΙΑ", "408": "ΥΓΕΙΟΝΟΜΙΚΕΣ ΕΠΙΤΡΟΠΕΣ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ", "409": "ΑΓΙΟΣ ΒΙΚΕΝΤΙΟΣ-ΓΡΕΝΑΔΙΝΟΙ, ΑΓΙΟΣ ΜΑΡΙΝΟΣ Κ.ΛΠ", "410": "ΑΠΟΖΗΜΙΩΣΗ ΔΙΑΤΕΛΕΣΑΝΤΩΝ ΠΡΩΘΥΠΟΥΡΓΩΝ", "411": "ΑΣΦΑΛΙΣΗ ΛΟΓΟΤΕΧΝΩΝ ΚΑΙ ΚΑΛΛΙΤΕΧΝΩΝ", "412": "ΠΕΙΘΑΡΧΙΚΑ ΣΥΜΒΟΥΛΙΑ", "413": "ΕΤΑΙΡΙΕΣ ΧΡΗΜΑΤΟΔΟΤΙΚΗΣ ΜΙΣΘΩΣΗΣ", "414": "ΚΟΙΝΩΝΙΚΗ ΥΠΗΡΕΣΙΑ ΦΥΛΑΚΩΝ", "415": "ΚΑΝΟΝΙΣΜΟΣ ΥΠΗΡΕΣΙΩΝ ΑΓΡΟΦΥΛΑΚΗΣ",
"416": "ΑΣΦΑΛΙΣΗ ΣΤΟ ΙΚΑ", "417": "ΕΜΠΟΡΙΚΟΙ ΣΥΜΒΟΥΛΟΙ ΚΑΙ ΑΚΟΛΟΥΘΟΙ", "418": "ΕΠΙΚΟΥΡΟΙ ΠΑΡΑΤΗΡΗΤΕΣ", "419": "ΥΠΟΤΡΟΦΙΕΣ", "420": "ΚΕΝΤΡΟ ΠΡΟΓΡΑΜΜΑΤΙΣΜΟΥ", "421": "ΠΡΩΤΕΣ ΥΛΕΣ ΣΟΚΟΛΑΤΟΠΟΙΙΑΣ", "422": "ΕΠΙΤΡΟΠΗ ΚΗΠΩΝ ΚΑΙ ΔΕΝΔΡΟΣΤΟΙΧΙΩΝ", "423": "ΚΙΝΗΤΟ ΕΠΙΣΗΜΑ", "424": "ΣΥΝΔΙΚΑΛΙΣΜΟΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ", "425": "ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ Π.Ν", "426": "ΟΡΓΑΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΤΑΜΕΙΟΥ ΠΑΡΑΚΑΤΑΘΗΚΩΝ ΚΑΙ ΔΑΝΕΙΩΝ", "427": "ΑΔΕΙΕΣ ΗΝΙΟΧΙΑΣ", "428": "ΥΠΗΡΕΣΙΑ ΠΡΟΓΡΑΜΜΑΤΙΣΜΟΥ ΚΑΙ ΜΕΛΕΤΩΝ",
"429": "ΚΡΑΤΙΚΑ ΑΥΤΟΚΙΝΗΤΑ", "430": "ΑΤΟΜΙΚΗ ΚΑΤΑΓΓΕΛΙΑ ΣΥΜΒΑΣΕΩΣ ΕΡΓΑΣΙΑΣ", "431": "ΠΟΛΥΤΕΚΝΟΙ", "432": "ΙΣΤΟΡΙΚΟ ΑΡΧΕΙΟ ΜΑΚΕΔΟΝΙΑΣ", "433": "ΑΣΦΑΛΙΣΗ ΑΥΤΟΚΙΝΗΤΙΚΩΝ ΑΤΥΧΗΜΑΤΩΝ", "434": "ΔΑΝΕΙΑ ΕΣΩΤΕΡΙΚΑ", "435": "ΕΚΚΛΗΣΙΑ ΚΡΗΤΗΣ", "436": "ΦΟΡΟΛΟΓΙΑ ΣΤΑΦΙΔΑΣ", "437": "ΕΚΠΑΙΔΕΥΤΙΚΕΣ ΑΔΕΙΕΣ", "438": "ΑΕΡΟΔΙΚΕΙΑ", "439": "ΕΠΙΔΟΜΑ ΑΣΘΕΝΕΙΑΣ", "440": "ΘΕΣΕΙΣ ΣΥΜΒΟΛΑΙΟΓΡΑΦΩΝ", "441": "ΑΓΟΡΑ ΣΥΝΑΛΛΑΓΜΑΤΟΣ", "442": "ΝΟΜΙΚΟ ΣΥΜΒΟΥΛΙΟ ΤΟΥ ΚΡΑΤΟΥΣ (Ν.Σ.Κ.)", "443": "ΦΟΡΟΛΟΓΙΑ ΜΕΤΑΒΙΒΑΣΗΣ",
"444": "ΣΥΜΒΟΥΛΙΑ - ΕΠΙΤΡΟΠΕΣ - ΙΝΣΤΙΤΟΥΤΑ ΕΡΓΑΣΙΑΣ ΚΑΙ ΚΟΙΝΩΝΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ", "445": "ΤΕΛΗ ΕΙΣΙΤΗΡΙΩΝ ΚΑΙ ΚΟΜΙΣΤΡΩΝ", "446": "ΟΙΚΟΝΟΜΙΚΗ ΥΠΗΡΕΣΙΑ ΥΓΕΙΟΝΟΜΙΚΟΥ ΣΩΜΑΤΟΣ", "447": "ΠΡΟΣΩΠΙΚΟ ΣΩΜΑΤΩΝ ΑΣΦΑΛΕΙΑΣ ΜΕ ΣΧΕΣΗ ΙΔΙΩΤΙΚΟΥ ΔΙΚΑΙΟΥ", "448": "ΑΡΤΕΡΓΑΤΕΣ", "449": "ΕΥΚΟΛΙΕΣ ΣΕ ΦΟΙΤΗΤΕΣ", "450": "ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΚΟΙΝΗΣ ΧΟΡΤΟΝΟΜΗΣ ΚΑΙ ΣΥΝΙΔΙΟΚΤΗΣΙΑΣ", "451": "ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΠΕΡΙΦΕΡΕΙΑΚΟΥ ΓΕΝΙΚΟΥ ΝΟΣΟΚΟΜΕΙΟΥ Ο ΕΥΑΓΓΕΛΙΣΜΟΣ",
"452": "ΠΡΟΣΚΟΠΙΣΜΟΣ", "453": "ΣΥΜΒΟΥΛΙΑ ΕΠΑΓΓΕΛΜΑΤΙΚΗΣ ΚΑΙ ΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΕΩΣ", "454": "ΚΡΑΤΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΜΗΧΑΝΗΜΑΤΩΝ ΔΗΜΟΣΙΩΝ ΕΡΓΩΝ", "455": "ΑΤΟΜΙΚΑ ΕΓΓΡΑΦΑ ΑΝΘΥΠΑΣΠΙΣΤΩΝ-ΥΠΑΞΙΩΜΑΤΙΚΩΝ", "456": "ΔΙΑΦΟΡΕΣ ΣΧΟΛΕΣ", "457": "ΒΙΒΛΙΑ ΔΗΜΟΣΙΕΥΣΕΩΣ ΔΙΑΘΗΚΩΝ", "458": "ΚΑΝΟΝΙΣΜΟΙ ΠΡΟΣΩΠΙΚΟΥ ΣΥΓΚΟΙΝΩΝΙΑΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ", "459": "ΤΟΥΡΙΣΤΙΚΟΙ ΤΟΠΟΙ", "460": "ΙΝΣΤΙΤΟΥΤΟ ΞΕΝΩΝ ΓΛΩΣΣΩΝ ΚΑΙ ΦΙΛΟΛΟΓΙΩΝ", "461": "ΚΑΠΝΟΠΩΛΕΣ",
"462": "ΑΓΩΓΕΣ ΓΙΑΤΡΩΝ", "463": "ΣΥΣΤΑΣΗ ΚΑΙ ΑΠΟΔΟΣΗ ΠΑΡΑΚΑΤΑΘΗΚΩΝ ΑΠΟ Τ.Π. ΚΑΙ Δ", "464": "ΑΔΙΚΗΜΑΤΑ ΔΙΑΠΡΑΤΤΟΜΕΝΑ ΣΤΑ ΚΡΑΤΗ-ΜΕΛΗ", "465": "ΑΝΑΣΤΟΛΕΣ ΤΟΥ ΣΥΝΤΑΓΜΑΤΟΣ - ΚΑΤΑΣΤΑΣΗ ΠΟΛΙΟΡΚΙΑΣ", "466": "ΣΥΜΒΑΣΕΙΣ ΠΑΡΟΧΗΣ ΑΣΦΑΛΕΙΑΣ (ΕΝΕΧΥΡΟ, ΥΠΟΘΗΚΗ Κ.ΛΠ.)", "467": "ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣΝΑΥΤΙΚΩΝ ΠΡΑΚΤΟΡΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ (Τ.Α.Ν.Π.Υ.)", "468": "ΑΝΩΤΑΤΟ ΣΥΓΚΟΙΝΩΝΙΑΚΟ ΣΥΜΒΟΥΛΙΟ", "469": "ΠΡΕΒΕΝΤΟΡΙΑ", "470": "ΑΝΑΒΟΛΗ ΣΤΡΑΤΕΥΣΕΩΣ", "471": "ΕΙΔΙΚΑ ΛΗΞΙΑΡΧΕΙΑ",
"472": "ΓΕΩΤΕΧΝΙΚΟ ΕΠΙΜΕΛΗΤΗΡΙΟ", "473": "ΥΓΕΙΟΝΟΜΙΚΑ ΔΙΚΑΙΩΜΑΤΑ", "474": "ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΗΣ ΕΚΠΑΙΔΕΥΤΙΚΩΝ", "475": "ΚΑΖΑΚΣΤΑΝ – ΚΑΜΕΡΟΥΝ – ΚΑΝΑΔΑΣ Κ.ΛΠ", "476": "ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΑΠΟ ΤΟΝ ΑΜΑΧΟ ΠΛΗΘΥΣΜΟ", "477": "ΦΙΛΟΣΟΦΙΚΗ ΣΧΟΛΗ", "478": "ΕΚΤΕΛΩΝΙΣΜΟΣ ΤΑΧΥΔΡΟΜΙΚΩΝ ΔΕΜΑΤΩΝ", "479": "ΥΔΡΕΥΣΗ ΘΕΣΣΑΛΟΝΙΚΗΣ", "480": "ΣΥΜΦΩΝΙΕΣ ΠΕΡΙ ΠΛΩΤΩΝ ΟΔΩΝ", "481": "ΑΝΑΚΗΡΥΞΗ ΤΗΣ ΑΝΕΞΑΡΤΗΣΙΑΣ", "482": "ΕΠΙΤΡΟΠΗ ΟΛΥΜΠΙΑΚΩΝ ΑΓΩΝΩΝ", "483": "ΟΙΝΟΠΑΡΑΓΩΓΗ ΑΤΤΙΚΟΒΟΙΩΤΙΑΣ",
"484": "ΕΚΠΤΩΣΕΙΣ ΥΠΕΡ ΕΞΑΓΩΓΕΩΝ", "485": "ΦΟΡΟΛΟΓΙΑ ΚΛΗΡΟΝΟΜΙΩΝ, ΔΩΡΕΩΝ, ΓΟΝΙΚΩΝ ΠΑΡΟΧΩΝ", "486": "ΟΡΦΑΝΟΤΡΟΦΕΙΑ ΚΑΙ ΟΙΚΟΤΡΟΦΕΙΑ", "487": "ΜΕ ΤΗΝ ΟΥΡΑΓΟΥΑΗ", "488": "ΜΕ ΤΗΝ ΑΥΣΤΡΙΑΚΗ", "489": "ΔΙΑΦΟΡΟΙ ΦΟΡΟΙ ΚΑΤΑΝΑΛΩΣΕΩΣ", "490": "ΔΙΕΥΘΥΝΣΗ ΕΦΕΔΡΩΝ - ΠΟΛΕΜΙΣΤΩΝ - ΑΓΩΝΙΣΤΩΝ", "491": "ΑΓΡΟΤΙΚΕΣ ΟΙΚΟΚΥΡΙΚΕΣ ΣΧΟΛΕΣ", "492": "ΞΥΛΕΙΑ", "493": "ΒΙΒΛΙΑΡΙΑ ΥΓΕΙΑΣ ΕΡΓΑΤΩΝ", "494": "ΣΧΟΛΗ ΑΞΙΩΜΑΤΙΚΩΝ ΣΤΡΑΤΙΩΤΙΚΩΝ ΥΠΗΡΕΣΙΩΝ", "495": "ΝΟΜΑΡΧΙΑΚΕΣ ΚΑΙ ΔΗΜΟΤΙΚΕΣ ΕΚΛΟΓΕΣ",
"496": "ΕΓΓΥΗΣΕΙΣ ΚΑΙ ΔΑΝΕΙΑ ΤΟΥ ΔΗΜΟΣΙΟΥ", "497": "ΥΠΟΥΡΓΕΙΟ ΑΝΑΠΤΥΞΗΣ", "498": "ΤΑΚΤΙΚΑ ΔΙΟΙΚΗΤΙΚΑ ΔΙΚΑΣΤΗΡΙΑ - ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ", "499": "ΤΡΟΦΟΔΟΣΙΑ ΠΛΗΡΩΜΑΤΩΝ ΠΛΟΙΩΝ", "500": "ΔΙΑΦΟΡΟΙ ΛΙΜΕΝΕΣ ΚΑΙ ΛΙΜΕΝΙΚΑ ΤΑΜΕΙΑ", "501": "ΗΛΕΚΤΡΙΚΕΣ ΕΚΜΕΤΑΛΛΕΥΣΕΙΣ", "502": "ΠΡΟΥΠΟΘΕΣΕΙΣ ΑΣΚΗΣΗΣ ΔΙΑΦΟΡΩΝ ΕΠΑΓΓΕΛΜΑΤΩΝ", "503": "ΤΕΛΩΝΕΙΑΚΗ ΥΠΗΡΕΣΙΑ ΑΕΡΟΣΚΑΦΩΝ", "504": "ΕΠΙΤΡΟΠΗ ΔΑΣΜΟΛΟΓΙΟΥ", "505": "ΝΑΥΠΗΓΕΙΑ Π. ΝΑΥΤΙΚΟΥ",
"506": "ΒΙΟΜΗΧΑΝΙΚΕΣ ΚΑΙ ΕΠΙΧΕΙΡΗΜΑΤΙΚΕΣ ΠΕΡΙΟΧΕΣ", "507": "ΙΑΤΡΟΔΙΚΑΣΤΕΣ", "508": "ΑΘΛΗΤΙΣΜΟΣ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ", "509": "ΟΡΓΑΝΙΣΜΟΣ ΣΥΚΩΝ", "510": "ΚΑΝΟΝΙΣΜΟΣ ΑΣΘΕΝΕΙΑΣ ΤΑΜΕΙΟΥ ΣΥΝΤΑΞΕΩΝ ΕΦΗΜΕΡΙΔΟΠΩΛΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ ΠΡΑΚΤΟΡΕΙΩΝ (Τ.Σ.Ε.Υ.Π.)", "511": "ΑΔΕΙΕΣ ΜΙΣΘΩΤΩΝ", "512": "ΠΡΟΣΤΑΣΙΑ ΚΕΦΑΛΑΙΩΝ ΕΞΩΤΕΡΙΚΟΥ", "513": "ΑΠΟΔΕΙΚΤΙΚΑ ΦΟΡΟΛΟΓΙΚΗΣ ΕΝΗΜΕΡΟΤΗΤΑΣ", "514": "ΟΡΓΑΝΩΣΗ ΚΑΙ ΛΕΙΤΟΥΡΓΙΑ ΤΩΝ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ ΕΘΝΙΚΗ ΕΠΙΤΡΟΠΗ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ ΚΑΙ ΤΑΧΥΔΡΟΜΕΙΩΝ (Ε.Ε.Τ.Τ.)",
"515": "ΠΡΟΣΩΠΙΚΟ Ο.Τ.Ε", "516": "ΒΑΣΙΛΙΚΑ ΙΔΡΥΜΑΤΑ", "517": "ΑΠΟΚΑΤΑΣΤΑΣΗ ΠΛΗΓΕΝΤΩΝ ΑΠΟ ΕΚΡΗΞΗ ΠΛΟΙΟΥ ΣΤΗΝ ΚΡΗΤΗ", "518": "ΕΚΜΕΤΑΛΛΕΥΣΗ ΔΥΝΑΜΕΩΣ ΡΕΟΝΤΩΝ ΥΔΑΤΩΝ", "519": "ΚΑΚΟΥΡΓΙΟΔΙΚΕΙΑ", "520": "ΚΕΝΤΡΙΚΕΣ ΑΓΟΡΕΣ ΑΛΛΩΝ ΠΟΛΕΩΝ", "521": "ΤΑΜΕΙΟ ΑΛΛΗΛΟΒΟΗΘΕΙΑΣ Π.Ν", "522": "ΕΚΛΟΓΙΚΟΙ ΚΑΤΑΛΟΓΟΙ ΚΑΙ ΒΙΒΛΙΑΡΙΑ", "523": "ΥΠΗΡΕΣΙΑ ΕΓΓΕΙΩΝ ΒΕΛΤΙΩΣΕΩΝ", "524": "ΤΟΥΡΙΣΤΙΚΗ ΑΝΑΠΤΥΞΗ", "525": "ΝΟΜΟΘΕΣΙΑ ΠΕΡΙ ΣΥΜΒΑΣΕΩΣ ΕΡΓΑΣΙΑΣ",
"526": "ΕΛΕΓΧΟΣ ΕΚΡΗΚΤΙΚΩΝ ΥΛΩΝ", "527": "ΜΑΚΕΔΟΝΙΚΟΙ ΣΙΔΗΡΟΔΡΟΜΟΙ", "528": "ΔΙΕΥΚΟΛΥΝΣΕΙΣ ΣΕ ΔΗΜΟΣΙΟΥΣ ΥΠΑΛΛΗΛΟΥΣ", "529": "ΣΤΡΑΤΙΩΤΙΚΕΣ ΥΠΟΧΡΕΩΣΕΙΣ ΕΠΑΝΑΠΑΤΡΙΖΟΜΕΝΩΝ", "530": "ΔΙΑΚΡΙΣΗ ΕΜΠΟΡΙΚΩΝ ΠΡΑΞΕΩΝ", "531": "ΟΡΓΑΝΙΣΜΟΣ ΕΛΛΗΝΙΚΩΝ ΓΕΩΡΓΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ (Ε.Λ.Γ.Α.)", "532": "ΕΞΩΣΧΟΛΙΚΗ ΣΩΜΑΤΙΚΗ ΑΓΩΓΗ", "533": "ΔΡΑΧΜΟΠΟΙΗΣΗ", "534": "ΜΕ ΤΗ ΒΡΑΖΙΛΙΑ", "535": "ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΑΚΑΔΗΜΙΑ", "536": "ΑΝΤΑΛΛΑΓΗ ΘΕΡΑΠΕΥΤΙΚΩΝ ΟΥΣΙΩΝ", "537": "ΓΑΛΛΙΑ, ΓΕΡΜΑΝΙΑ Κ.ΛΠ",
"538": "ΝΟΜΟΠΑΡΑΣΚΕΥΑΣΤΙΚΕΣ ΕΠΙΤΡΟΠΕΣ", "539": "ΚΥΒΕΡΝΕΙΟ ΘΕΣΣΑΛΟΝΙΚΗΣ", "540": "ΣΤΡΑΤΙΩΤΙΚΟΙ ΑΚΟΛΟΥΘΟΙ", "541": "ΔΙΑΘΕΣΗ ΑΠΟΣΤΡΑΓΓΙΖΟΜΕΝΩΝ ΓΑΙΩΝ", "542": "ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΓΙΑ ΡΑΔΙΟΦΩΝΙΑ – ΤΗΛΕΟΡΑΣΗ", "543": "ΓΝΩΜΟΔΟΤΙΚΟ ΣΥΜΒΟΥΛΙΟ ΦΑΡΜΑΚΩΝ", "544": "ΣΥΜΒΑΣΕΙΣ ΔΙΑΦΟΡΕΣ", "545": "ΠΡΑΞΕΙΣ ΚΑΤΑ ΤΗΣ ΑΣΦΑΛΕΙΑΣ ΤΗΣ ΑΕΡΟΠΟΡΙΑΣ", "546": "ΙΑΤΡΟΙ ΙΑΜΑΤΙΚΩΝ ΠΗΓΩΝ", "547": "ΚΕΝΤΡΙΚΟ ΣΥΜΒΟΥΛΙΟ ΥΓΕΙΑΣ (ΚΕ.Σ.Υ.)", "548": "ΑΝΩΤΑΤΟ ΣΥΜΒΟΥΛΙΟ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ",
"549": "ΥΠΟΥΡΓΕΙΟ ΕΝΕΡΓΕΙΑΣ ΚΑΙ ΦΥΣΙΚΩΝ ΠΟΡΩΝ", "550": "ΤΕΧΝΙΚΗ ΕΚΜΕΤΑΛΛΕΥΣΗ ΕΛΑΦΡΩΝ ΑΕΡΟΠΛΑΝΩΝ Δ.Χ", "551": "ΠΟΛΥΕΘΝΕΙΣ ΜΟΡΦΩΤΙΚΕΣ ΣΥΜΦΩΝΙΕΣ", "552": "ΕΚΠΑΙΔΕΥΣΗ Λ.Σ", "553": "ΠΡΟΣΤΑΣΙΑ ΕΛΕΥΘΕΡΟΥ ΑΝΤΑΓΩΝΙΣΜΟΥ", "554": "ΕΘΝΙΚΗ ΕΠΙΤΡΟΠΗ ΔΙΕΘΝΟΥΣ ΕΜΠΟΡΙΚΟΥ ΕΠΙΜΕΛΗΤΗΡΙΟΥ", "555": "ΟΡΓΑΝΙΣΜΟΣ", "556": "ΤΕΛΩΝΕΙΑΚΕΣ ΠΑΡΑΚΑΤΑΘΗΚΕΣ", "557": "ΕΛΕΓΧΟΣ ΟΡΓΑΝΙΣΜΩΝ ΚΟΙΝΩΝΙΚΗΣ ΠΟΛΙΤΙΚΗΣ", "558": "ΕΝΩΣΕΙΣ ΑΠΟΣΤΡΑΤΩΝ ΑΞΙΩΜΑΤΙΚΩΝ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ", "559": "ΦΥΛΛΑ ΠΟΙΟΤΗΤΑΣ ΑΞΙΩΜΑΤΙΚΩΝ Π.Ν",
"560": "ΙΝΣΤΙΤΟΥΤΟ ΓΕΩΛΟΓΙΚΩΝ ΚΑΙ ΜΕΤΑΛΛΕΥΤΙΚΩΝ ΕΡΕΥΝΩΝ", "561": "ΛΑΟΓΡΑΦΙΚΟ ΚΑΙ ΕΘΝΟΛΟΓΙΚΟ ΜΟΥΣΕΙΟ ΜΑΚΕΔΟΝΙΑΣ - ΘΡΑΚΗΣ", "562": "ΠΡΩΤΕΣ ΥΛΕΣ ΤΑΠΗΤΟΥΡΓΙΑΣ", "563": "ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΡΗΤΗΣ", "564": "ΚΩΔΙΚΑΣ ΟΔΙΚΗΣ ΚΥΚΛΟΦΟΡΙΑΣ", "565": "ΦΑΡΜΑΚΕΥΤΙΚΗ ΠΕΡΙΘΑΛΨΗ", "566": "ΜΕΛΕΤΕΣ ΠΡΟΓΡΑΜΜΑΤΟΣ ΔΗΜΟΣΙΩΝ ΕΠΕΝΔΥΣΕΩΝ", "567": "ΕΠΙΔΟΣΗ ΔΙΑ ΤΟΥ ΤΑΧΥΔΡΟΜΕΙΟΥ", "568": "ΠΑΝΕΠΙΣΤΗΜΙΟ ΘΡΑΚΗΣ", "569": "ΗΘΙΚΕΣ ΑΜΟΙΒΕΣ", "570": "ΔΗΜΟΣΙΑ ΚΤΗΜΑΤΑ ΣΤΗ ΔΩΔΕΚΑΝΗΣΟ",
"571": "ΣΥΜΒΑΣΕΙΣ ΔΙΚΑΣΤΙΚΗΣ ΑΝΤΙΛΗΨΕΩΣ", "572": "ΠΕΡΙΟΡΙΣΜΟΙ ΑΛΙΕΙΑΣ", "573": "ΠΥΡΗΝΙΚΕΣ ΕΓΚΑΤΑΣΤΑΣΕΙΣ", "574": "ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΑΥΤΟΚΙΝΗΤΩΝ", "575": "ΕΓΓΡΑΦΕΣ, ΕΞΕΤΑΣΕΙΣ, ΑΝΑΛΥΤΙΚΑ ΠΡΟΓΡΑΜΜΑΤΑ", "576": "ΔΙΚΑΙΩΜΑΤΑ ΤΕΛΩΝΕΙΑΚΩΝ ΕΡΓΑΣΙΩΝ", "577": "ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΑΥΤΟΚΙΝΗΤΙΣΤΩΝ (Τ.Σ.Α.)", "578": "ΤΗΛΕΦΩΝΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ", "579": "ΦΟΡΟΛΟΓΙΑ ΑΣΦΑΛΙΣΤΡΩΝ", "580": "ΔΙΕΘΝΗΣ ΥΔΡΟΓΡΑΦΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ", "581": "ΕΠΑΡΧΙΕΣ", "582": "ΑΓΡΟΤ. 
\u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u03a0\u03a1\u039f\u03a3\u03a6\u03a5\u0393\u03a9\u039d", "583": "\u0393\u0395\u039d\u0399\u039a\u0391 \u0393\u0399\u0391 \u03a4\u0391 \u0398\u0395\u0391\u03a4\u03a1\u0391", "584": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u0394\u0399\u03a9\u039e\u0395\u03a9\u03a3 \u039b\u0391\u0398\u03a1\u0395\u039c\u03a0\u039f\u03a1\u0399\u039f\u03a5", "585": "\u039c\u0397\u03a7\u0391\u039d\u0395\u03a3 \u03a0\u03a1\u039f\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0397\u03a3 \u03a4\u0395\u039b\u03a9\u039d", "586": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u039a\u03a1\u0391\u03a4\u0399\u039a\u03a9\u039d \u0398\u0395\u0391\u03a4\u03a1\u03a9\u039d", "587": "\u039a\u0395\u039d\u03a4\u03a1\u039f \u0397\u039b\u0395\u039a\u03a4\u03a1\u039f\u039d\u0399\u039a\u039f\u03a5 \u03a5\u03a0\u039f\u039b\u039f\u0393\u0399\u03a3\u03a4\u039f\u03a5 \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u03a9\u039d \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d", "588": "\u03a6\u039f\u03a1\u039f\u03a3 \u03a0\u03a1\u039f\u03a3\u03a4\u0399\u0398\u0395\u039c\u0395\u039d\u0397\u03a3 \u0391\u039e\u0399\u0391\u03a3", "589": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u0391\u03a1\u03a9\u0393\u0397\u03a3 \u03a4\u03a4\u03a4. 
\u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "590": "\u03a3\u03a9\u039c\u0391 \u039f\u03a1\u039a\u03a9\u03a4\u03a9\u039d \u0395\u039b\u0395\u0393\u039a\u03a4\u03a9\u039d \u039b\u039f\u0393\u0399\u03a3\u03a4\u03a9\u039d (\u03a3.\u039f.\u0395.\u039b.), \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397 \u039b\u039f\u0393\u0399\u03a3\u03a4\u0399\u039a\u0397\u03a3 \u03a4\u03a5\u03a0\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397\u03a3 \u039a\u0391\u0399 \u0395\u039b\u0395\u0393\u03a7\u03a9\u039d (\u0395.\u039b.\u03a4.\u0395.)", "591": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0391 \u039d\u0397\u03a0\u0399\u039f\u03a4\u03a1\u039f\u03a6\u0395\u0399\u0391", "592": "\u03a3\u03a7\u0395\u0394\u0399\u039f \u03a0\u039f\u039b\u0395\u03a9\u03a3 \u0391\u0398\u0397\u039d\u03a9\u039d \u03a0\u0395\u0399\u03a1\u0391\u0399\u03a9\u03a3", "593": "\u039c\u0399\u03a3\u0398\u03a9\u03a3\u0395\u0399\u03a3 \u0391\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d \u039f.\u0394.\u0395.\u03a0", "594": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u03a3\u03a0\u039f\u03a1\u039f\u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3", "595": "\u0391\u039c\u03a5\u039d\u03a4\u0399\u039a\u0395\u03a3 \u03a0\u0395\u03a1\u0399\u039f\u03a7\u0395\u03a3 \u039a\u0391\u0399 \u039d. 
\u039f\u03a7\u03a5\u03a1\u0391", "596": "\u039f\u0394\u039f\u0399\u03a0\u039f\u03a1\u0399\u039a\u0391", "597": "\u03a0\u039f\u03a1\u039f\u0399 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u03a9\u039d \u03a4\u039f\u03a5\u03a1\u0399\u03a3\u039c\u039f\u03a5", "598": "\u0394\u0399\u0395\u0398\u039d\u0395\u03a3 \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u039f", "599": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u039c\u0395\u03a1\u0399\u039c\u039d\u0391 \u0395\u039d\u039f\u03a0\u039b\u03a9\u039d \u0394\u03a5\u039d\u0391\u039c\u0395\u03a9\u039d", "600": "\u0393\u0395\u039d\u0399\u039a\u039f \u039d\u039f\u03a3\u039f\u039a\u039f\u039c\u0395\u0399\u039f \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "601": "\u039d\u039f\u039c\u0399\u039a\u0397 \u0392\u039f\u0397\u0398\u0395\u0399\u0391 \u03a3\u0395 \u03a0\u039f\u039b\u0399\u03a4\u0395\u03a3 \u03a7\u0391\u039c\u0397\u039b\u039f\u03a5 \u0395\u0399\u03a3\u039f\u0394\u0397\u039c\u0391\u03a4\u039f\u03a3", "602": "\u03a3\u03a5\u039c\u0392\u039f\u039b\u0391\u0399\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039b\u039b\u039f\u0393\u039f\u0399", "603": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d", "604": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u0395.\u039c.\u03a0", "605": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "606": "\u0391\u0393\u039f\u039d\u0395\u03a3 \u0393\u03a1\u0391\u039c\u039c\u0395\u03a3", "607": "\u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u039f \u03a0\u0395\u03a4\u03a1\u0395\u039b\u0391\u0399\u039f\u03a5", "608": "\u03a0\u03a1\u039f\u039b\u0397\u03a8\u0397 \u03a1\u03a5\u03a0\u0391\u039d\u03a3\u0397\u03a3 \u03a4\u0397\u03a3 \u0398\u0391\u039b\u0391\u03a3\u03a3\u0391\u03a3", "609": "\u03a7\u03a9\u03a1\u0399\u039a\u0397 
\u0394\u0399\u039a\u0391\u0399\u039f\u0394\u039f\u03a3\u0399\u0391 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u03a9\u039d \u0391\u03a1\u03a7\u03a9\u039d", "610": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0391 \u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u0391", "611": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "612": "\u0391\u039e\u0399\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0397\u03a3 \u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391\u03a3", "613": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u039f\u0399 \u0391\u039d\u03a4\u0399\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u039f\u0399", "614": "\u0395\u039d\u03a9\u03a3\u0395\u0399\u03a3 \u0395\u03a6\u0395\u0394\u03a1\u03a9\u039d \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "615": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u03a5\u03a0\u0395\u03a1 \u03a4\u0397\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391\u03a3", "616": "\u039b\u039f\u0393\u0399\u03a3\u03a4\u0399\u039a\u039f \u0395\u0399\u0394\u0399\u039a\u03a9\u039d \u03a4\u0391\u039c\u0395\u0399\u03a9\u039d \u039d.\u03a0.\u0394.\u0394", "617": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u0393\u0399\u0391 \u0394\u0395\u0399\u0393\u039c\u0391\u03a4\u0391 \u039a\u039b\u03a0", "618": "\u0395\u03a1\u0393\u039f\u039b\u0397\u03a0\u03a4\u0395\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "619": "\u0395\u03a0\u0391\u039d\u0395\u03a0\u039f\u0399\u039a\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u0391\u03a1\u0391\u039c\u0395\u0398\u039f\u03a1\u0399\u03a9\u039d \u03a0\u0395\u03a1\u0399\u039f\u03a7\u03a9\u039d", "620": "\u03a6\u0391\u03a1\u0399\u039a\u0391 \u03a4\u0395\u039b\u0397", "621": "\u039b\u0391\u03a4\u039f\u039c\u0395\u0399\u0391 
\u039c\u0391\u03a1\u039c\u0391\u03a1\u03a9\u039d", "622": "\u03a0\u039f\u03a3\u039f\u03a3\u03a4\u039f \u03a3\u03a5\u039c\u039c\u0395\u03a4\u039f\u03a7\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u039c\u0395\u039d\u03a9\u039d", "623": "\u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 \u0391\u039d\u0398\u03a1\u03a9\u03a0\u0399\u039d\u0397\u03a3 \u0396\u03a9\u0397\u03a3 \u03a3\u03a4\u0397 \u0398\u0391\u039b\u0391\u03a3\u03a3\u0391", "624": "\u039f\u03a1\u0393\u0391\u039d\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399 \u03a0\u0395\u03a1\u0399 \u03a6\u03a5\u039b\u0391\u039a\u03a9\u039d", "625": "\u039b\u0391\u0398\u03a1\u0395\u039c\u03a0\u039f\u03a1\u0399\u0391", "626": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0393\u0395\u039d\u0399\u039a\u0391", "627": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0397 \u03a7\u039b\u03a9\u03a1\u0399\u039a\u039f\u03a5 \u039a\u0391\u039b\u0399\u039f\u03a5", "628": "\u0399\u039d\u03a3\u03a4\u0399\u03a4\u039f\u03a5\u03a4\u039f \u0393\u0395\u03a9\u03a0\u039f\u039d\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d", "629": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391 \u03a0\u0391\u03a3\u03a7\u0391 - \u03a7\u03a1\u0399\u03a3\u03a4\u039f\u03a5\u0393\u0395\u039d\u039d\u03a9\u039d", "630": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399 \u0391\u039b\u039b\u0397\u039b\u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "631": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u03a9\u039d \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d", "632": "\u0395\u03a0\u0399\u0394\u039f\u03a3\u0397", "633": "\u0399\u0394\u03a1\u03a5\u039c\u0391 \u039a\u03a1\u0391\u03a4\u0399\u039a\u03a9\u039d 
\u03a5\u03a0\u039f\u03a4\u03a1\u039f\u03a6\u0399\u03a9\u039d", "634": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u03a3 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u0395\u03a1\u039f\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0395\u0399\u03a9\u039d", "635": "\u039f\u03a6\u0395\u0399\u039b\u0395\u03a3 \u03a0\u03a1\u039f\u03a3 \u03a4\u039f \u0394\u0397\u039c\u039f\u03a3\u0399\u039f", "636": "\u03a0\u03a1\u0391\u039a\u03a4\u039f\u03a1\u0395\u0399\u0391 \u0395\u0399\u0394\u0397\u03a3\u0395\u03a9\u039d", "637": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039a\u0391\u0399 \u0395\u03a0\u039f\u03a0\u03a4\u0395\u0399\u0391 \u039e\u0395\u039d\u039f\u0394\u039f\u03a7\u0395\u0399\u03a9\u039d \u039a\u039b\u03a0", "638": "\u039a\u039f\u0399\u039d\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0395\u03a9\u03a3 \u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u03a9\u039d (\u039a.\u03a4.\u0395.\u039b.)", "639": "\u039a\u0391\u03a4\u03a9\u03a4\u0391\u03a4\u0391 \u039f\u03a1\u0399\u0391 \u039c\u0399\u03a3\u0398\u03a9\u039d \u039a\u0391\u0399 \u0397\u039c\u0395\u03a1\u039f\u039c\u0399\u03a3\u0398\u0399\u03a9\u039d", "640": "\u03a3\u03a5\u039d\u03a4\u0397\u03a1\u0397\u03a4\u0399\u039a\u0397 \u039a\u0391\u03a4\u0391\u03a3\u03a7\u0395\u03a3\u0397 \u03a0\u039b\u039f\u0399\u03a9\u039d", "641": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391\u03a3 \u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u03a9\u039d \u03a3\u03a4\u0397\u039d \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u0397", "642": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u03a5\u03a1\u0397\u039d\u0399\u039a\u03a9\u039d \u0395\u03a1\u0395\u03a5\u039d\u03a9\u039d", "643": "\u0392\u0399\u0392\u039b\u0399\u0391 \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u03a9\u039d 
\u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u03a9\u039d", "644": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0395\u03a3 \u039a\u0391\u0399 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3", "645": "\u039c\u0395\u03a4\u0391\u03a4\u03a1\u039f\u03a0\u0397 \u039c\u0395\u03a4\u039f\u03a7\u03a9\u039d \u03a3\u0395 \u039f\u039d\u039f\u039c\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3", "646": "\u0395\u0399\u0394\u0399\u039a\u039f\u0399 \u03a6\u03a1\u039f\u03a5\u03a1\u039f\u0399", "647": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0395\u0398\u039d\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "648": "\u03a1\u03a5\u0398\u039c\u0399\u03a3\u03a4\u0399\u039a\u039f\u03a3 \u03a6\u039f\u03a1\u039f\u03a3", "649": "\u039b\u0399\u039c\u0391\u039d\u0399 \u0397\u03a1\u0391\u039a\u039b\u0395\u0399\u039f\u03a5 \u039a\u03a1\u0397\u03a4\u0397\u03a3 \u039a\u0391\u0399", "650": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u03a5\u03a0\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0395\u03a3", "651": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039f\u0399\u039d\u039f\u03a5", "652": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u0391\u0395\u03a1\u039f\u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u03a3", "653": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a1\u03a9\u0393\u0397\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "654": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0397 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u0391\u0393\u03a1\u039f\u03a4\u03a9\u039d", "655": "\u039a\u03a5\u03a1\u039f\u03a3 \u03a3\u03a5\u039c\u0392\u039f\u039b\u0391\u0399\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u03a9\u039d \u03a0\u03a1\u0391\u039e\u0395\u03a9\u039d", "656": 
"\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u03a5\u03a0\u0395\u03a1\u0391\u039e\u0399\u0391\u03a3 \u0391\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "657": "\u039d\u0397\u03a0\u0399\u0391\u0393\u03a9\u0393\u0395\u0399\u0391", "658": "\u0395\u039a\u0398\u0395\u039c\u0391\u03a4\u0391 \u039a\u0391\u0399 \u0394\u0395\u0399\u0393\u039c\u0391\u03a4\u0391", "659": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f \u03a3\u03a9\u039c\u0391 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "660": "\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0397 \u039c\u0399\u03a3\u0398\u03a9\u039d \u039a\u0391\u0399 \u0397\u039c\u0395\u03a1\u039f\u039c\u0399\u03a3\u0398\u0399\u03a9\u039d", "661": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391\u03a3 \u039a\u0391\u03a0\u039d\u039f\u03a5", "662": "\u039f\u03a1\u0399\u0391", "663": "\u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a3\u0395\u0399\u03a3\u039c\u039f\u03a0\u0391\u0398\u03a9\u039d, \u03a0\u03a5\u03a1\u039f\u03a0\u0391\u0398\u03a9\u039d, \u03a0\u03a1\u039f\u03a3\u03a6\u03a5\u0393\u03a9\u039d \u039a\u039b\u03a0", "664": "\u03a7\u03a1\u0395\u0397 \u039a\u039b\u0397\u03a1\u039f\u039d\u039f\u039c\u0399\u03a9\u039d", "665": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u039d \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u03a9\u039d \u03a0\u0391\u0399\u0394\u0399\u039a\u0397\u03a3 \u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391\u03a3", "666": "\u039c\u0399\u03a3\u0398\u03a9\u03a3\u0395\u0399\u03a3 \u039a\u0391\u0399 \u0391\u0393\u039f\u03a1\u0395\u03a3", "667": "\u03a0\u0391\u039b\u0391\u0399\u039f\u03a4\u0395\u03a1\u0391\u0399 \u0395\u039a\u039a\u0391\u0398\u0391\u03a1\u0399\u03a3\u0395\u0399\u03a3", "668": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u0391\u0393\u03a1\u039f\u03a4\u03a9\u039d", "669": 
"\u0391\u03a0\u0391\u039b\u039b\u039f\u03a4\u03a1\u0399\u03a9\u03a3\u0395\u0399\u03a3 \u0393\u0399\u0391 \u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u0391 \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "670": "\u039c\u0397\u03a4\u03a1\u03a9\u039f \u0391\u0393\u03a1\u039f\u03a4\u03a9\u039d", "671": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0394\u0399\u0395\u03a5\u039a\u039f\u039b\u03a5\u039d\u03a3\u0395\u03a9\u039d", "672": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u039f \u0395\u03a1\u0393\u039f\u03a3\u03a4\u0391\u03a3\u0399\u039f \u0391\u0395\u03a1\u039f\u03a0\u039b\u0391\u039d\u03a9\u039d", "673": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0391 \u0395\u039d\u0394\u0395\u0399\u039a\u03a4\u0399\u039a\u0391", "674": "\u0391\u03a5\u0398\u0391\u0399\u03a1\u0395\u03a4\u0395\u03a3 \u039a\u0391\u03a4\u0391\u03a3\u039a\u0395\u03a5\u0395\u03a3", "675": "\u0395\u0393\u039a\u0391\u03a4\u0391\u039b\u0395\u039b\u0395\u0399\u039c\u039c\u0395\u039d\u0395\u03a3 \u0395\u039a\u03a4\u0391\u03a3\u0395\u0399\u03a3", "676": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0384\u0395\u03a1\u0393\u03a9\u039d", "677": "\u03a0\u03a1\u039f\u039d\u039f\u0399\u0391 \u0392. 
\u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "678": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f \u0395\u039d\u03a3\u0397\u039c\u039f - \u0391\u0393\u03a9\u0393\u039f\u03a3\u0397\u039c\u039f", "679": "\u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0397 \u0391\u039d\u03a4\u0391\u03a0\u039f\u039a\u03a1\u0399\u03a3\u0397", "680": "\u0395\u03a3\u03a9\u03a4\u0395\u03a1\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "681": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u03a4\u03a3\u0399\u0393\u0391\u03a1\u039f\u03a7\u0391\u03a1\u03a4\u039f\u03a5", "682": "\u039f\u03a1\u0393\u0391\u039d\u0399\u039a\u0395\u03a3 \u0398\u0395\u03a3\u0395\u0399\u03a3 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "683": "\u039c\u0391\u0399\u0395\u03a5\u03a4\u0399\u039a\u0397 \u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397", "684": "\u0391\u0394\u0395\u0399\u0395\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d", "685": "\u039f\u03a1\u0393\u0391\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391\u03a3", "686": "\u03a0\u039f\u0399\u039d\u0399\u039a\u039f\u03a3 \u039a\u0391\u0399 \u03a0\u0395\u0399\u0398\u0391\u03a1\u03a7\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "687": "\u0391\u039d\u03a5\u03a0\u039f\u03a4\u0391\u039a\u03a4\u039f\u0399", "688": "\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u03a9\u039d \u0398\u0395\u03a3\u03a3\u0391\u039b\u039f\u039d\u0399\u039a\u0397\u03a3", "689": "\u03a0\u0395\u03a1\u0399\u03a6\u0395\u03a1\u0395\u0399\u0395\u03a3 \u039b\u0399\u039c\u0395\u039d\u0399\u039a\u03a9\u039d \u0391\u03a1\u03a7\u03a9\u039d", "690": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u039a\u0391\u0399 
\u0395\u0399\u03a3\u03a0\u03a1\u0391\u039e\u0397 \u03a0\u039f\u03a1\u03a9\u039d \u03a4.\u0395.\u0392.\u0395", "691": "\u03a3\u0399\u0394\u0397\u03a1\u039f\u03a3", "692": "\u0393\u0395\u039d\u0399\u039a\u0397 \u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u0395\u0399\u0391 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f\u03a5", "693": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u0399\u03a3\u03a1\u0391\u0397\u039b\u0399\u03a4\u0399\u039a\u03a9\u039d \u03a0\u0395\u03a1\u039f\u03a5\u03a3\u0399\u03a9\u039d", "694": "\u039b\u0399\u03a0\u039f\u03a4\u0391\u039e\u0399\u0391", "695": "\u0392\u0391\u03a1\u0395\u0391 \u039a\u0391\u0399 \u0391\u039d\u0398\u03a5\u0393\u0399\u0395\u0399\u039d\u0391 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0391", "696": "\u0395\u0399\u0394\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f \u039c\u0397\u03a7\u0391\u039d\u0397\u039c\u0391\u03a4\u03a9\u039d", "697": "\u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u0391 \u03a0\u0395\u03a1\u0399\u039f\u03a7\u0397\u03a3 \u03a0\u03a1\u03a9\u03a4\u0395\u03a5\u039f\u03a5\u03a3\u0391\u03a3", "698": "\u0391\u039d\u0391\u039c\u039f\u03a1\u03a6\u03a9\u03a4\u0399\u039a\u0391 \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u0391", "699": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f \u03a3\u03a9\u039c\u0391", "700": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "701": "\u0394\u0399\u03a9\u03a1\u03a5\u0393\u0391 \u039a\u039f\u03a1\u0399\u039d\u0398\u039f\u03a5", "702": "\u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 \u03a6\u03a5\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u039c\u0395\u039d\u03a9\u039d", "703": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u039f\u03a3 \u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397\u03a3 
- \u0391\u039d\u03a4\u0399\u0393\u03a1\u0391\u03a6\u0395\u0399\u039f\u039a\u03a1\u0391\u03a4\u0399\u039a\u0391 \u039c\u0395\u03a4\u03a1\u0391 -\u0395\u039a\u039a\u0391\u0398\u0391\u03a1\u0399\u03a3\u0397 \u0391\u03a1\u03a7\u0395\u0399\u03a9\u039d", "704": "\u0392\u0399\u0392\u039b\u0399\u0391 \u03a5\u03a0\u039f\u0398\u0395\u03a3\u0395\u03a9\u039d \u0395\u039a\u039f\u03a5\u03a3\u0399\u0391\u03a3 \u0394\u0399\u039a\u0391\u0399\u039f\u0394\u039f\u03a3\u0399\u0391\u03a3", "705": "\u0396\u0391\u03a7\u0391\u03a1\u0397", "706": "\u0392\u039f\u03a1\u0395\u0399\u039f\u0391\u03a4\u039b\u0391\u039d\u03a4\u0399\u039a\u0397 \u0391\u039c\u03a5\u039d\u03a4\u0399\u039a\u0397 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 (\u039d.\u0391.\u03a4.\u039f)", "707": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0391\u03a3 \u0393\u0395\u039d\u0399\u039a\u03a9\u039d \u0391\u03a0\u039f\u0398\u0397\u039a\u03a9\u039d", "708": "\u039d\u039f\u039c\u0399\u039a\u0397 \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u03a0\u03a1\u039f\u03a3\u03a6\u03a5\u0393\u03a9\u039d", "709": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u039f \u039b\u0395\u0399\u03a9\u039d", "710": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "711": "\u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0395\u03a3\u2013\u039c\u0399\u03a3\u0398\u03a9\u03a3\u0395\u0399\u03a3\u2013\u0395\u03a1\u0393\u0391 \u039f.\u0393.\u0391", "712": "\u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u039f.\u0393.\u0391", "713": "\u03a7\u039f\u03a1\u0397\u0393\u0397\u03a3\u0397 \u0394\u0391\u039d\u0395\u0399\u03a9\u039d \u0391\u03a0\u039f \u03a4.\u03a0. 
\u039a\u0391\u0399 \u0394\u0391\u039d\u0395\u0399\u03a9\u039d", "714": "\u03a4\u0395\u039b\u039f\u03a3 \u0395\u03a0\u0399\u03a4\u0397\u0394\u0395\u03a5\u039c\u0391\u03a4\u039f\u03a3", "715": "\u0395\u039b\u0395\u03a5\u0398\u0395\u03a1\u0391 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0391 \u03a3\u03a5\u0393\u039a\u03a1\u039f\u03a4\u0397\u039c\u0391\u03a4\u0391", "716": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0391 \u039a\u0399\u039d\u0397\u03a4\u03a1\u0391 \u03a3\u03a5\u0393\u03a7\u03a9\u039d\u0395\u03a5\u03a3\u0395\u03a9\u03a3 \u0397 \u039c\u0395\u03a4\u0391\u03a4\u03a1\u039f\u03a0\u0397\u03a3 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d", "717": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a4\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 T.E.B.E", "718": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u039f", "719": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a5.\u0395.\u039d", "720": "\u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u039f\u0399 \u039c\u0395\u03a3\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3", "721": "\u039a\u039f\u0399\u039d\u039f\u03a0\u03a1\u0391\u039e\u0399\u0391 \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u03a9\u039d \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u03a9\u039d", "722": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u039c\u0391\u03a4\u0399\u03a9\u039d \u039a\u0399\u039d\u0397\u039c\u0391\u03a4\u039f\u0393\u03a1\u0391\u03a6\u039f\u03a5", "723": "\u0392\u039f\u03a3\u039a\u039f\u03a4\u039f\u03a0\u039f\u0399", "724": "\u0395\u03a0\u0399\u03a4\u039f\u039a\u0399\u0391 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u03a9\u039d", "725": "\u039a\u0391\u03a0\u039d\u0399\u039a\u039f\u0399 
\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399", "726": "\u03a3\u03a4\u0391\u0398\u039c\u039f\u0399 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "727": "\u0395\u03a5\u039b\u039f\u0393\u0399\u0391", "728": "\u03a0\u0395\u03a1\u0399\u03a6\u0395\u03a1\u0395\u0399\u0391\u039a\u0395\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0395\u03a3 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391\u03a3", "729": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0397\u03a3 \u0391\u039c\u03a5\u039d\u0391\u03a3", "730": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u039a\u0395\u039d\u03a4\u03a1\u0399\u039a\u0397\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u03a3", "731": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u0397\u0398\u039f\u03a0\u039f\u0399\u03a9\u039d", "732": "\u03a4\u0395\u039b\u03a9\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u0399\u0394\u03a9\u039d \u0391\u03a4\u039f\u039c\u0399\u039a\u0397\u03a3 \u03a7\u03a1\u0397\u03a3\u0395\u03a9\u03a3", "733": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u03a0\u03a1\u039f\u03a3\u039f\u0394\u039f\u03a5 \u0391\u03a0\u039f \u03a0\u039b\u039f\u0399\u0391", "734": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u0397 \u0394\u0399\u0391\u0399\u03a1\u0395\u03a3\u0397\u03a3", "735": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u039f\u0394\u03a1\u039f\u039c\u0399\u03a9\u039d \u0395\u039b\u039b\u0391\u0394\u039f\u03a3 (\u039f.\u0391.\u0395.)", "736": "\u0395\u0398\u039d\u0399\u039a\u039f \u039a\u0395\u039d\u03a4\u03a1\u039f \u0391\u039c\u0395\u03a3\u0397\u03a3 \u0392\u039f\u0397\u0398\u0395\u0399\u0391\u03a3 (\u0395.\u039a.\u0391.\u0392.)", "737": "\u0393\u039d\u03a9\u039c\u039f\u0394\u039f\u03a4\u0399\u039a\u039f 
ΣΥΜΒΟΥΛΙΟ ΟΙΚΟΝΟΜΙΚΗΣ ΑΝΑΠΤΥΞΗΣ", "738": "ΔΙΑΘΗΚΗ", "739": "ΑΓΩΓΕΣ ΔΙΑΤΡΟΦΗΣ", "740": "ΦΑΡΜΑΚΕΥΤΙΚΟΙ ΣΥΛΛΟΓΟΙ", "741": "ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΚΑΙ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΓΕΩΡΓΙΚΩΝ ΣΥΝΕΤΑΙΡΙΣΤΙΚΩΝ ΟΡΓΑΝΩΣΕΩΝ (Τ.Σ.Ε.Α.Π.Γ.Σ.Ο)", "742": "ΕΠΙΔΟΜΑΤΑ ΔΙΑΦΟΡΑ", "743": "ΠΕΙΘΑΡΧΙΚΟ ΔΙΚΑΙΟ", "744": "ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΧΗΜΙΚΩΝ (Τ.Ε.Α.Χ)", "745": "ΠΡΟΑΓΩΓΕΣ ΚΑΙ ΠΡΟΣΟΝΤΑ ΠΥΡΟΣΒΕΣΤΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ", "746": "ΟΔΟΙΠΟΡΙΚΑ ΕΞΟΔΑ ΠΡΟΣΩΠΙΚΟΥ ΣΩΜΑΤΩΝ ΑΣΦΑΛΕΙΑΣ", "747": "ΝΟΣΗΛΕΥΤΙΚΑ ΙΔΡΥΜΑΤΑ ΚΑΤ’ ΙΔΙΑΝ", "748": "ΠΡΟΣΤΑΣΙΑ ΚΑΤΑ ΤΗΣ ΦΥΛΛΟΞΗΡΑΣ", "749": "ΟΡΓΑΝΙΣΜΟΣ ΤΑΜΕΙΟΥ ΝΟΜΙΚΩΝ", "750": "ΠΡΑΤΗΡΙΑ ΥΓΡΩΝ ΚΑΥΣΙΜΩΝ", "751": "ΘΡΗΣΚΕΥΤΙΚΟ ΣΩΜΑ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ", "752": "ΔΙΑΔΙΚΑΣΙΑ ΑΝΑΓΚΑΣΤΙΚΩΝ ΑΠΑΛΛΟΤΡΙΩΣΕΩΝ ΑΚΙΝΗΤΩΝ", "753": "ΔΙΕΡΜΗΝΕΙΣ", "754": "ΣΧΕΔΙΑ ΑΛΛΩΝ ΠΟΛΕΩΝ", "755": "ΤΑΜΕΙΟ ΑΛΛΗΛΟΒΟΗΘΕΙΑΣ ΣΤΡΑΤΙΩΤΙΚΩΝ ΑΕΡΟΠΟΡΙΑΣ", "756": "ΗΜΕΡΟΛΟΓΙΟ ΜΗΧΑΝΗΣ", "757": "ΚΕΝΤΡΟ ΕΛΛΗΝΙΚΗΣ ΓΛΩΣΣΑΣ", "758": "ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΣΕ ΑΡΤΟΠΟΙΕΙΑ", "759": "ΓΕΝΙΚΗ ΓΡΑΜΜΑΤΕΙΑ", "760": "ΜΕΤΑΦΡΑΣΤΙΚΑ ΓΡΑΦΕΙΑ", "761": "ΠΡΟΔΙΑΓΡΑΦΕΣ ΜΕΛΕΤΩΝ", "762": "ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΕΘΝΙΚΗΣ", "763": "ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΣΥΜΒΟΛΑΙΟΓΡΑΦΩΝ", "764": "ΙΑΤΡΟΔΙΚΑΣΤΙΚΗ ΑΜΟΙΒΗ", "765": "ΕΦΟΡΙΕΣ ΚΑΠΝΟΥ – ΚΑΠΝΕΡΓΟΣΤΑΣΙΑ", "766": "ΠΟΙΜΝΙΟΣΤΑΣΙΑ", "767": "ΚΕΝΤΡΑ ΕΡΕΥΝΑΣ - ΕΡΕΥΝΗΤΙΚΑ ΙΝΣΤΙΤΟΥΤΑ", "768": "ΤΑΜΕΙΑ ΠΡΟΝΟΙΑΣ ΔΙΚΗΓΟΡΩΝ", "769": "ΟΙΝΟΠΑΡΑΓΩΓΗ ΣΑΜΟΥ", "770": "ΙΜΑΤΙΣΜΟΣ Π. ΝΑΥΤΙΚΟΥ", "771": "ΜΗΧΑΝΙΚΟΙ,ΑΡΧΙΤΕΚΤΟΝΕΣ,ΤΟΠΟΓΡΑΦΟΙ", "772": "ΠΑΝΤΕΙΟ ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΟΙΝΩΝΙΚΩΝ ΚΑΙ ΠΟΛΙΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ", "773": "ΝΕΟΙ ΧΡΗΜΑΤΟΠΙΣΤΩΤΙΚΟΙ ΘΕΣΜΟΙ", "774": "ΥΠΗΡΕΣΙΑ ΠΟΛΙΤΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ", "775": "ΟΡΓΑΝΙΣΜΟΣ ΥΠΟΘΗΚΟΦΥΛΑΚΕΙΩΝ", "776": "ΑΤΥΧΗΜΑΤΑ ΣΕ ΔΗΜΟΣΙΑ ΕΡΓΑ", "777": "ΑΡΕΙΟΣ ΠΑΓΟΣ", "778": "ΥΠΑΓΩΓΗ ΣΕ ΑΣΦΑΛΙΣΗ ΚΑΙ", "779": "ΔΙΕΘΝΕΙΣ ΣΙΔΗΡΟΔΡΟΜΙΚΕΣ ΜΕΤΑΦΟΡΕΣΔΙΕΥΡΩΠΑΙΚΟ ΣΙΔΗΡΟΔΡΟΜΙΚΟ ΣΥΣΤΗΜΑ", "780": "ΟΙΚΟΝΟΜΙΚΗ ΕΠΙΘΕΩΡΗΣΗ Π. ΝΑΥΤΙΚΟΥ", "781": "ΑΝΑΠΤΥΞΙΑΚΗ ΚΑΙ ΒΙΟΜΗΧΑΝΙΚΗ ΠΟΛΙΤΙΚΗ", "782": "ΒΕΒΑΙΩΣΗ ΚΑΙ ΕΙΣΠΡΑΞΗ ΠΟΙΝΙΚΩΝ ΕΞΟΔΩΝ", "783": "ΝΑΥΤΙΚΟ ΧΗΜΕΙΟ", "784": "ΛΑΧΕΙΑ", "785": "ΤΡΟΧΙΟΔΡΟΜΟΙ ΑΘΗΝΩΝ – ΠΕΙΡΑΙΩΣ", "786": "ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΩΝ ΛΙΠΑΣΜΑΤΩΝ ΤΑ.Π.Π.Ε.Λ", "787": "ΔΙΕΥΚΟΛΥΝΣΕΙΣ ΓΙΑ ΑΝΟΙΚΟΔΟΜΗΣΗ", "788": "ΑΓΟΡΑΠΩΛΗΣΙΑ ΚΑΠΝΟΥ", "789": "ΠΕΡΙ ΟΡΩΝ ΕΡΓΑΣΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΔΙΕΘΝΩΝ ΜΕΤΑΦΟΡΩΝ", "790": "ΑΛΙΕΥΤΙΚΟΣ ΚΩΔΙΚΑΣ", "791": "ΣΥΜΒΟΥΛΙΑ ΚΑΙ ΕΠΙΤΡΟΠΕΣ", "792": "ΠΕΡΙΦΕΡΕΙΑΚΕΣ ΥΠΗΡΕΣΙΕΣ ΥΠΟΥΡΓΕΙΟΥ ΟΙΚΟΝΟΜΙΚΩΝ", "793": "ΣΥΜΒΑΣΕΙΣ ΠΕΡΙ ΑΣΕΜΝΩΝ ΔΗΜΟΣΙΕΥΜΑΤΩΝ", "794": "ΓΕΩΡΓΙΚΟΙ ΣΤΑΘΜΟΙ", "795": "ΝΑΞΙΩΤΙΚΗ ΣΜΥΡΙΔΑ", "796": "ΑΝΑΣΤΟΛΗ ΠΡΟΣΕΛΕΥΣΕΩΣ ΕΦΕΔΡΩΝ", "797": "ΕΚΠΑΙΔΕΥΣΗ ΧΩΡΟΦΥΛΑΚΗΣ", "798": "ΑΣΦΑΛΙΣΗ ΕΞΑΓΩΓΙΚΩΝ ΠΙΣΤΩΣΕΩΝ", "799": "ΘΕΡΑΠΑΙΝΙΔΕΣ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ", "800": "ΕΠΙΤΡΟΠΗ ΑΤΟΜΙΚΗΣ ΕΝΕΡΓΕΙΑΣ", "801": "ΚΑΝΟΝΙΣΜΟΣ ΑΣΤΥΝΟΜΙΑΣ ΠΟΛΕΩΝ", "802": "ΦΥΛΛΑ ΠΟΙΟΤΗΤΑΣ ΥΠΑΞΙΩΜΑΤΙΚΩΝ Π.Ν", "803": "ΕΠΙΘΕΩΡΗΣΕΙΣ ΚΤΗΝΙΑΤΡΙΚΗΣ", "804": "ΜΕΡΙΚΗ ΑΠΑΣΧΟΛΗΣΗ - ΦΑΣΟΝ - ΤΗΛΕΡΓΑΣΙΑ ΚΑΤ’ ΟΙΚΟΝ ΑΠΑΣΧΟΛΗΣΗ", "805": "ΗΛΕΚΤΡΙΚΗ ΕΤΑΙΡΕΙΑ ΑΘΗΝΩΝ - ΠΕΙΡΑΙΩΣ", "806": "ΠΡΟΚΑΤΑΣΚΕΥΑΣΜΕΝΑΙ ΟΙΚΙΑΙ", "807": "ΤΡΑΠΕΖΑ ΤΗΣ ΕΛΛΑΔΟΣ", "808": "ΣΥΜΦΩΝΙΕΣ ΠΡΟΣΤΑΣΙΑΣ ΤΟΥ ΠΕΡΙΒΑΛΛΟΝΤΟΣ", "809": "ΛΙΓΝΙΤΗΣ", "810": "ΤΑΜΕΙΟ ΕΠΑΓΓΕΛΜΑΤΙΚΗΣ ΑΣΦΑΛΙΣΗΣ ΠΡΟΣΩΠΙΚΟΥ ΕΛΤΑ", "811": "ΜΕΛΕΤΕΣ ΤΕΧΝΙΚΩΝ ΕΡΓΩΝ", "812": "ΠΛΗΡΩΜΑΤΑ ΑΕΡΟΣΚΑΦΩΝ", "813": "ΕΞΑΓΩΓΗ ΣΤΑΦΙΔΑΣ", "814": "ΤΑΜΕΙΟΝ ΠΡΟΝΟΙΑΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ", "815": "ΔΙΑΧΕΙΡΙΣΗ ΠΕΡΙΟΥΣΙΑΣ", "816": "ΟΡΓΑΝΙΚΟΙ ΝΟΜΟΙ", "817": "ΥΠΗΡΕΣΙΕΣ ΑΙΜΟΔΟΣΙΑΣ", "818": "ΣΩΜΑΤΕΙΑ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ", "819": "ΠΕΖΟΔΡΟΜΙΑ", "820": "ΔΙΑΘΕΣΗ ΑΠΟΡΡΙΜΜΑΤΩΝ", "821": "ΤΡΟΧΙΟΔΡΟΜΟΙ ΘΕΣΣΑΛΟΝΙΚΗΣ", "822": "ΓΕΝΙΚΗ ΔΙΕΥΘΥΝΣΗ ΔΗΜΟΣΙΟΥ ΛΟΓΙΣΤΙΚΟΥ", "823": "ΡΥΜΟΥΛΚΑ - ΛΑΝΤΖΕΣ", "824": "ΠΕΤΡΕΛΑΙΟΕΙΔΗ", "825": "ΓΕΝΙΚΑ ΑΡΧΕΙΑ ΤΟΥ ΚΡΑΤΟΥΣ", "826": "ΚΑΤΑΣΤΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ Ο.Τ.Ε. - ΣΧΕΣΕΙΣ Ο.Τ.Ε. ΜΕ ΑΛΛΟΥΣ ΠΑΡΟΧΟΥΣ", "827": "ΥΠΗΡΕΣΙΑ ΑΥΤΟΚΙΝΗΤΩΝ", "828": "ΑΚΑΔΗΜΙΑ ΑΘΗΝΩΝ", "829": "ΜΟΝΟΠΩΛΙΟ ΖΑΧΑΡΙΝΗΣ", "830": "ΟΙΚΙΣΤΙΚΕΣ ΠΕΡΙΟΧΕΣ", "831": "ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΤΗΣ ΑΛΙΕΙΑΣ", "832": "ΔΙΑΦΟΡΕΣ ΕΚΤΑΚΤΕΣ ΦΟΡΟΛΟΓΙΕΣ", "833": "ΒΙΒΛΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ", "834": "ΕΡΓΑΤΙΚΑ ΑΤΥΧΗΜΑΤΑ", "835": "ΝΟΣΗΛΕΥΤΕΣ", "836": "ΣΥΝΔΙΚΑΛΙΣΤΙΚΕΣ ΕΛΕΥΘΕΡΙΕΣ", "837": "ΕΘΝΙΚΟ ΣΥΜΒΟΥΛΙΟ ΕΝΕΡΓΕΙΑΣ", "838": "ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΕΡΓΑΤΟΤΕΧΝΙΤΩΝ ΥΑΛΟΥΡΓΩΝ", "839": "ΑΓΩΓΕΣ ΑΣΦΑΛΙΣΤΡΩΝ", "840": "ΣΩΜΑΤΕΜΠΟΡΙΑ ΓΥΝΑΙΚΩΝ", "841": "ΑΤΕΛΕΙΕΣ ΕΡΓΩΝ ΑΜΥΝΤΙΚΟΥ ΠΡΟΓΡΑΜΜΑΤΟΣ", "842": "ΤΕΧΝΙΚΗ ΕΚΠΑΙΔΕΥΣΗ ΑΞΙΩΜΑΤΙΚΩΝ ΣΕ ΑΝΩΤΑΤΕΣ ΣΧΟΛΕΣ", "843": "ΔΙΚΑΙΩΜΑΤΑ ΚΗΡΥΚΩΝ ΚΛΠ", "844": "ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΤΑΜΕΙΟΥ ΝΟΜΙΚΩΝ", "845": "ΝΑΥΤΕΣ ΚΑΙ ΛΙΜΕΝΟΦΥΛΑΚΕΣ", "846": "ΠΑΝΕΠΙΣΤΗΜΙΑΚΗ ΣΧΟΛΗ ΑΓΡΙΝΙΟΥ", "847": "ΠΟΛΥΤΕΧΝΙΚΗ ΣΧΟΛΗ", "848": "ΜΕΙΩΣΗ ΕΙΣΦΟΡΩΝ", "849": "ΚΕΝΤΡΑ ΛΗΨΕΩΣ ΤΙΜΩΝ ΣΦΑΓΕΙΩΝ", "850": "ΑΠΟΔΗΜΙΑ ΣΤΡΑΤΕΥΣΙΜΩΝ", "851": "ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΝΟΙΑΣ ΚΑΙ ΚΟΙΝΗΣ ΔΙΑΝΟΜΗΣ ΠΩΛΗΤΩΝ ΒΕΝΖΙΝΗΣ ΑΘΗΝΩΝ - ΠΕΙΡΑΙΩΣ ΚΑΙ ΠΕΡΙΧΩΡΩΝ", "852": "ΙΑΤΡΟΦΑΡΜΑΚΕΥΤΙΚΗ ΠΕΡΙΘΑΛΨΗ", "853": "ΝΟΣΗΛΕΥΤΙΚΑ ΙΔΡΥΜΑΤΑ", "854": "ΓΕΝΙΚΑ ΠΕΡΙ ΜΟΥΣΕΙΩΝ", "855": "ΑΣΦΑΛΕΙΑ ΟΧΥΡΩΝ ΘΕΣΕΩΝ", "856": "ΓΕΩΡΓΙΚΑ ΜΗΧΑΝΗΜΑΤΑ", "857": "ΤΑΜΕΙΑ ΣΥΝΕΡΓΑΣΙΑΣ", "858": "ΙΔΙΩΤΙΚΕΣ ΚΛΙΝΙΚΕΣ ΚΑΙ ΕΡΓΑΣΤΗΡΙΑ", "859": "ΥΓΕΙΟΝΟΜΙΚΗ ΕΞΕΤΑΣΗ ΙΠΤΑΜΕΝΩΝ", "860": "ΔΙΑΦΟΡΕΣ ΑΕΡΟΠΟΡΙΚΕΣ ΣΧΟΛΕΣ", "861": "ΓΥΝΑΙΚΕΣ ΝΟΣΟΚΟΜΟΙ", "862": "ΦΟΙΤΗΣΗ, ΒΑΘΜΟΛΟΓΙΑ, ΕΞΕΤΑΣΕΙΣ ΚΛΠ. Α.Σ.Κ.Τ", "863": "ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΔΙΑΦΟΡΟΙ", "864": "ΟΡΓΑΝΙΣΜΟΣ ΥΠΟΥΡΓΕΙΟΥ ΓΕΩΡΓΙΑΣ", "865": "ΚΩΔΙΚΟΠΟΙΗΣΗ ΤΗΣ ΝΟΜΟΘΕΣΙΑΣ", "866": "ΜΕΤΑ ΤΗΣ ΓΑΛΛΙΑΣ", "867": "ΓΕΩΓΡΑΦΙΚΗ ΥΠΗΡΕΣΙΑ ΣΤΡΑΤΟΥ", "868": "ΕΙΔΗ ΠΑΡΑΔΙΔΟΜΕΝΑ ΣΤΗΝ ΕΛΕΥΘΕΡΗ ΧΡΗΣΗ", "869": "ΜΟΝΟΠΩΛΙΟ ΣΠΙΡΤΩΝ", "870": "ΚΑΤΑΣΤΑΤΙΚΟΝ Τ.Α.Κ.Ε", "871": "ΕΠΙΚΟΥΡΙΚΟ ΤΑΜΕΙΟ ΥΠΑΛΛΗΛΩΝ ΑΣΤΥΝΟΜΙΑΣ ΠΟΛΕΩΝ (Ε.Τ.Υ.Α.Π.)", "872": "ΜΙΣΘΟΔΟΣΙΑ ΙΕΡΕΩΝ – ΕΝΟΡΙΑΚΗ ΕΙΣΦΟΡΑ", "873": "ΥΓΕΙΟΝΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ", "874": "ΝΟΜΟΣ ΠΕΡΙ ΚΤΗΜΑΤΙΚΩΝ ΤΡΑΠΕΖΩΝ", "875": "ΔΙΕΘΝΗΣ ΣΥΜΒΑΣΗ ΠΕΡΙ ΥΔΡΑΥΛΙΚΩΝ ΔΥΝΑΜΕΩΝ", "876": "ΑΝΑΠΗΡΟΙ ΑΞΙΩΜΑΤΙΚΟΙ ΚΑΙ ΟΠΛΙΤΕΣ ΕΙΡΗΝΙΚΗΣ ΠΕΡΙΟΔΟΥ", "877": "ΠΟΙΝΙΚΗ ΚΑΙ ΠΕΙΘΑΡΧΙΚΗ ΔΩΣΙΔΙΚΙΑ Λ.Σ", "878": "ΔΑΣΙΚΟ ΠΡΟΣΩΠΙΚΟ", "879": "ΑΟΠΛΗ ΘΗΤΕΙΑ-ΑΝΤΙΡΡΗΣΙΕΣ ΣΥΝΕΙΔΗΣΗΣ", "880": "ΝΕΟΙ ΠΡΟΣΦΥΓΕΣ", "881": "ΤΕΧΝΙΚΕΣ ΥΠΗΡΕΣΙΕΣ ΣΤΡΑΤΟΥ", "882": "ΜΕΤΟΧΙΚΟ ΤΑΜΕΙΟ ΠΟΛΙΤΙΚΩΝ ΥΠΑΛΛΗΛΩΝ", "883": "ΠΡΟΣΩΠΙΚΟ ΙΔΙΩΤΙΚΟΥ ΔΙΚΑΙΟΥ", "884": "ΚΩΔΙΚΑΣ ΑΓΡΟΤΙΚΗΣ ΑΣΦΑΛΕΙΑΣ", "885": "ΟΡΓΑΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΑΠΟΣΤΟΛΙΚΗΣ ΔΙΑΚΟΝΙΑΣ", "886": "ΥΠΟΥΡΓΕΙΟ ΑΙΓΑΙΟΥ", "887": "ΓΑΜΟΙ ΔΩΔΕΚΑΝΗΣΟΥ", "888": "ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΚΡΕΟΠΩΛΕΙΩΝ", "889": "ΚΩΔΙΚΑΣ ΤΕΛΩΝ ΧΑΡΤΟΣΗΜΟΥ", "890": "ΔΕΛΤΙΟ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ", "891": "ΑΡΜΟΔΙΟΤΗΤΑ ΝΟΜΑΡΧΗ ΣΕ ΕΡΓΑΤΙΚΑ ΖΗΤΗΜΑΤΑ", "892": "ΤΡΟΦΟΔΟΣΙΑ Π. ΝΑΥΤΙΚΟΥ", "893": "ΣΥΜΦΩΝΙΑ ΠΕΡΙ ΔΙΠΛΩΜΑΤΙΚΩΝ ΣΧΕΣΕΩΝ", "894": "ΕΦΕΔΡΟΙ ΚΑΙ ΕΠΙΚΟΥΡΟΙ ΑΞΙΩΜΑΤΙΚΟΙ Π.Ν", "895": "ΤΟΥΡΙΣΤΙΚΕΣ ΕΠΙΧΕΙΡΗΣΕΙΣ", "896": "ΔΙΕΘΝΕΣ ΠΟΙΝΙΚΟ ΔΙΚΑΣΤΗΡΙΟ", "897": "ΔΙΟΙΚΗΤΙΚΕΣ ΠΡΑΞΕΙΣ", "898": "ΝΟΣΟΚΟΜΕΙΑ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ", "899": "ΣΥΜΒΟΥΛΙΟ ΧΑΛΥΒΑ", "900": "ΤΕΜΑΧΙΣΜΟΣ ΚΡΕΑΤΩΝ", "901": "ΕΛΕΓΧΟΣ ΚΑΤΟΧΗΣ ΟΠΛΩΝ", "902": "ΑΝΑΠΡΟΣΑΡΜΟΓΕΣ ΤΗΣ ΔΡΑΧΜΗΣ", "903": "ΕΦΟΔΙΑΣΜΟΣ ΠΛΟΙΩΝ", "904": "ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΙΟΝΙΩΝ ΝΗΣΩΝ", "905": "ΔΗΜΟΣΙΑ ΕΠΙΧΕΙΡΗΣΗ ΚΙΝΗΤΩΝ ΑΞΙΩΝ ΑΝΩΝΥΜΗ ΕΤΑΙΡΕΙΑ (Δ.Ε.Κ.Α. Α.Ε.)", "906": "ΕΤΑΙΡΕΙΑ – ΕΥΡΩΠΑΙΚΟΣ ΟΜΙΛΟΣ", "907": "ΔΙΕΥΘΥΝΣΗ ΑΛΙΕΙΑΣ", "908": "ΕΠΙΜΕΛΗΤΗΡΙΟ ΤΟΥΡΙΣΤΙΚΩΝ ΚΑΤΑΣΤΗΜΑΤΩΝ", "909": "ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΕΛΑΙΟΛΑΔΟΥ", "910": "ΠΤΗΤΙΚΗ ΙΚΑΝΟΤΗΤΑ", "911": "ΕΚΚΛΗΣΙΑΣΤΙΚΕΣ ΣΧΟΛΕΣ", "912": "ΔΙΑΤΙΜΗΣΗ ΙΑΤΡΙΚΩΝ ΠΡΑΞΕΩΝ", "913": "ΑΔΙΚΗΜΑΤΑ ΤΥΠΟΥ", "914": "ΕΞΑΝΘΗΜΑΤΙΚΟΣ ΤΥΦΟΣ", "915": "ΟΙΚΟΣ ΝΑΥΤΟΥ", "916": "ΜΑΣΤΙΧΑ", "917": "ΣΥΛΛΟΓΟΙ ΚΑΙ ΟΜΟΣΠΟΝΔΙΑ ΔΙΚΑΣΤΙΚΩΝ ΕΠΙΜΕΛΗΤΩΝ", "918": "ΕΜΠΟΡΙΚΑ ΚΑΙ ΒΙΟΜΗΧΑΝΙΚΑ ΣΗΜΑΤΑ", "919": "ΟΡΓΑΝΩΣΗ ΚΑΙ ΛΕΙΤΟΥΡΓΙΑ ΑΝΩΤΑΤΩΝ ΕΚΠΑΙΔΕΥΤΙΚΩΝ ΙΔΡΥΜΑΤΩΝ", "920": "ΥΓΕΙΟΝΟΜΙΚΗ ΑΠΟΘΗΚΗ", "921": "ΓΕΝ. ΔΙΕΥΘΥΝΣΗ ΠΟΙΝΙΚΗΣ ΔΙΚΑΙΟΣΥΝΗΣ", "922": "ΑΕΡΟΠΟΡΙΚΟ ΔΙΚΑΙΟ", "923": "ΜΕΛΕΤΗ ΚΑΙ ΕΠΙΒΛΕΨΗ ΜΗΧΑΝΟΛΟΓΙΚΩΝ ΕΓΚΑΤΑΣΤΑΣΕΩΝ", "924": "ΑΘΕΜΙΤΟΣ ΑΝΤΑΓΩΝΙΣΜΟΣ", "925": "ΠΟΛΕΜΙΚΗ ΔΙΑΘΕΣΙΜΟΤΗΤΑ", "926": "ΛΕΣΧΕΣ ΚΑΙ ΠΡΑΤΗΡΙΑ ΕΛ.ΑΣ", "927": "ΚΑΥΣΙΜΑ", "928": "ΥΓΕΙΟΝΟΜΙΚΑ ΜΕΤΡΑ", "929": "ΚΑΤΑΣΤΑΣΗ ΑΞΙΩΜΑΤΙΚΩΝ", "930": "ΕΙΣΠΡΑΞΗ ΠΟΡΩΝ ΤΑΜΕΙΟΥ ΝΟΜΙΚΩΝ", "931": "ΔΙΟΙΚΗΤΙΚΗ ΡΥΘΜΙΣΗ ΑΠΟΔΟΧΩΝ ΚΑΙ ΟΡΩΝ ΕΡΓΑΣΙΑΣ", "932": "ΓΕΝΙΚΗ ΔΙΕΥΘΥΝΣΗ ΤΑΧΥΔΡΟΜΕΙΩΝ", "933": "ΟΡΓΑΝΙΣΜΟΣ ΛΙΜΕΝΟΣ ΘΕΣΣΑΛΟΝΙΚΗΣ ΑΝΩΝΥΜΗ ΕΤΑΙΡΙΑ (Ο.Λ.Θ. Α.Ε.)", "934": "ΣΧΟΛΗ ΕΘΝΙΚΗΣ ΑΜΥΝΑΣ", "935": "ΚΑΘΟΛΙΚΟΙ", "936": "ΕΚΚΛΗΣΙΑΣΤΙΚΑ ΜΟΥΣΕΙΑ", "937": "ΔΙΕΘΝΗΣ ΕΚΘΕΣΗ ΘΕΣΣΑΛΟΝΙΚΗΣ Α.Ε. – XELEXPO Α.Ε", "938": "ΕΥΕΡΓΕΤΙΚΟΣ ΥΠΟΛΟΓΙΣΜΟΣ ΗΜΕΡΩΝ ΕΡΓΑΣΙΑΣ", "939": "ΕΙΣΦΟΡΑ ΕΠΑΓΓΕΛΜΑΤΙΚΟΥ ΚΙΝΔΥΝΟΥ", "940": "ΑΠΑΛΛΟΤΡΙΩΣΕΙΣ ΓΙΑ ΤΟΥΡΙΣΤΙΚΟΥΣ ΣΚΟΠΟΥΣ", "941": "ΑΠΟΛΥΜΑΝΤΗΡΙΑ", "942": "ΕΚΠΟΙΗΣΗ ΠΛΟΙΩΝ ΔΗΜΟΣΙΟΥ", "943": "ΔΙΑΚΟΝΟΙ", "944": "ΥΔΡΕΥΣΗ ΔΙΑΦΟΡΩΝ ΠΟΛΕΩΝ", "945": "ΠΡΩΤΕΣ ΥΛΕΣ ΚΛΩΣΤΟΥΦΑΝΤΟΥΡΓΙΑΣ", "946": "ΨΕΥΔΗΣ ΒΕΒΑΙΩΣΗ ΕΝΩΠΙΟΝ ΑΡΧΗΣ", "947": "ΑΠΩΛΕΣΘΕΙΣΕΣ ΚΑΙ ΠΑΡΑΓΡΑΦΕΙΣΕΣ ΑΞΙΕΣ", "948": "ΦΟΙΤΗΤΙΚΗ ΛΕΣΧΗ", "949": "ΤΑΜΕΙΟ ΥΓΕΙΑΣ ΤΑΧΥΔΡΟΜΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ", "950": "ΕΛΕΓΧΟΣ ΔΕΝΔΡΩΔΩΝ ΚΑΛΛΙΕΡΓΕΙΩΝ", "951": "ΚΑΤΑΠΟΛΕΜΗΣΗ ΑΝΑΛΦΑΒΗΤΙΣΜΟΥΛΑΙΚΗ ΕΠΙΜΟΡΦΩΣΗ", "952": "ΕΠΙΚΟΥΡΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΜΕΤΑΦΟΡΩΝ", "953": "ΦΟΙΤΗΤΙΚΕΣ ΛΕΣΧΕΣ", "954": "ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΗΝ ΠΡΟΣΤΑΣΙΑ ΤΩΝ ΕΡΓΑΖΟΜΕΝΩΝ ΓΥΝΑΙΚΩΝ", "955": "ΛΗΣΤΕΙΑ", "956": "ΑΓΩΓΕΣ ΑΠΟ ΣΥΝΑΛΛΑΓΜΑΤΙΚΕΣ ΚΑΙ ΓΡΑΜΜΑΤΙΑ", "957": "ΕΚΜΙΣΘΩΣΗ ΔΗΜΟΣΙΩΝ ΜΕΤΑΛΛΕΙΩΝ", "958": "ΚΟΛΥΜΒΗΤΙΚΕΣ ΔΕΞΑΜΕΝΕΣ", "959": "ΕΡΑΝΟΙ ΚΑΙ ΛΑΧΕΙΟΦΟΡΟΙ Η ΦΙΛΑΝΘΡΩΠΙΚΕΣ ΑΓΟΡΕΣ", "960": "ΠΡΟΣΤΑΣΙΑ ΕΠΙΒΑΤΗΓΟΥ ΝΑΥΤΙΛΙΑΣ", "961": "ΓΕΝΙΚΟΙ ΝΟΜΟΙ ΠΕΡΙ ΞΕΝΟΔΟΧΕΙΩΝ-ΕΠΙΠΛ. ΔΩΜΑΤΙΩΝ ΚΛΠ", "962": "ΙΕΡΑΡΧΙΑ ΚΑΙ ΠΡΟΑΓΩΓΕΣ ΑΞΙΩΜΑΤΙΚΩΝ", "963": "ΣΥΝΕΡΓΑΤΕΣ (ΓΡΑΜΜΑΤΕΙΣ) ΒΟΥΛΕΥΤΩΝ-ΕΥΡΩΒΟΥΛΕΥΤΩΝ", "964": "ΣΧΟΛΗ ΙΚΑΡΩΝ", "965": "ΟΡΓΑΝΙΣΜΟΣ ΣΙΔΗΡΟΔΡΟΜΩΝ ΕΛΛΑΔΟΣ (Ο.Σ.Ε.)ΣΙΔΗΡΟΔΡΟΜΙΚΕΣ ΕΠΙΧΕΙΡΗΣΕΙΣ", "966": "ΥΓΕΙΟΝΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ ΚΑΤΑ ΘΑΛΑΣΣΑΝ ΚΑΙ ΚΑΤΑ ΞΗΡΑΝ", "967": "ΚΑΝΟΝΙΣΜΟΣ ΜΕΤΑΛΛΕΥΤΙΚΩΝ ΕΡΓΑΣΙΩΝ", "968": "ΑΠΟΦΥΓΗ ΣΥΓΚΡΟΥΣΕΩΝ", "969": "ΤΟΜΑΤΟΠΑΡΑΓΩΓΗ", "970": "ΔΙΑΦΟΡΕΣ ΔΙΑΤΑΞΕΙΣ ΓΙΑ ΤΑ ΑΥΤΟΚΙΝΗΤΑ", "971": "ΚΑΤΑΤΑΞΗ ΓΥΝΑΙΚΩΝ ΣΤΟ Λ.Σ", "972": "ΕΤΑΙΡΕΙΕΣ ΔΙΟΙΚΟΥΜΕΝΕΣ ΑΠΟ ΤΟΥΣ ΠΙΣΤΩΤΕΣ", "973": "ΒΑΛΚΑΝΙΚΕΣ ΣΥΜΦΩΝΙΕΣ", "974": "ΜΕΤΑΦΟΡΑ ΣΥΝΤΕΛΕΣΤΗ ΔΟΜΗΣΗΣ", "975": "ΠΡΟΜΗΘΕΥΤΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ Π.Ν", "976": "ΠΡΟΣΩΠΙΚΟ ΦΑΡΜΑΚΕΙΩΝ", "977": "ΔΙΔΑΣΚΟΜΕΝΑ ΜΑΘΗΜΑΤΑ", "978": "ΕΚΛΟΓΗ ΒΟΥΛΕΥΤΩΝ - ΕΥΡΩΒΟΥΛΕΥΤΩΝ", "979": "ΦΑΡΜΑΚΟΠΟΙΟΙ", "980": "ΣΤΡΑΤΙΩΤΙΚΑ ΠΡΑΤΗΡΙΑ", "981": "ΚΑΡΚΙΝΟΣ", "982": "ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ 
\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0391.\u0395. \u039f\u0399\u039d\u039f\u03a0\u039f\u0399\u0399\u0391\u03a3, \u0396\u03a5\u0398\u039f\u03a0\u039f\u0399\u0399\u0391\u03a3 \u039a\u0391\u0399 \u039f\u0399\u039d\u039f\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u039f\u03a0\u039f\u0399\u0399\u0391\u03a3", "983": "\u03a7\u0395\u0399\u03a1\u0399\u03a3\u03a4\u0395\u03a3 \u0391\u03a3\u03a5\u03a1\u039c\u0391\u03a4\u039f\u03a5", "984": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397 \u0395\u03a0\u0399\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u03a3\u0397-\u03a0\u0391\u039b\u039b\u0391\u0399\u039a\u0397 \u0391\u039c\u03a5\u039d\u0391", "985": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399 \u0395\u0393\u0393\u0395\u0399\u03a9\u039d \u0392\u0395\u039b\u03a4\u0399\u03a9\u03a3\u0395\u03a9\u039d", "986": "\u039f\u039c\u039f\u0393\u0395\u039d\u0395\u0399\u03a3 \u03a0\u0391\u039b\u039b\u0399\u039d\u039f\u03a3\u03a4\u039f\u03a5\u039d\u03a4\u0395\u03a3", "987": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u039f\u03a3 \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u039f\u03a3 \u03a7\u0391\u03a1\u03a4\u0397\u03a3", "988": "\u039f\u03a1\u0393\u0391\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "989": "\u0395\u039e\u0391\u0399\u03a1\u0395\u03a3\u0397 \u0394\u0399\u039a\u0391\u03a3\u03a4\u03a9\u039d", "990": "\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0395\u0399\u03a3 \u2013 \u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0395\u0399\u03a3 \u03a3\u03a4\u039f\u0399\u03a7\u0395\u0399\u03a9\u0394\u039f\u03a5\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3", "991": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0395\u03a9\u03a3 \u039a\u0391\u0399 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "992": "\u03a4\u0391\u039c\u0395\u0399\u039f 
\u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0391\u03a5\u03a4\u039f\u039d\u039f\u039c\u039f\u03a5 \u03a3\u03a4\u0391\u03a6\u0399\u0394\u0399\u039a\u039f\u03a5 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a5 (\u03a4.\u0391.\u03a0.\u0391.\u03a3.\u039f)", "993": "\u03a4\u0391\u039c\u0395\u0399\u039f\u039d \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u039f\u03a1\u0398\u039f\u0394\u039f\u039e\u039f\u03a5 \u0395\u03a6\u0397\u039c\u0395\u03a1\u0399\u0391\u039a\u039f\u03a5", "994": "\u03a3\u03a7\u039f\u039b\u0399\u039a\u0397 \u03a3\u03a9\u039c\u0391\u03a4\u0399\u039a\u0397 \u0391\u0393\u03a9\u0393\u0397", "995": "\u039a\u0395\u039d\u03a4\u03a1\u039f \u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391\u03a3", "996": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0399\u0394\u0399\u039f\u039a\u03a4\u0397\u03a4\u03a9\u039d", "997": "\u0392\u039f\u03a3\u039a\u0397 \u0395\u039d\u03a4\u039f\u03a3 \u0394\u0391\u03a3\u03a9\u039d", "998": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0395\u039e\u0391\u0393\u039f\u039c\u0395\u039d\u03a9\u039d \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u03a9\u039d \u03a0\u03a1\u039f\u0399\u039f\u039d\u03a4\u03a9\u039d", "999": "\u03a0\u0391\u0399\u0394\u0391\u0393\u03a9\u0393\u0399\u039a\u0391 \u03a4\u039c\u0397\u039c\u0391\u03a4\u0391 \u0391.\u0395.\u0399", "1000": "\u03a5\u03a0\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0395\u03a3 \u039a\u039b\u0397\u03a1\u039f\u0394\u039f\u03a4\u0397\u039c\u0391\u03a4\u039f\u03a3 \u03a0. 
\u0392\u0391\u03a3\u03a3\u0391\u039d\u0397", "1001": "\u0391\u03a4\u03a5\u03a7\u0397\u039c\u0391 \u0391\u03a0\u039f \u0394\u039f\u039b\u039f \u03a4\u039f\u03a5 \u0395\u03a1\u0393\u039f\u0394\u039f\u03a4\u0397", "1002": "\u0392\u03a5\u0396\u0391\u039d\u03a4\u0399\u039d\u039f \u039a\u0391\u0399 \u03a7\u03a1\u0399\u03a3\u03a4\u0399\u0391\u039d\u0399\u039a\u039f \u039c\u039f\u03a5\u03a3\u0395\u0399\u039f", "1003": "\u0395\u0399\u03a1\u0397\u039d\u0395\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u0391\u03a0\u039f\u03a3\u03a4\u039f\u039b\u0395\u03a3", "1004": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u03a3 \u0384\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0395\u0399\u03a3\u0395\u03a1\u03a7\u039f\u039c\u0395\u039d\u03a9\u039d", "1005": "\u039f\u03a1\u039a\u039f\u03a3 \u03a4\u039f\u03a5 \u03a0\u039f\u039b\u0399\u03a4\u0397", "1006": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 \u03a3\u03a0\u039f\u03a5\u0394\u0391\u03a3\u03a4\u03a9\u039d", "1007": "\u03a0\u0391\u03a1\u0391\u03a7\u0391\u03a1\u0391\u039e\u0397 \u039a\u0391\u0399 \u039a\u0399\u0392\u0394\u0397\u039b\u0399\u0391", "1008": "\u0394\u0399\u0391\u039c\u0395\u03a1\u0399\u03a3\u039c\u0391\u03a4\u0391 \u03a0\u039b\u039f\u0399\u0391\u03a1\u03a7\u03a9\u039d \u039a\u0391\u0399 \u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u03a9\u039d", "1009": "\u039a\u039b\u0391\u0394\u039f\u03a3 \u0391\u03a1\u03a9\u0393\u0397\u03a3 \u03a4.\u0391.\u039a.\u0395", "1010": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0392\u0391\u039c\u0392\u0391\u039a\u039f\u03a3", "1011": "\u039d\u039f\u03a3\u0397\u039b\u0395\u0399\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d", "1012": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3", "1013": "\u03a0\u039f\u039b\u03a5\u0395\u0398\u039d\u0395\u0399\u03a3 
\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "1014": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f \u0391\u03a0\u039f\u039c\u0391\u03a7\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f", "1015": "\u03a5\u0393\u0399\u0395\u0399\u039d\u0397 \u0391\u03a1\u03a4\u039f\u03a0\u039f\u0399\u0395\u0399\u03a9\u039d", "1016": "\u039d\u039f\u039c\u0391\u03a1\u03a7\u0399\u0391\u039a\u0391 \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u0391", "1017": "\u039b\u0395\u03a3\u03a7\u0397 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0.\u039d", "1018": "\u039a\u0391\u03a4\u03a9\u03a4\u0395\u03a1\u039f \u0394\u0399\u0394\u0391\u039a\u03a4\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f", "1019": "\u0393\u0395\u039d\u0399\u039a\u0391 \u03a0\u0395\u03a1\u0399 \u039a\u03a5\u039a\u039b\u039f\u03a6\u039f\u03a1\u0399\u0391\u03a3 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "1020": "\u03a4\u0391\u039c\u0395\u0399\u039f \u039d\u039f\u03a3\u0397\u039b\u0395\u0399\u0391\u03a3 \u03a3\u03a0\u039f\u03a5\u0394\u0391\u03a3\u03a4\u03a9\u039d", "1021": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0391 \u039a\u0391\u0399 \u0392\u0399\u039f\u03a4\u0395\u03a7\u039d\u0399\u039a\u0391 \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u0391", "1022": "\u0391\u039a\u03a4\u039f\u03a0\u039b\u039f\u0399\u0391", "1023": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0391\u039b\u0399\u0395\u0399\u0391\u03a3", "1024": "\u039c\u0395 \u03a4\u0397 \u039d\u039f\u03a1\u0392\u0397\u0393\u0399\u0391", "1025": "\u0397\u0398\u0399\u039a\u0395\u03a3 \u0391\u039c\u039f\u0399\u0392\u0395\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 (\u0384\u0395\u039d\u039f\u03a0\u039b\u039f\u03a5-\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u039f\u03a5) 
\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u0391\u03a3 \u03a4\u0391\u039e\u0397\u03a3", "1026": "\u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u0391 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0397\u03a3 \u03a7\u03a1\u0397\u03a3\u0395\u03a9\u03a3", "1027": "\u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3", "1028": "\u03a1\u0391\u0394\u0399\u039f\u0397\u039b\u0395\u039a\u03a4\u03a1\u039f\u039b\u039f\u0393\u039f\u0399-\u03a1\u0391\u0394\u0399\u039f\u03a4\u0395\u03a7\u039d\u0399\u03a4\u0395\u03a3", "1029": "\u03a0\u03a1\u039f\u0393\u039d\u03a9\u03a3\u03a4\u0399\u039a\u0391 \u03a0\u039f\u0394\u039f\u03a3\u03a6\u0391\u0399\u03a1\u039f\u03a5", "1030": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u039a\u0391\u0399 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u03a4\u0397\u03a3 \u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397\u03a3 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391\u03a3 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u0391\u03a3 (\u03a4.\u03a3.\u03a0. 
\u2013 \u0391.\u03a4.\u0395.)", "1031": "\u03a5\u0394\u03a1\u0395\u03a5\u03a3\u0397 \u039b\u0395\u039a\u0391\u039d\u039f\u03a0\u0395\u0394\u0399\u039f\u03a5 \u0391\u0398\u0397\u039d\u03a9\u039d", "1032": "\u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391 \u039f\u03a6\u0398\u0391\u039b\u039c\u03a9\u039d", "1033": "\u0395\u0398\u039d\u0399\u039a\u039f \u039a\u0395\u039d\u03a4\u03a1\u039f \u03a7\u0391\u03a1\u03a4\u03a9\u039d \u039a\u0391\u0399 \u03a7\u0391\u03a1\u03a4\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397\u03a3 \u039a\u039b\u0397\u03a1\u039f\u039d\u039f\u039c\u0399\u0391\u03a3 - \u0395\u0398\u039d\u0399\u039a\u0397 \u03a7\u0391\u03a1\u03a4\u039f\u0398\u0397\u039a\u0397", "1034": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399 \u0391\u03a0\u039f\u03a6\u03a5\u0393\u0397\u03a3 \u03a3\u03a5\u0393\u039a\u03a1\u039f\u03a5\u03a3\u0395\u03a9\u039d", "1035": "\u0393\u03a1\u0391\u03a6\u0395\u0399\u039f \u0395\u0393\u039a\u039b\u0397\u039c\u0391\u03a4\u0399\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5", "1036": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039d\u0394\u0399\u039a\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0395\u0399\u03a3", "1037": "\u03a4\u0391\u03a5\u03a4\u039f\u03a4\u0397\u03a4\u0395\u03a3", "1038": "\u0394\u0391\u03a3\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399", "1039": "\u03a3\u03a5\u039c\u0392\u039f\u039b\u0391\u0399\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u0399\u03a9\u039c\u0391\u03a4\u0391", "1040": "\u0399\u0394\u0399\u039f\u039a\u03a4\u0397\u03a3\u0399\u0391 \u039a\u0391\u03a4\u2019 \u039f\u03a1\u039f\u03a6\u039f", "1041": "\u03a3\u03a7\u039f\u039b\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391", "1042": "\u0391\u03a1\u03a7\u0395\u0399\u039f\u03a6\u03a5\u039b\u0391\u039a\u0395\u0399\u0391 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0391", "1043": 
"\u0391\u03a0\u039f\u0396\u0397\u039c\u0399\u03a9\u03a3\u0397 \u0391\u039d\u03a4\u0391\u039b\u039b\u0391\u039e\u0399\u039c\u03a9\u039d", "1044": "\u03a3\u03a7\u039f\u039b\u0399\u039a\u0391 \u039a\u03a4\u0399\u03a1\u0399\u0391", "1045": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039f\u0399\u039a\u039f\u0394\u039f\u039c\u03a9\u039d", "1046": "\u03a0\u03a1\u039f\u03a4\u03a5\u03a0\u0391 \u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u0391", "1047": "\u03a0\u03a1\u03a9\u03a4\u0395\u03a3 \u03a5\u039b\u0395\u03a3 \u0392\u03a5\u03a1\u03a3\u039f\u0394\u0395\u03a8\u0399\u0391\u03a3 - \u0394\u0395\u03a1\u039c\u0391\u03a4\u0391", "1048": "\u03a3\u03a5\u039c\u0392\u0399\u0392\u0391\u03a3\u039c\u039f\u03a3 \u039a\u0391\u0399 \u0394\u0399\u0391\u0399\u03a4\u0397\u03a3\u0399\u0391", "1049": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0399\u039a\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "1050": "\u0395\u03a3\u039f\u0394\u0391 \u0394\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "1051": "\u03a3\u03a4\u0391\u0394\u0399\u0391 \u039a\u0391\u0399 \u0393\u03a5\u039c\u039d\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "1052": "\u039a\u039f\u0399\u039d\u0397 \u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397 \u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397", "1053": "\u0391\u03a4\u039f\u039c\u0391 \u039c\u0395 \u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0391\u039d\u0391\u0393\u039a\u0395\u03a3 - \u03a5\u03a0\u0395\u03a1\u0397\u039b\u0399\u039a\u0395\u03a3 - \u03a7\u03a1\u039f\u039d\u0399\u0391 \u03a0\u0391\u03a3\u03a7\u039f\u039d\u03a4\u0395\u03a3", "1054": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "1055": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 
\u0393\u0399\u0391 \u03a4\u0397\u039d \u0391\u03a0\u039f\u03a6\u03a5\u0393\u0397 \u0394\u0399\u03a0\u039b\u0397\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391\u03a3", "1056": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0392\u0391\u039c\u0392\u0391\u039a\u039f\u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3", "1057": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u0397 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u0391", "1058": "\u039d\u039f\u03a3\u039f\u039a\u039f\u039c\u0395\u0399\u0391\u039a\u0397 \u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u039c\u0395\u039d\u03a9\u039d \u039f.\u0393.\u0391", "1059": "\u03a6\u03a5\u03a3\u0399\u039a\u0391 \u039f\u03a1\u0393\u0391\u039d\u0399\u039a\u0391 \u039b\u0399\u03a0\u0391\u03a3\u039c\u0391\u03a4\u0391", "1060": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d \u0395\u03a3\u03a4\u0399\u0391\u03a4\u039f\u03a1\u0399\u03a9\u039d, \u0396\u0391\u03a7\u0391\u03a1\u039f\u03a0\u039b\u0391\u03a3\u03a4\u0395\u0399\u03a9\u039d, \u039a\u0391\u03a6\u0395\u039d\u0395\u0399\u03a9\u039d \u039a.\u039b\u03a0. 
(\u03a4.\u0395.\u0391.\u039c.\u0395.\u0396.)", "1061": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0391\u0399 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u0399", "1062": "\u03a3\u03a5\u0393\u039a\u0395\u039d\u03a4\u03a1\u03a9\u03a3\u0397 \u03a0\u03a1\u039f\u0399\u039f\u039d\u03a4\u03a9\u039d", "1063": "\u03a5\u0394\u03a1\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "1064": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0395\u039b\u0395\u0393\u03a7\u039f\u03a5 \u039a\u0391\u03a4\u0391\u03a3\u039a\u0395\u03a5\u0397\u03a3 \u0391\u039e\u0399\u03a9\u039d \u03a4\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5", "1065": "\u0395\u03a0\u0399\u03a3\u039a\u039f\u03a0\u0399\u039a\u0391 \u0393\u03a1\u0391\u03a6\u0395\u0399\u0391", "1066": "\u0392\u0395\u039b\u0393\u0399\u039f, \u0392\u0395\u039d\u0395\u0396\u039f\u03a5\u0395\u039b\u0391 \u039a.\u039b\u03a0", "1067": "\u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u039f\u03a3 \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "1068": "\u03a0\u03a1\u039f\u0394\u039f\u03a3\u0399\u0391", "1069": "\u039c\u0399\u03a3\u0398\u039f\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "1070": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "1071": "\u0391\u039d\u0391\u0396\u0397\u03a4\u0397\u03a3\u0397 \u039a\u0391\u0399 \u0394\u0399\u0391\u03a6\u03a5\u039b\u0391\u039e\u0397 \u0391\u03a1\u03a7\u0391\u0399\u039f\u03a4\u0397\u03a4\u03a9\u039d", "1072": "\u0391\u0394\u0395\u0399\u0395\u03a3 \u039b\u0399\u0391\u039d\u0399\u039a\u0397\u03a3 \u03a0\u03a9\u039b\u0397\u03a3\u0397\u03a3 \u03a4\u03a3\u0399\u0393\u0391\u03a1\u03a9\u039d \u039a\u0391\u0399 \u0395\u0399\u0394\u03a9\u039d 
\u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u039f\u03a5", "1073": "\u0395\u03a0\u039f\u03a0\u03a4\u0399\u039a\u0391 \u039c\u0395\u03a3\u0391 \u0394\u0399\u0394\u0391\u03a3\u039a\u0391\u039b\u0399\u0391\u03a3", "1074": "\u0395\u039a\u039b\u039f\u0393\u039f\u0394\u0399\u039a\u0395\u0399\u0391", "1075": "\u039f.\u0393.\u0391 \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a4\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "1076": "\u0399\u039d\u03a3\u03a4\u0399\u03a4\u039f\u03a5\u03a4\u039f \u03a5\u0393\u0395\u0399\u0391\u03a3 \u03a4\u039f\u03a5 \u03a0\u0391\u0399\u0394\u0399\u039f\u03a5", "1077": "\u03a3\u03a7\u039f\u039b\u0397 \u0398\u0395\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d \u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f\u03a5 \u03a0\u0391\u03a4\u03a1\u03a9\u039d", "1078": "\u0395\u03a3\u03a0\u0395\u03a1\u0399\u0394\u039f\u0395\u0399\u0394\u0397-\u039f\u03a0\u03a9\u03a1\u039f\u039a\u0397\u03a0\u0395\u03a5\u03a4\u0399\u039a\u0391", "1079": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391\u03a4\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u039f\u039c\u0395\u039d\u03a9\u039d", "1080": "\u03a0\u03a1\u039f\u039b\u0397\u03a8\u0397 \u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u03a9\u039d \u0391\u03a4\u03a5\u03a7\u0397\u039c\u0391\u03a4\u03a9\u039d \u03a4\u03a9\u039d \u039d\u0391\u03a5\u03a4\u0399\u039a\u03a9\u039d", "1081": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0391\u03a0\u039f\u039c\u0391\u0393\u039d\u0397\u03a4\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u039b\u039f\u0399\u03a9\u039d", "1082": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u0394\u0399\u039a\u0391\u03a3\u0399\u0395\u03a3", "1083": "\u0393\u0395\u039d\u0399\u039a\u0397 \u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u03a4\u0397\u039b\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u03a9\u039d", "1084": 
"\u0395\u0398\u039d\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a0\u039b\u0397\u03a1\u039f\u03a6\u039f\u03a1\u0399\u03a9\u039d (\u0395.\u03a5.\u03a0.)", "1085": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d (T.E.A.M)", "1086": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u039a\u0391\u03a4\u0391 \u03a4\u0397\u03a3 \u0391\u039d\u0395\u03a1\u0393\u0399\u0391\u03a3 - \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u03a0\u0391\u03a3\u03a7\u039f\u039b\u0397\u03a3\u0397\u03a3 \u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u039f\u03a5 \u0394\u03a5\u039d\u0391\u039c\u0399\u039a\u039f\u03a5", "1087": "\u03a3\u03a9\u039c\u0391\u03a4\u0399\u039a\u0397 \u0399\u039a\u0391\u039d\u039f\u03a4\u0397\u03a4\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u039c\u0391\u03a4\u039f\u03a3", "1088": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a0. 
\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "1089": "\u0394\u0391\u03a3\u0399\u039a\u0397 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391", "1090": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u03a5\u03a0\u0395\u03a1 \u03a4\u0397\u03a3 \u039a\u03a4\u0397\u039d\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391\u03a3, \u039c\u0395\u039b\u0399\u03a3\u03a3\u039f\u039a\u039f\u039c\u0399\u0391\u03a3 \u039a.\u039b.\u03a0", "1091": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u0399\u03a9\u039c\u0391\u03a4\u0391 \u03a4\u03a9\u039d \u0393\u03a5\u039d\u0391\u0399\u039a\u03a9\u039d", "1092": "\u039c\u0395\u03a4\u0391\u0398\u0395\u03a3\u0395\u0399\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0399\u039a\u03a9\u039d", "1093": "\u0394\u0399\u0395\u0398\u039d\u0395\u03a3 \u039a\u0395\u039d\u03a4\u03a1\u039f \u03a5\u03a0\u039f\u039b\u039f\u0393\u0399\u03a3\u039c\u039f\u03a5", "1094": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u0394\u0391\u03a3\u03a9\u039d", "1095": "\u0394\u039f\u03a5\u039b\u0395\u0399\u0391", "1096": "\u039c\u0395 \u03a4\u0397 \u03a0\u039f\u039b\u03a9\u039d\u0399\u0391", "1097": "\u0391\u039d\u0391\u0394\u0399\u0391\u039d\u039f\u039c\u0397 \u039a\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d", "1098": "\u03a5\u03a0\u039f\u0391\u03a0\u0391\u03a3\u03a7\u039f\u039b\u039f\u03a5\u039c\u0395\u039d\u039f\u0399 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u039f\u0399", "1099": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399 \u03a0\u03a1\u03a9\u0397\u039d \u03a5.\u0392.\u0395.\u03a4. - \u0393.\u0393.\u0392. 
- \u0393.\u0393.\u0395.\u03a4", "1100": "\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u0391\u039a\u0397 \u0392\u0399\u0392\u039b\u0399\u039f\u0398\u0397\u039a\u0397 \u0391\u0398\u0397\u039d\u03a9\u039d", "1101": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4.\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0391\u03a3 \u0397 \u0395\u0398\u039d\u0399\u039a\u0397 (\u03a4.\u0391.\u03a0.\u0391.\u0395. \u0397 \u0395\u0398\u039d\u0399\u039a\u0397)", "1102": "\u03a4\u0395\u039b\u0397 \u03a3\u03a7\u039f\u039b\u0391\u0396\u039f\u03a5\u03a3\u03a9\u039d \u039a\u039b\u0397\u03a1\u039f\u039d\u039f\u039c\u0399\u03a9\u039d", "1103": "\u039e\u0395\u039d\u0395\u03a3 \u0393\u039b\u03a9\u03a3\u03a3\u0395\u03a3", "1104": "\u039a\u0391\u03a4\u0391\u03a3\u039a\u0397\u039d\u03a9\u03a3\u0395\u0399\u03a3 - \u03a0\u0391\u0399\u0394\u0399\u039a\u0395\u03a3 \u0395\u039e\u039f\u03a7\u0395\u03a3", "1105": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391 \u0391\u039d\u0397\u039b\u0399\u039a\u03a9\u039d", "1106": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0395\u03a9\u03a3 \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u03a9\u039d \u0391\u03a0\u039f\u03a6\u0391\u03a3\u0395\u03a9\u039d", "1107": "\u03a6\u039f\u03a1\u039f\u03a3 \u0395\u0399\u03a3\u039f\u0394\u0397\u039c\u0391\u03a4\u039f\u03a3 \u039d\u039f\u039c\u0399\u039a\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u03a9\u039d", "1108": "\u0398\u0395\u03a9\u03a1\u0397\u03a4\u0399\u039a\u0391 \u039a\u0391\u0399 \u0399\u03a3\u03a4\u039f\u03a1\u0399\u039a\u0391 \u039c\u0391\u0398\u0397\u039c\u0391\u03a4\u0391", "1109": "\u0391\u03a6\u03a1\u039f\u0394\u0399\u03a3\u0399\u0391", "1110": "\u03a6\u0391\u03a1\u039f\u0399", "1111": 
"\u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u039f \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391", "1112": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a4\u0399\u039a\u039f\u03a3 \u039d\u039f\u039c\u039f\u03a3 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "1113": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u03a3\u039a\u039f\u03a0\u0399\u039c\u039f\u03a4\u0397\u03a4\u0391\u03a3 \u0399\u0394\u03a1\u03a5\u03a3\u0395\u03a9\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u03a9\u039d", "1114": "\u0393\u03a5\u039c\u039d\u0391\u03a3\u0399\u0391 \u039a\u0391\u0399 \u039b\u03a5\u039a\u0395\u0399\u0391", "1115": "\u0391\u0395\u03a1\u039f\u039d\u0391\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u03a0\u039b\u0397\u03a1\u039f\u03a6\u039f\u03a1\u0399\u0395\u03a3", "1116": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u03a5\u03a0\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0.\u039d", "1117": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a7\u03a9\u03a1\u039f\u03a4\u0391\u039e\u0399\u0391\u03a3", "1118": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u0384\u0395\u03a1\u0393\u03a9\u039d", "1119": "\u039c\u0399\u03a3\u0398\u039f\u0394\u039f\u03a3\u0399\u0391 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u03a3\u0395 \u0395\u03a0\u0399\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u03a3\u0397", "1120": "\u039a\u039f\u0399\u039c\u0397\u03a4\u0397\u03a1\u0399\u0391", "1121": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u039a\u0399\u039d\u0394\u03a5\u039d\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5", "1122": "\u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u0391 \u0393\u0399\u0391 \u0391\u039d\u0399\u0398\u0391\u0393\u0395\u039d\u0395\u0399\u03a3", "1123": 
"\u039d\u039f\u039c\u0391\u03a1\u03a7\u0399\u0391\u039a\u0397 \u0391\u03a5\u03a4\u039f\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397", "1124": "\u03a3\u03a7\u039f\u039b\u0397 \u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u03a9\u039d", "1125": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d \u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3 \u039a\u0391\u0399 \u0395\u039c\u03a0\u039f\u03a1\u0399\u0391\u03a3 \u039f\u03a0\u03a9\u03a1\u039f\u039a\u0397\u03a0\u0395\u03a5\u03a4\u0399\u039a\u03a9\u039d", "1126": "\u0391\u03a0\u039f\u039b\u03a5\u039c\u0391\u039d\u03a3\u0397 \u03a5\u0394\u0391\u03a4\u03a9\u039d", "1127": "\u03a0\u039f\u039b\u0395\u039f\u0394\u039f\u039c\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0395\u03a3", "1128": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u039a\u0394\u039f\u03a3\u0395\u03a9\u03a3 \u03a3\u03a7\u039f\u039b\u0399\u039a\u03a9\u039d \u0392\u0399\u0392\u039b\u0399\u03a9\u039d", "1129": "\u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u039f\u0399 \u039d\u039f\u039c. 
\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u03a9\u039d \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "1130": "\u0391\u039d\u03a4\u0399\u03a3\u03a4\u0391\u0398\u039c\u0399\u03a3\u03a4\u0399\u039a\u0397 \u0395\u0399\u03a3\u03a6\u039f\u03a1\u0391", "1131": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0397\u03a1\u0399\u03a9\u039d", "1132": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u0393\u0399\u0391 \u03a4\u0391 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u0391", "1133": "\u0395\u039e\u03a9\u03a3\u03a7\u039f\u039b\u0399\u039a\u0397 \u0391\u0393\u03a9\u0393\u0397", "1134": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0397 \u0391\u03a1\u039c\u039f\u0394\u0399\u039f\u03a4\u0397\u03a4\u0391", "1135": "\u0395\u039b\u0399\u0395\u03a3 \u039a\u0391\u0399 \u0395\u039b\u0391\u0399\u0391", "1136": "\u0393\u0391\u039c\u039f\u0399 \u0399\u03a3\u03a1\u0391\u0397\u039b\u0399\u03a4\u03a9\u039d", "1137": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a1\u03a4\u039f\u03a5", "1138": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u03a9\u039d", "1139": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u039a\u0391\u03a4\u0391 \u0394\u0391\u0393\u039a\u0395\u0399\u039f\u03a5", "1140": "\u0395\u0398\u039d\u0399\u039a\u039f\u0399 \u0394\u03a1\u03a5\u039c\u039f\u0399", "1141": "\u0391\u03a0\u0391\u039b\u039b\u0391\u0393\u0395\u03a3 \u03a4\u0395\u039b\u03a9\u039d \u03a7\u0391\u03a1\u03a4\u039f\u03a3\u0397\u039c\u039f\u03a5", "1142": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u039d\u0391\u03a0\u03a4\u03a5\u039e\u0395\u03a9\u03a3", "1143": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 
\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u0395\u03a0\u0399 \u03a6\u039f\u03a1\u03a4\u0397\u0393\u03a9\u039d \u03a0\u039b\u039f\u0399\u03a9\u039d", "1144": "\u039b\u03a5\u03a3\u03a3\u0391", "1145": "\u0391\u0393\u03a1\u039f\u039a\u03a4\u0397\u039c\u0391", "1146": "\u039a\u0391\u0398\u0397\u0393\u0397\u03a4\u0395\u03a3 \u039a\u0391\u0399 \u03a5\u03a6\u0397\u0393\u0397\u03a4\u0395\u03a3", "1147": "\u03a0\u0391\u0399\u0394\u0399\u039a\u039f\u0399 - \u0392\u03a1\u0395\u03a6\u039f\u039d\u0397\u03a0\u0399\u0391\u039a\u039f\u0399 \u03a3\u03a4\u0391\u0398\u039c\u039f\u0399", "1148": "\u039a\u0395\u039d\u03a4\u03a1\u039f \u0392\u03a5\u0396\u0391\u039d\u03a4\u0399\u039d\u03a9\u039d \u0395\u03a1\u0395\u03a5\u039d\u03a9\u039d", "1149": "\u0399\u0394\u03a1\u03a5\u03a3\u0397 \u0395\u039b\u0395\u03a5\u0398\u0395\u03a1\u0397\u03a3 \u0396\u03a9\u039d\u0397\u03a3 \u03a3\u0395 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u039b\u0399\u039c\u0391\u039d\u0399\u0391 \u03a4\u0397\u03a3 \u03a7\u03a9\u03a1\u0391\u03a3", "1150": "\u03a3\u03a7\u039f\u039b\u0399\u039a\u0391 \u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u0391", "1151": "\u03a3\u03a6\u0391\u0393\u0395\u0399\u0391", "1152": "\u0395\u03a0\u0399\u039a\u03a5\u03a1\u03a9\u03a3\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d", "1153": "\u0395\u0393\u0393\u03a1\u0391\u03a6\u0391 \u03a4\u0391\u03a5\u03a4\u039f\u03a4\u0397\u03a4\u0391\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u03a9\u039d", "1154": "\u0391\u03a4\u039f\u039c\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u0399\u03a9\u039c\u0391\u03a4\u0391 - \u0394\u0395\u0394\u039f\u039c\u0395\u039d\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u03a7\u0391\u03a1\u0391\u039a\u03a4\u0397\u03a1\u0391", "1155": "\u0399\u0391\u03a4\u03a1\u039f\u03a6\u0391\u03a1\u039c\u0391\u039a\u0395\u03a5\u03a4\u0399\u039a\u0397 - \u039d\u039f\u03a3\u039f\u039a\u039f\u039c\u0395\u0399\u0391\u039a\u0397 
\u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 - \u0395\u039e\u039f\u0394\u0391 \u039a\u0397\u0394\u0395\u0399\u0391\u03a3", "1156": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0395\u03a9\u03a3 \u0391\u039d\u03a4\u0391\u039b\u039b\u0391\u039e\u0399\u039c\u03a9\u039d \u039a\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d", "1157": "\u03a3\u03a4\u039f\u039b\u0395\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u039b.\u03a3", "1158": "\u03a0\u0395\u03a1\u0399\u03a6\u03a1\u0391\u039e\u0397 \u039f\u0399\u039a\u039f\u03a0\u0395\u0394\u03a9\u039d", "1159": "\u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u039f\u0399 \u0391\u03a4\u03a4\u0399\u039a\u0397\u03a3", "1160": "\u03a4\u03a1\u0391\u03a7\u03a9\u039c\u0391\u03a4\u0391", "1161": "\u039d\u0391\u03a5\u0391\u0393\u0399\u0391-\u039d\u0391\u03a5\u0391\u0393\u0399\u0391\u0399\u03a1\u0395\u03a3\u0397", "1162": "\u03a5\u03a0\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u039f\u0399", "1163": "\u03a4\u0391\u0399\u039d\u0399\u039f\u0398\u0397\u039a\u0397 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "1164": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a4\u0397\u039b\u0395\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u03a3", "1165": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0398\u03a5\u039c\u0391\u03a4\u03a9\u039d \u03a4\u03a1\u039f\u039c\u039f\u039a\u03a1\u0391\u03a4\u0399\u0391\u03a3", "1166": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u03a5\u03a1\u0399\u039c\u0391\u03a7\u039f\u03a5 \u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391\u03a3 \u0395\u03a0\u0399\u0392\u0391\u03a4\u0397\u0393\u03a9\u039d \u03a0\u039b\u039f\u0399\u03a9\u039d", "1167": "\u0391\u03a4\u039f\u039c\u0399\u039a\u0391 \u0392\u0399\u0392\u039b\u0399\u0391\u03a1\u0399\u0391", "1168": 
"\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0391 \u0392\u0399\u0392\u039b\u0399\u0391\u03a1\u0399\u0391 \u0391\u03a1\u03a4\u0395\u03a1\u0393\u0391\u03a4\u03a9\u039d \u039a\u039b\u03a0", "1169": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0391\u039c\u03a5\u039b\u039f\u03a3\u0399\u03a1\u039f\u03a0\u0399\u039f\u03a5, \u03a3\u03a4\u0391\u03a6\u0399\u0394\u0399\u039d\u0397\u03a3 \u039a\u039b\u03a0", "1170": "\u039c\u039f\u03a5\u03a3\u0395\u0399\u039f \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u03a9\u039d \u039b\u0391\u0399\u039a\u03a9\u039d \u039f\u03a1\u0393\u0391\u039d\u03a9\u039d", "1171": "\u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u039a\u0391\u0399 \u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u039b\u039b\u0397\u039d. \u0397\u039b\u0395\u039a\u03a4\u03a1. 
\u0395\u03a4\u0391\u0399\u03a1\u0399\u0391\u03a3 (\u0395.\u0397.\u0395.)", "1172": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u039c\u039f\u039d\u0399\u039c\u03a9\u039d \u039f\u0394\u039f\u03a3\u03a4\u03a1\u03a9\u039c\u0391\u03a4\u03a9\u039d", "1173": "\u039f\u03a1\u0393\u0391\u039d\u0399\u039a\u0395\u03a3 \u0398\u0395\u03a3\u0395\u0399\u03a3 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0.\u039d", "1174": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391\u03a3 \u0391\u0398\u0397\u039d\u03a9\u039d", "1175": "\u03a0\u039f\u039b\u0399\u039f\u039c\u03a5\u0395\u039b\u0399\u03a4\u0399\u0394\u0391", "1176": "\u03a0\u03a1\u039f\u0391\u0393\u03a9\u0393\u0391\u0399 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a7\u03a9\u03a1\u039f\u03a6\u03a5\u039b\u0391\u039a\u0397\u03a3", "1177": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391 \u0391\u0394\u0395\u0399\u0391\u03a3", "1178": "\u0395\u039e\u0395\u03a4\u0391\u03a3\u0395\u0399\u03a3 \u0393\u0399\u0391 \u03a4\u0397\u039d \u03a0\u03a1\u039f\u03a3\u039b\u0397\u03a8\u0397 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5", "1179": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0395\u039e\u0391\u0393\u03a9\u0393\u0399\u039a\u039f\u03a5 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f\u03a5", "1180": "\u03a1\u0391\u0394\u0399\u039f\u03a6\u03a9\u039d\u0399\u039a\u039f\u0399 \u03a3\u03a4\u0391\u0398\u039c\u039f\u0399", "1181": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u0397\u03a3 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0395\u03a9\u03a3 \u03a4.\u03a3.\u0391.\u03a5", "1182": "\u03a6.\u039a.\u03a0. 
\u0391\u039d\u03a9\u039d\u03a5\u039c\u03a9\u039d \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u03a9\u039d", "1183": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u03a0\u039f\u039b\u03a5\u0395\u0398\u039d\u0395\u0399\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u0399", "1184": "\u03a7\u039f\u039b\u0395\u03a1\u0391", "1185": "E\u039d\u0399\u0391\u0399\u039f\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3", "1186": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d", "1187": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u039c\u0397\u03a7\u0391\u039d\u039f\u0394\u0397\u0393\u03a9\u039d \u039f\u0394\u039f\u03a3\u03a4\u03a1\u03a9\u03a4\u0397\u03a1\u03a9\u039d \u039a\u039b\u03a0", "1188": "\u039d\u039f\u03a3\u039f\u039a\u039f\u039c\u039f\u0399", "1189": "\u039d\u039f\u03a3\u039f\u039a\u039f\u039c\u0395\u0399\u0391 \u03a6\u03a5\u039b\u0391\u039a\u03a9\u039d", "1190": "\u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u039a\u03a4\u0397\u039d\u039f\u03a4\u03a1\u039f\u03a6\u03a9\u039d", "1191": "\u03a4\u0395\u039b\u0397 \u039a\u0391\u0399 \u0395\u0399\u03a3\u03a6\u039f\u03a1\u0395\u03a3", "1192": "\u0391\u039a\u0391\u03a4\u0391\u03a3\u03a7\u0395\u03a4\u0391", "1193": "\u039e\u0395\u039d\u039f\u0394\u039f\u03a7\u0395\u0399\u0391\u039a\u039f \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u039f \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u0391\u03a3", "1194": "\u0394\u0397\u039c\u039f\u03a4\u039f\u039b\u039f\u0393\u0399\u0391", "1195": "\u03a3\u03a4\u0391\u03a4\u0399\u03a3\u03a4\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "1196": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u039f \u0395\u03a1\u0393\u0391\u03a3\u03a4\u0397\u03a1\u0399\u039f 
\u0395\u039b\u0395\u0393\u03a7\u039f\u03a5 \u03a6\u0391\u03a1\u039c\u0391\u039a\u03a9\u039d", "1197": "\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0397 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391", "1198": "\u0395\u039a\u03a4\u0391\u039a\u03a4\u0395\u03a3 \u0395\u0399\u03a3\u03a6\u039f\u03a1\u0395\u03a3", "1199": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u03a4.\u03a4.\u03a4", "1200": "\u039c\u0395\u03a4\u03a1\u0391 \u039a\u0391\u03a4\u0391 \u03a4\u0397\u03a3 \u03a6\u039f\u03a1\u039f\u0394\u0399\u0391\u03a6\u03a5\u0393\u0397\u03a3", "1201": "\u0395\u0394\u0391\u03a6\u0399\u039a\u0397 \u0395\u03a0\u0395\u039a\u03a4\u0391\u03a3\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391\u03a3", "1202": "\u039c\u0399\u039a\u03a1\u039f\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3", "1203": "\u03a4\u0391\u03a4\u0396\u0399\u039a\u0399\u03a3\u03a4\u0391\u039d \u2013 \u03a4\u0391\u03a5\u039b\u0391\u039d\u0394\u0397 \u2013 \u03a4\u039f\u03a5\u03a1\u039a\u0399\u0391 \u039a.\u039b\u03a0", "1204": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u0394\u0399\u0395\u0398\u039d\u039f\u03a5\u03a3 \u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0391\u03a3 \u0395\u039c\u03a0\u039f\u03a1\u0395\u03a5\u039c\u0391\u03a4\u03a9\u039d \u039f\u0394\u0399\u039a\u03a9\u03a3", "1205": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "1206": "\u039a\u0395\u039d\u03a4\u03a1\u0391 \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3-\u039f.\u0393.\u0395.\u0395.\u039a.\u0391", "1207": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 
\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u03a9\u039d \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "1208": "\u0393\u03a1\u0391\u03a6\u0395\u0399\u039f \u0394\u0399\u0391\u03a1\u039a\u0397 \u039a\u03a9\u0394\u0399\u039a\u0391 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391\u03a3", "1209": "\u0395\u03a1\u0395\u03a5\u039d\u0391 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u0399\u03a9\u039d", "1210": "\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "1211": "\u03a0\u0395\u03a1\u0399 \u039d\u039f\u039c\u0391\u03a1\u03a7\u03a9\u039d", "1212": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0398\u03a5\u039c\u0391\u03a4\u03a9\u039d \u0391\u03a0\u039f \u0395\u03a3\u03a9\u03a4\u0395\u03a1\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u039c\u0391\u03a7\u0395\u03a3", "1213": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u0395\u03a6\u039f\u0394\u0399\u03a9\u039d \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u03a5", "1214": "\u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f\u03a5 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "1215": "\u03a6\u039f\u03a1\u03a4\u0397\u0393\u0391 \u03a0\u039b\u039f\u0399\u0391 \u0391\u039d\u03a9 \u03a4\u03a9\u039d 4.500 \u03a4\u039f\u039d\u039d\u03a9\u039d", "1216": "\u03a1\u0391\u0394\u0399\u039f\u03a4\u0397\u039b\u0395\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a0\u039b\u039f\u0399\u03a9\u039d", "1217": "\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3", 
"1218": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0395\u03a3", "1219": "\u03a3\u03a5\u039d\u03a4\u0397\u03a1\u0397\u03a3\u0397 \u0391\u0395\u03a1\u039f\u03a3\u039a\u0391\u03a6\u03a9\u039d", "1220": "\u039f\u039b\u03a5\u039c\u03a0\u0399\u0391\u039a\u0397 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391", "1221": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a7\u03a9\u03a1\u039f\u03a6\u03a5\u039b\u0391\u039a\u0397\u03a3", "1222": "\u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 \u03a6\u03a5\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "1223": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a7\u03a1\u0397\u039c\u0391\u03a4\u039f\u0394\u039f\u03a4\u0397\u03a3\u0397\u03a3 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397\u03a3 \u0391\u039d\u0391\u03a0\u03a4\u03a5\u039e\u0397\u03a3", "1224": "\u03a0\u03a1\u03a9\u03a4\u0395\u03a3 \u03a5\u039b\u0395\u03a3 \u039e\u03a5\u039b\u0399\u039d\u03a9\u039d \u0392\u0391\u03a1\u0395\u039b\u0399\u03a9\u039d", "1225": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a4\u0395\u03a7\u039d\u0399\u039a\u03a9\u039d \u03a4\u03a5\u03a0\u039f\u03a5 \u0391\u0398\u0397\u039d\u03a9\u039d (\u03a4.\u0391.\u03a4.\u03a4.\u0391.)", "1226": "\u03a0\u03a1\u039f\u03a0\u0391\u03a1\u0391\u03a3\u039a\u0395\u03a5\u0391\u03a3\u03a4\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u039a\u0391\u039b\u03a9\u039d \u03a4\u0395\u03a7\u039d\u03a9\u039d \u03a4\u0397\u039d\u039f\u03a5", "1227": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u0391\u039d\u03a4\u0399\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0395\u0399\u0395\u03a3 \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u03a5", "1228": 
"\u039a\u0391\u039b\u039b\u0399\u03a4\u0395\u03a7\u039d\u0399\u039a\u039f\u0399 \u03a3\u03a4\u0391\u0398\u039c\u039f\u0399", "1229": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u0393\u0399\u0391 \u03a4\u0397 \u0392\u0399\u0391 \u03a4\u03a9\u039d", "1230": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0391\u039c\u03a0\u0395\u039b\u039f\u03a5\u03a1\u0393\u0399\u039a\u0397\u03a3 \u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3", "1231": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0391\u0394\u0399\u039a\u0397\u039c\u0391\u03a4\u0391", "1232": "\u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391 \u039a\u0391\u0399 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 \u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u03a9\u039d", "1233": "\u039c\u0395\u03a4\u039f\u03a7\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f \u0392\u0391\u03a3\u0399\u039b\u0399\u039a\u0397\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "1234": "\u03a5\u03a0\u039f\u0398\u0397\u039a\u0397 \u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u03a9\u039d \u0395\u0393\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0395\u03a9\u039d", "1235": "\u0395\u03a5\u0398\u03a5\u039d\u0397 \u0391\u03a0\u039f \u03a4\u2019\u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u0391", "1236": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u039c\u0397\u03a4\u03a1\u039f\u03a4\u0397\u03a4\u039f\u03a3 \u039a\u0391\u0399 \u0392\u03a1\u0395\u03a6\u03a9\u039d", "1237": "\u039c\u0395 \u03a4\u0397 \u03a6\u0399\u039b\u0391\u039d\u0394\u0399\u0391", "1238": "\u0395\u03a0\u0391\u03a1\u03a7\u0399\u0391\u039a\u039f\u03a3 \u03a4\u03a5\u03a0\u039f\u03a3", "1239": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u03a9\u039d", "1240": "\u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0395\u0399\u0395\u03a3 \u03a4\u039f\u03a0\u03a9\u039d\u03a5\u039c\u0399\u03a9\u039d", "1241": 
"\u039c\u0395\u03a4\u0391\u039d\u0391\u03a3\u03a4\u0395\u03a5\u03a3\u0397 \u039a\u0391\u0399 \u0391\u03a0\u039f\u0394\u0397\u039c\u0399\u0391", "1242": "\u0394\u0399\u039a\u0397\u0393\u039f\u03a1\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039b\u039b\u039f\u0393\u039f\u0399", "1243": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0393\u0395\u03a9\u03a1\u0393\u0399\u0391\u03a3", "1244": "\u03a4\u039c\u0397\u039c\u0391 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d \u03a0\u0391\u039d\u039c\u0399\u039f\u03a5 \u03a0\u0391\u03a4\u03a1\u03a9\u039d", "1245": "\u039c\u0391\u039b\u0391\u039a\u03a4\u0395\u03a3", "1246": "\u0395\u039b\u0391\u0399\u0391", "1247": "\u0391\u03a4\u039f\u039c\u0399\u039a\u0391 \u0395\u0393\u0393\u03a1\u0391\u03a6\u0391 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "1248": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "1249": "\u039f\u03a0\u03a4\u0399\u039a\u039f\u0399 - \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u0391 \u039f\u03a0\u03a4\u0399\u039a\u03a9\u039d \u0395\u0399\u0394\u03a9\u039d", "1250": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0395\u03a3 \u0395\u03a0\u0395\u039d\u0394\u03a5\u03a3\u0395\u0399\u03a3", "1251": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0397 \u039f\u03a1\u03a7\u0397\u03a3\u03a4\u03a1\u0391 \u0398\u0395\u03a3\u03a3\u0391\u039b\u039f\u039d\u0399\u039a\u0397\u03a3", "1252": "\u039d\u0397\u039f\u039b\u039f\u0393\u0399\u0391-\u03a5\u03a0\u039f\u0398\u0397\u039a\u039f\u039b\u039f\u0393\u0399\u0391-\u03a3\u0397\u039c\u0391\u03a4\u039f\u039b\u039f\u0393\u0397\u03a3\u0397", "1253": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 
\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0391\u03a3 \u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u0399\u0394\u03a9\u039d \u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u039f\u03a5 (\u03a4.\u0391.\u03a0.-\u0395.\u0394.\u0395.\u039c.\u0395.)", "1254": "\u0395\u0399\u03a3\u03a0\u03a1\u0391\u039e\u0397 \u0391\u039e\u0399\u03a9\u039d", "1255": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u03a3 \u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u03a4\u03a1\u039f\u03a6\u0399\u039c\u03a9\u039d-\u03a0\u039f\u03a4\u03a9\u039d-\u039d\u0395\u03a1\u03a9\u039d", "1256": "\u039b\u039f\u0393\u0399\u03a3\u03a4\u0395\u03a3 - \u03a6\u039f\u03a1\u039f\u03a4\u0395\u03a7\u039d\u0399\u039a\u039f\u0399", "1257": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u0393\u0399\u0391 \u03a4\u039f \u0394\u0397\u039c\u039f\u03a3\u0399\u039f", "1258": "\u03a3\u03a7\u039f\u039b\u0395\u03a3 \u03a3\u03a9\u039c\u0391\u03a4\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "1259": "\u03a4\u0391\u039c\u0395\u0399\u039f\u039d \u039a\u039f\u0399\u039d\u03a9\u03a6\u0395\u039b\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d \u039b\u0395\u03a5\u039a\u0391\u0394\u039f\u03a3", "1260": "\u0395\u0399\u0394\u0399\u039a\u0397 \u0391\u0393\u03a9\u0393\u0397, \u0395\u0399\u0394\u0399\u039a\u0397 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0397", "1261": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u039a\u03a1\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u03a9\u039d", "1262": "\u039f\u0399\u039d\u039f\u039b\u039f\u0393\u0399\u039a\u0391 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u0391", "1263": "\u03a3\u03a5\u039d\u0398\u0397\u039a\u0395\u03a3 \u0395\u039a\u0394\u039f\u03a3\u0395\u03a9\u03a3", "1264": 
"\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u039a\u0391\u0399 \u03a5\u03a0\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u039b.\u03a3", "1265": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0395\u039e\u0395\u03a4\u0391\u03a3\u0397 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5", "1266": "\u039e\u0395\u039d\u0391 \u03a3\u03a7\u039f\u039b\u0395\u0399\u0391 \u0397\u039c\u0395\u0394\u0391\u03a0\u0397\u03a3", "1267": "\u0395.\u03a3.\u03a5.-\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "1268": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u0395\u03a6\u0391\u03a1\u039c\u039f\u0393\u0397\u03a3 \u03a3\u03a7\u0395\u0394\u0399\u03a9\u039d \u03a0\u039f\u039b\u0395\u03a9\u039d", "1269": "\u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0395\u0399\u0394\u03a9\u039d", "1270": "\u03a3\u03a5\u039d\u0398\u0397\u039a\u0397 \u03a0\u0395\u03a1\u0399 \u0394\u0399\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u039f\u03a3", "1271": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u0391\u039d\u03a4\u0391\u039b\u039b\u0391\u039e\u0399\u039c\u03a9\u039d \u039a\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d", "1272": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u039d \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0395\u03a9\u03a3", "1273": "\u03a3\u03a7\u039f\u039b\u0397 \u0395\u039a\u03a0\u03a4\u0399\u039a\u03a9\u039d \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u03a9\u039d", "1274": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039e\u0395\u039d\u039f\u0394\u039f\u03a7\u039f\u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d (\u03a4.\u0391.\u039e.\u03a5.)", "1275": "\u03a3\u03a9\u039c\u0391\u03a4\u0399\u039a\u0397 \u0399\u039a\u0391\u039d\u039f\u03a4\u0397\u03a4\u0391 
\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "1276": "\u0392\u0395\u0392\u0391\u0399\u03a9\u03a3\u0397 \u0395\u03a3\u039f\u0394\u03a9\u039d \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u0391\u03a0\u039f \u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u0399\u0391 \u039a\u0391\u0399 \u039b\u0391\u03a4\u039f\u039c\u0395\u0399\u0391", "1277": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u0395\u03a0\u039f\u0399\u039a\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "1278": "\u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039a\u03a1\u0395\u039f\u03a0\u03a9\u039b\u03a9\u039d \u039a\u0391\u0399 \u0395\u03a1\u0393\u0391\u03a4\u039f\u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u039a\u03a1\u0395\u0391\u03a4\u039f\u03a3 (\u0395.\u03a4.\u0391.\u039a.\u0395.\u039a)", "1279": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u039f \u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f \u0391\u0398\u0397\u039d\u03a9\u039d", "1280": "\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0391\u03a0\u039f\u0398\u0397\u039a\u0395\u03a3", "1281": "\u03a4\u0391\u039c\u0395\u0399\u0391\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "1282": "\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u03a0\u0395\u03a1\u0399 \u0391\u039d\u03a9\u039d\u03a5\u039c\u03a9\u039d \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u03a9\u039d", "1283": "\u03a4\u039f\u039c\u0395\u0391\u03a3 \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d (\u0399\u039a\u0391-\u03a4\u0395\u0391\u039c)\u0395\u0399\u0394\u0399\u039a\u039f\u03a3 \u03a4\u039f\u039c\u0395\u0391\u03a3 \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 
\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d (\u0399\u039a\u0391-\u0395\u03a4\u0395\u0391\u039c)", "1284": "\u0392\u0391\u03a1\u0392\u0391\u039a\u0395\u0399\u039f \u039b\u03a5\u039a\u0395\u0399\u039f", "1285": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u0394\u0399\u039a\u03a9\u039d \u03a4\u039f\u03a5 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5", "1286": "\u0394\u0399\u0395\u0398\u039d\u0395\u03a3 \u03a4\u0391\u039c\u0395\u0399\u039f\u039d \u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0395\u03a9\u03a3 \u03a4\u039f\u03a5 \u03a0\u0391\u0399\u0394\u0399\u039f\u03a5", "1287": "\u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u039f\u0399 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u039f\u03a5 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u03a3", "1288": "\u0391\u03a1\u0394\u0395\u03a5\u03a3\u0395\u0399\u03a3", "1289": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a1\u03a7\u0391\u0399\u039f\u039b\u039f\u0393\u0399\u039a\u03a9\u039d \u03a0\u039f\u03a1\u03a9\u039d \u039a\u0391\u0399 \u0391\u03a0\u0391\u039b\u039b\u039f\u03a4\u03a1\u0399\u03a9\u03a3\u0395\u03a9\u039d", "1290": "\u0399\u0394\u03a1\u03a5\u039c\u0391 \u0392\u03a5\u0396\u0391\u039d\u03a4\u0399\u039d\u0397\u03a3 \u039c\u039f\u03a5\u03a3\u0399\u039a\u039f\u039b\u039f\u0393\u0399\u0391\u03a3", "1291": "\u039a\u03a5\u0392\u0395\u03a1\u039d\u0397\u03a4\u0399\u039a\u039f \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u0395\u039b\u0395\u0393\u03a7\u039f\u03a5 \u03a4\u0399\u039c\u03a9\u039d", "1292": "\u0395\u0399\u0394\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u039f\u0399\u039a\u0399\u03a3\u039c\u039f\u03a5", "1293": "\u039a\u03a4\u0397\u039c\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u0391 \u0394\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "1294": "\u039a\u0391\u03a4\u0391\u03a3\u039a\u0395\u03a5\u0397 
\u03a3\u03a4\u0391\u03a6\u0399\u0394\u0399\u039d\u0397\u03a3", "1295": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u03a3 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3", "1296": "\u0395\u03a0\u0395\u03a4\u0397\u03a1\u0399\u0394\u0391", "1297": "\u03a0\u0391\u0393\u039a\u039f\u03a3\u039c\u0399\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a4\u039f\u03a5\u03a1\u0399\u03a3\u039c\u039f\u03a5", "1298": "\u0395\u039d\u0399\u03a3\u03a7\u03a5\u03a3\u0397 \u0391\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a4\u0395\u03a5\u03a4\u03a9\u039d \u03a0\u0391\u0399\u0394\u0399\u03a9\u039d", "1299": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u0395\u03a0\u0399\u03a3\u0399\u03a4\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "1300": "\u0394\u0399\u03a0\u039b\u03a9\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3", "1301": "\u039c\u0395\u03a4\u0391 \u03a4\u039f\u03a5 \u0392\u0395\u039b\u0393\u0399\u039f\u03a5", "1302": "\u039a\u0391\u039d\u039d\u0391\u0392\u0399\u03a3", "1303": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397", "1304": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u0395\u0393\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0395\u0399\u03a3 \u03a1\u039f\u0394\u039f\u03a5", "1305": "\u03a0\u039f\u0399\u039d\u0399\u039a\u039f \u039c\u0397\u03a4\u03a1\u03a9\u039f", "1306": "\u0391\u039d\u03a9\u039c\u0391\u039b\u0395\u03a3 \u0394\u0399\u039a\u0391\u0399\u039f\u03a0\u03a1\u0391\u039e\u0399\u0395\u03a3 \u0394\u03a9\u0394\u0395\u039a\u0391\u039d\u0397\u03a3\u039f\u03a5", "1307": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0391 \u039a\u0391\u0399 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u0391 \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u0391", "1308": "\u03a3\u03a5\u039d\u03a4\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 
\u03a0\u03a1\u039f\u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u03a9\u039d \u039a\u0391\u0399 \u0395\u03a1\u0393\u0391\u03a3\u0399\u03a9\u039d \u039f\u0394\u03a9\u039d \u039a\u0391\u0399 \u0395\u03a1\u0393\u03a9\u039d \u039a\u039f\u0399\u039d\u0397\u03a3 \u03a9\u03a6\u0395\u039b\u0395\u0399\u0391\u03a3", "1309": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u039e\u0395\u039d\u039f\u0394\u039f\u03a7\u0395\u0399\u03a9\u039d", "1310": "\u0399\u039d\u03a3\u03a4\u0399\u03a4\u039f\u03a5\u03a4\u039f \u03a6\u03a5\u03a3\u0399\u039a\u0397\u03a3 \u03a4\u039f\u03a5 \u03a3\u03a4\u0395\u03a1\u0395\u039f\u03a5 \u03a6\u039b\u039f\u0399\u039f\u03a5 \u03a4\u0397\u03a3 \u0393\u0397\u03a3", "1311": "\u0395\u03a0\u0399\u039a\u0399\u039d\u0394\u03a5\u039d\u0395\u03a3 \u039f\u0399\u039a\u039f\u0394\u039f\u039c\u0395\u03a3", "1312": "\u0391\u03a1\u03a7\u0395\u0399\u0391 \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d", "1313": "\u03a3\u039a\u039f\u03a0\u039f\u0392\u039f\u039b\u0397", "1314": "\u0391\u03a0\u039f\u039d\u039f\u039c\u0397 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a4\u0391\u039c\u0395\u0399\u039f\u03a5 \u039d\u039f\u039c\u0399\u039a\u03a9\u039d", "1315": "\u03a3\u0397\u03a1\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391", "1316": "\u0395\u03a3\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u03a3 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3", "1317": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a4\u0397\u03a3 \u039a\u03a4\u0397\u039d\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391\u03a3", "1318": "\u03a7\u0391\u03a1\u03a4\u0397\u03a3", "1319": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0395\u0393\u039a\u039b\u0397\u039c\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u039a\u03a9\u039d \u0391\u039d\u0391\u0396\u0397\u03a4\u0397\u03a3\u0395\u03a9\u039d", "1320": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 
\u0392\u039f\u03a5\u039b\u0395\u03a5\u03a4\u03a9\u039d", "1321": "\u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a4\u0391\u03a3\u0399\u039f \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5 1940", "1322": "\u03a7\u0397\u039c\u0395\u0399\u039f \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5", "1323": "\u0395\u03a0\u0391\u03a1\u03a7\u0399\u0391\u039a\u0395\u03a3 \u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039d\u0395\u039b\u0395\u03a5\u03a3\u0395\u0399\u03a3", "1324": "\u039b\u039f\u0393\u0391\u03a1\u0399\u0391\u03a3\u039c\u039f\u03a3 \u0391\u03a1\u03a9\u0393\u0397\u03a3 \u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u03a9\u039d \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0395\u039d\u039f\u03a0\u039b\u03a9\u039d \u0394\u03a5\u039d\u0391\u039c\u0395\u03a9\u039d", "1325": "\u039a\u0391\u03a4\u2019 \u0399\u0394\u0399\u0391\u039d \u039d\u0391\u039f\u0399", "1326": "\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0397 \u039c\u0395 \u0395\u03a0\u0399\u03a4\u0391\u0393\u0395\u03a3", "1327": "\u0395\u0398\u039d\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039b\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "1328": "\u03a3\u03a9\u039c\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u0391\u03a3", "1329": "\u039f\u0394\u039f\u039d\u03a4\u0399\u0391\u03a4\u03a1\u039f\u0399", "1330": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u0398\u039d\u0399\u039a\u039f\u03a5 \u03a3\u03a4\u039f\u039b\u039f\u03a5", "1331": "\u03a3\u03a5\u039c\u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u039c\u0397\u03a4\u03a1\u039f\u03a4\u0397\u03a4\u0391\u03a3", "1332": "\u039c\u0395\u03a4\u0391\u03a4\u03a1\u0395\u03a8\u0399\u039c\u039f\u03a4\u0397\u03a4\u0391 \u039a\u0391\u03a4\u0391\u0398\u0395\u03a3\u0395\u03a9\u039d", "1333": "\u03a0\u03a4\u0397\u039d\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391", "1334": 
"\u03a0\u03a4\u03a5\u03a7\u0399\u039f\u03a5\u03a7\u039f\u0399 \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u03a9\u039d \u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u03a9\u039d - \u0394\u0399\u0391\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u0391\u039a\u039f \u039a\u0395\u039d\u03a4\u03a1\u039f \u0391\u039d\u0391\u0393\u039d\u03a9\u03a1\u0399\u03a3\u0395\u03a9\u03a3", "1335": "\u03a6\u039f\u03a1\u03a4\u0397\u0393\u0391 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u0391", "1336": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u0397\u03a3 \u039a\u0391\u039b\u039b\u0399\u0395\u03a1\u0393\u0395\u0399\u0391\u03a3", "1337": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039a\u0399\u039d\u0397\u039c\u0391\u03a4\u039f\u0393\u03a1\u0391\u03a6\u03a9\u039d", "1338": "\u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u0395\u03a3 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0395\u0399\u03a3", "1339": "\u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u039a\u0395\u03a3 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0395\u03a3", "1340": "\u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u0399\u0391 \u03a5\u0394\u03a1\u039f\u0398\u0395\u03a1\u0391\u03a0\u0395\u03a5\u03a4\u0397\u03a1\u0399\u03a9\u039d", "1341": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0397\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u03a3", "1342": "\u0395\u0393\u0393\u0395\u0399\u039f\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0391\u03a0\u039d\u039f\u03a5", "1343": "\u03a4\u0395\u039b\u039f\u03a3 \u0391\u0394\u0395\u0399\u03a9\u039d \u039f\u0399\u039a\u039f\u0394\u039f\u039c\u03a9\u039d", "1344": "\u0395\u0398\u039d\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391 \u03a0\u039b\u039f\u0399\u03a9\u039d", "1345": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0391 \u039a\u039f\u039c\u039c\u0391\u03a4\u0391", "1346": 
"\u03a3\u03a7\u039f\u039b\u0397 \u0398\u0395\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d", "1347": "\u039d\u0397\u039f\u0393\u039d\u03a9\u039c\u039f\u039d\u0395\u03a3", "1348": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u03a0\u039f\u0399\u039d\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "1349": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a1\u0399\u039d\u0397 \u0391\u03a0\u039f\u039b\u03a5\u03a3\u0397", "1350": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u039b\u039b\u0397\u039b\u039f\u0392\u039f\u0397\u0398\u0395\u0399\u0391\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5 \u039e\u0397\u03a1\u0391\u03a3", "1351": "\u03a5\u03a0\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "1352": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u03a7\u03a1\u0397\u039c\u0391\u03a4\u0399\u03a3\u03a4\u0397\u03a1\u0399\u0391\u039a\u03a9\u039d \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u03a9\u039d", "1353": "\u03a0\u03a4\u03a5\u03a7\u0399\u0391 \u0399\u03a0\u03a4\u0391\u039c\u0395\u039d\u039f\u03a5 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5", "1354": "\u039a\u03a1\u0395\u0391\u03a4\u0391 \u03a3\u0395 \u03a0\u0391\u039a\u0395\u03a4\u0391", "1355": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039f\u03a0\u039b\u039f\u03a6\u039f\u03a1\u0399\u0391\u03a3", "1356": "\u0391\u039d\u0391\u03a3\u03a4\u039f\u039b\u0395\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u03a7\u03a1\u0395\u039f\u03a5\u03a3", "1357": "\u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u039a\u039f\u0399 \u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u039f\u0399 \u0391\u0398\u0397\u039d\u03a9\u039d-\u03a0\u0395\u0399\u03a1\u0391\u0399\u03a9\u03a3 (\u0397.\u03a3.\u0391.\u03a0)", "1358": "\u0394\u0399\u0391\u0398\u0395\u03a3\u0397 \u039b\u03a5\u039c\u0391\u03a4\u03a9\u039d \u039a\u0391\u0399 \u0391\u03a0\u039f\u0392\u039b\u0397\u03a4\u03a9\u039d", 
"1359": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a4\u0395\u03a7\u039d\u0399\u039a\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3", "1360": "\u03a4\u0395\u039b\u0397 \u0391\u0394\u0395\u0399\u03a9\u039d \u0395\u039e\u0391\u0393\u03a9\u0393\u0397\u03a3", "1361": "\u03a0\u03a1\u039f\u0399\u039f\u039d\u03a4\u0391 \u0393\u0391\u039b\u0391\u039a\u03a4\u039f\u03a3", "1362": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0391 \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u0391", "1363": "\u0399\u0395\u03a1\u0391\u03a1\u03a7\u0399\u039a\u039f\u03a3 \u0384\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3", "1364": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u03a6\u03a5\u039b\u0391\u039a\u0395\u03a3", "1365": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a. \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u039a\u0391\u03a0\u039d\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d", "1366": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u039a\u0391\u0399 \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0399\u03a0\u03a0\u039f\u0394\u03a1\u039f\u039c\u0399\u03a9\u039d (\u03a4.\u0391.\u03a0.\u0395.\u0391.\u03a0.\u0399.)", "1367": "\u0391\u03a0\u039f\u03a7\u03a9\u03a1\u0397\u03a4\u0397\u03a1\u0399\u0391", "1368": "\u03a6\u039f\u03a1\u039f\u03a3 \u0395\u0399\u03a3\u039f\u0394\u0397\u039c\u0391\u03a4\u039f\u03a3 \u03a6\u03a5\u03a3\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u039d\u039f\u039c\u0399\u039a\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u03a9\u039d", "1369": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a4\u0399\u039a\u0395\u03a3 
\u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3 \u03a0\u0391\u03a1\u039f\u03a7\u03a9\u039d", "1370": "\u0391\u03a4\u03a4\u0399\u039a\u039f \u039c\u0395\u03a4\u03a1\u039f", "1371": "\u0392\u039f\u03a5\u03a3\u03a4\u0391\u03a3\u0399\u0391", "1372": "\u0391\u03a0\u039f\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u0399\u0395\u03a3 - \u0395\u03a0\u0391\u039d\u0391\u03a6\u039f\u03a1\u0395\u03a3", "1373": "\u03a4\u03a1\u0391\u03a0\u0395\u0396\u0399\u03a4\u0399\u039a\u0391 \u0394\u0391\u039d\u0395\u0399\u0391 \u03a3\u0395 \u03a7\u03a1\u03a5\u03a3\u039f \u039a\u039b\u03a0", "1374": "\u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a4\u0391\u03a3\u0399\u039f \u03a0\u039f\u039b\u0395\u039c\u03a9\u039d", "1375": "\u0395\u0398\u039d\u0399\u039a\u039f \u0391\u03a3\u03a4\u0395\u03a1\u039f\u03a3\u039a\u039f\u03a0\u0395\u0399\u039f", "1376": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u0399\u03a3 \u03a0\u0391\u03a1\u039f\u03a7\u0397\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "1377": "\u0394\u0391\u039d\u0395\u0399\u0391 \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u0391", "1378": "\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u0399\u039a\u039f \u039a\u0395\u039d\u03a4\u03a1\u039f \u0391\u0398\u0397\u039d\u03a9\u039d", "1379": "\u0391\u03a0\u039f\u03a3\u0392\u0395\u03a3\u0395\u0399\u03a3", "1380": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u039f\u0399\u039d\u0399\u039a\u039f\u0399 \u039a\u0391\u0399 \u03a3\u03a4\u0391\u03a6\u0399\u0394\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "1381": "\u0391\u039a\u0391\u0394\u0397\u039c\u0399\u0391 \u03a3\u03a9\u039c\u0391\u03a4\u0399\u039a\u0397\u03a3 \u0391\u0393\u03a9\u0393\u0397\u03a3", "1382": "\u0391\u039c\u039c\u039f\u039b\u0397\u03a8\u0399\u0391", "1383": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a0\u039b\u039f\u0397\u0393\u0399\u039a\u0397\u03a3 
\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u03a3", "1384": "\u0397\u0398\u0399\u039a\u0395\u03a3 \u0391\u039c\u039f\u0399\u0392\u0395\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "1385": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391\u03a3 \u039f\u0399\u039d\u039f\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u039f\u03a3", "1386": "\u039b\u0399\u039c\u0395\u039d\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391 \u2013 \u039b\u0399\u039c\u0395\u039d\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "1387": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a. \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u0395\u0398\u039d\u0399\u039a\u039f\u03a5 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a5 \u039a\u0391\u03a0\u039d\u039f\u03a5 (\u03a4.\u0395.\u0391.\u03a5\u0395.\u039f.\u039a)", "1388": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u03a4\u0397\u03a3 \u03a0\u0399\u03a3\u03a4\u0395\u03a9\u03a3", "1389": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a3\u03a9\u039c\u0391\u03a4\u03a9\u039d", "1390": "\u0392\u039f\u0397\u0398\u0397\u03a4\u0399\u039a\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0391 \u03a4\u0397\u03a3 \u0394\u0399\u039a\u0397\u03a3", "1391": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a3\u03a7\u039f\u039b\u0399\u039a\u03a9\u039d \u039a\u03a4\u0399\u03a1\u0399\u03a9\u039d", "1392": "\u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0395\u03a3 \u0394\u03a9\u0394\u0395\u039a\u0391\u039d\u0397\u03a3\u039f\u03a5", "1393": "\u03a5\u0393\u0399\u0395\u0399\u039d\u0397 \u039a\u0391\u0399 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 \u03a7\u03a9\u03a1\u03a9\u039d \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u039a\u0391\u0399 
\u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u03a9\u039d", "1394": "\u039c\u0395\u03a4\u0391\u03a4\u03a1\u039f\u03a0\u0397 \u03a4\u0397\u03a3 \u03a0\u039f\u0399\u039d\u0397\u03a3", "1395": "\u0391\u03a5\u03a4\u039f\u039d\u039f\u039c\u039f\u03a3 \u039f\u0399\u039a\u039f\u0394\u039f\u039c\u0399\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "1396": "\u039f\u0394\u0399\u039a\u0395\u03a3 \u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0395\u03a3-\u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0395\u0399\u03a3", "1397": "\u0391\u03a1\u039c\u0391 \u0398\u0395\u03a3\u03a0\u0399\u0394\u039f\u03a3", "1398": "\u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u0391 & \u039a\u039f\u0399\u039d\u039f\u03a4\u0399\u039a\u0391", "1399": "\u03a0\u0395\u03a1\u0399\u03a6\u0395\u03a1\u0395\u0399\u0391\u039a\u0395\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0395\u03a3", "1400": "\u03a3\u03a7\u039f\u039b\u0397 \u0391\u039d\u0398\u03a1\u03a9\u03a0\u0399\u03a3\u03a4\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d", "1401": "\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u039f\u039c\u0395\u039d\u039f\u0399 \u03a6\u039f\u0399\u03a4\u0397\u03a4\u0391\u0399", "1402": "\u0393\u0395\u039d\u0399\u039a\u0391", "1403": "\u039a\u0391\u03a4\u0391\u03a0\u039f\u039b\u0395\u039c\u0397\u03a3\u0397 \u0395\u03a0\u0399\u0396\u03a9\u039f\u03a4\u0399\u03a9\u039d", "1404": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0395\u03a9\u03a3 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0397\u03a3 \u039a\u0391\u0399 \u039c\u039f\u039d\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391\u039a\u0397\u03a3 \u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391\u03a3", "1405": "\u0391\u03a0\u0391\u0393\u039f\u03a1\u0395\u03a5\u03a3\u0397 
\u03a7\u03a1\u0397\u03a3\u0397\u03a3 \u0395\u03a0\u0399\u0392\u039b\u0391\u0392\u03a9\u039d \u039f\u03a5\u03a3\u0399\u03a9\u039d", "1406": "\u03a8\u03a5\u03a7\u039f\u039b\u039f\u0393\u039f\u0399", "1407": "\u03a0\u03a5\u03a1\u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d \u039a\u0391\u0399 \u0391\u03a0\u039f\u0398\u0397\u039a\u03a9\u039d", "1408": "\u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0399\u03a3 \u0391\u03a0\u039f\u03a1\u03a9\u039d \u039a\u039f\u03a1\u0391\u03a3\u0399\u0394\u03a9\u039d", "1409": "\u039c\u0395 \u03a4\u0397 \u0392\u0395\u039d\u0395\u0396\u039f\u03a5\u0395\u039b\u0391", "1410": "\u0394\u0399\u039a\u0391\u0399\u039f \u03a4\u03a9\u039d \u03a3\u03a5\u039d\u0398\u0397\u039a\u03a9\u039d", "1411": "\u039a\u03a4\u0397\u039d\u0399\u0391\u03a4\u03a1\u0399\u039a\u0391 \u039c\u0399\u039a\u03a1\u039f\u0392\u0399\u039f\u039b\u039f\u0393\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "1412": "\u0395\u03a1\u0393\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "1413": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399 TELEX \u039a\u0391\u0399 TELEFAX", "1414": "\u039f\u03a0\u039b\u0391 \u039a\u0391\u0399 \u03a3\u03a9\u039c\u0391\u03a4\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5 \u039e\u0397\u03a1\u0391\u03a3", "1415": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "1416": "\u03a4\u0399\u039c\u039f\u039b\u039f\u0393\u0399\u0391 \u03a0\u0391\u03a1\u039f\u03a7\u03a9\u039d", "1417": "\u039c\u039f\u03a5\u03a3\u039f\u03a5\u039b\u039c\u0391\u039d\u0399\u039a\u0395\u03a3 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u0395\u03a3", "1418": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391 \u0395\u039d \u0393\u0395\u039d\u0395\u0399", "1419": 
"\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391 \u039d\u039f\u03a3\u039f\u039a\u039f\u039c\u0395\u0399\u0391", "1420": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u039a\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d \u2013", "1421": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u03a4\u0399\u039c\u0395\u03a3 \u039a\u0391\u03a5\u03a3\u0399\u039c\u03a9\u039d \u039a\u0391\u0399 \u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u039a\u0397\u03a3 \u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391\u03a3", "1422": "\u0395\u0393\u0393\u03a1\u0391\u03a6\u0397 \u03a3\u03a0\u039f\u03a5\u0394\u0391\u03a3\u03a4\u03a9\u039d", "1423": "\u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u0391-\u039a\u039f\u0399\u039d\u039f\u03a4\u0399\u039a\u0391 \u0394\u0391\u03a3\u0397 \u039a\u0391\u0399 \u039a\u0397\u03a0\u039f\u0399", "1424": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0391 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0397 \u03a0\u039f\u039b\u0395\u039f\u0394\u039f\u039c\u0399\u0391\u03a3 \u039a\u0391\u0399 \u03a3\u03a4\u0395\u0393\u0391\u03a3\u0395\u03a9\u03a3", "1425": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0399\u039f\u0394\u039f\u03a4\u0397\u03a3\u0397 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0399.\u039a.\u0391", "1426": "\u0395\u039e\u0395\u03a4\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0395\u03a3 \u0392\u039f\u03a5\u039b\u0397\u03a3", "1427": "\u039c\u0395\u03a4\u03a1\u0391 \u039a\u0391\u03a4\u0391 \u03a4\u03a9\u039d \u03a0\u03a5\u03a1\u039a\u0391\u0399\u03a9\u039d \u0394\u0391\u03a3\u03a9\u039d", "1428": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u0398\u039d\u0399\u039a\u0397\u03a3 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391\u03a3", "1429": "\u03a3\u03a5\u0393\u039a\u0395\u039d\u03a4\u03a1\u03a9\u03a3\u0397 \u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391\u03a3 \u03a4\u039f\u03a5 
\u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5", "1430": "\u039a\u0391\u03a4\u0391\u03a3\u039a\u0395\u03a5\u0397 \u039a\u0391\u0399 \u03a3\u03a5\u039d\u03a4\u0397\u03a1\u0397\u03a3\u0397 \u039f\u0394\u03a9\u039d", "1431": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0391 \u039a\u03a4\u0399\u03a1\u0399\u0391", "1432": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u0395\u039a\u03a4\u0395\u039b\u03a9\u039d\u0399\u03a3\u03a4\u03a9\u039d (\u03a4.\u03a3.\u0395.)", "1433": "\u039a\u0391\u0398\u0397\u0393\u0397\u03a4\u0399\u039a\u0395\u03a3 \u0395\u0394\u03a1\u0395\u03a3", "1434": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u0397 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391 \u039d\u0395\u03a9\u039d", "1435": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u0398\u0391\u039d\u0391\u03a4\u0399\u039a\u0397\u03a3 \u03a0\u039f\u0399\u039d\u0397\u03a3", "1436": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a0\u039b\u039f\u0399\u03a9\u039d", "1437": "\u0394\u0399\u03a0\u039b\u03a9\u039c\u0391\u03a4\u0391 \u039a\u0391\u0399 \u0391\u0394\u0395\u0399\u0395\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u0397\u03a3 \u0399\u039a\u0391\u039d\u039f\u03a4\u0397\u03a4\u0391\u03a3", "1438": "\u0399\u03a3\u03a4\u039f\u03a1\u0399\u039a\u039f \u039a\u0391\u0399 \u0395\u0398\u039d\u039f\u039b\u039f\u0393\u0399\u039a\u039f \u039c\u039f\u03a5\u03a3\u0395\u0399\u039f", "1439": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u0397\u03a3 \u039d\u0395\u0391\u03a3", "1440": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u03a9\u039d \u0391\u039d\u0397\u039b\u0399\u039a\u03a9\u039d", "1441": "\u0391\u03a3\u03a4\u0399\u039a\u0397 \u0395\u03a5\u0398\u03a5\u039d\u0397 \u0391\u03a0\u039f \u03a0\u03a5\u03a1\u0397\u039d\u0399\u039a\u0397 \u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391", "1442": 
"\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391\u03a3 \u039a\u0391\u0398\u0391\u03a1\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u039f\u0394\u039f\u03a5", "1443": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a5.\u0395.\u039d", "1444": "\u039a\u0391\u03a4\u0391\u0393\u0393\u0395\u039b\u0399\u0391 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 \u03a3\u03a5\u039d\u0394\u0399\u039a\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u03a9\u039d \u03a3\u03a4\u0395\u039b\u0395\u03a7\u03a9\u039d", "1445": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "1446": "\u0394\u0399\u0394\u0391\u03a3\u039a\u0391\u039b\u0395\u0399\u039f \u039c\u0395\u03a3\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397\u03a3", "1447": "\u03a5\u03a0\u039f\u0392\u03a1\u03a5\u03a7\u0399\u0391", "1448": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u0391\u03a0\u03a9\u039b\u0395\u0399\u03a9\u039d, \u039d\u0395\u039a\u03a1\u039f\u03a4\u0391\u03a6\u0395\u0399\u03a9\u039d \u039a\u039b\u03a0", "1449": "\u0391\u0393\u03a1\u039f\u03a4. 
\u0391\u03a0\u039f\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 \u03a3\u03a4\u0391 \u0394\u03a9\u0394\u0395\u039a\u0391\u039d\u0397\u03a3\u0391", "1450": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0391\u03a0\u0391\u039b\u039b\u039f\u03a4\u03a1\u0399\u03a9\u03a3\u0395\u0399\u03a3", "1451": "\u03a3\u03a4\u0395\u0393\u0391\u03a3\u0397 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u03a9\u039d \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d", "1452": "\u0394\u0399\u0391\u039c\u0395\u03a4\u0391\u039a\u039f\u039c\u0399\u03a3\u0397 \u039d\u0391\u03a1\u039a\u03a9\u03a4\u0399\u039a\u03a9\u039d", "1453": "\u039c\u0395\u03a4\u0391\u039c\u039f\u03a3\u03a7\u0395\u03a5\u03a3\u0397 \u0392\u0399\u039f\u039b\u039f\u0393\u0399\u039a\u03a9\u039d \u039f\u03a5\u03a3\u0399\u03a9\u039d", "1454": "\u0392\u03a1\u0391\u0392\u0395\u0399\u0391 \u039a\u0391\u0399 \u03a7\u039f\u03a1\u0397\u0393\u0399\u0395\u03a3", "1455": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0397 \u039c\u039f\u03a1\u03a6\u03a9\u03a4\u0399\u039a\u0397 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397", "1456": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u039b\u039b\u0397\u039d. 
\u0395\u03a1\u03a5\u0398\u03a1\u039f\u03a5 \u03a3\u03a4\u0391\u03a5\u03a1\u039f\u03a5 (\u03a4.\u0395.\u0391.\u03a0.\u0395.\u0395.\u03a3.)", "1457": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u0395\u0399\u0394\u03a9\u039d \u0392\u039f\u0397\u0398\u0395\u0399\u0391\u03a3", "1458": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u0395\u03a1\u0393\u03a9\u039d \u039f\u03a7\u03a5\u03a1\u03a9\u03a3\u0397\u03a3", "1459": "\u03a1\u039f\u03a5\u0391\u039d\u03a4\u0391 \u2013 \u03a1\u039f\u03a5\u039c\u0391\u039d\u0399\u0391 \u039a.\u039b\u03a0", "1460": "\u039c\u039f\u039d\u0399\u039c\u0395\u03a3 \u0391\u039d\u03a4\u0399\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0395\u0399\u0395\u03a3", "1461": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0395\u03a6\u0395\u0394\u03a1\u03a9\u039d \u0399\u03a0\u03a4\u0391\u039c\u0395\u039d\u03a9\u039d", "1462": "\u03a4\u03a1\u0391\u03a0\u0395\u0396\u0395\u03a3 \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u03a5 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f\u03a5", "1463": "\u0399\u0391\u03a4\u03a1\u0399\u039a\u039f\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u039d \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u039a\u0391\u0399 \u039d.\u03a0.\u0394.\u0394", "1464": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u039c\u039f\u039d\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "1465": "\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0395\u03a3 \u0395\u03a0\u0395\u039d\u0394\u03a5\u03a3\u0395\u03a9\u039d - \u03a7\u0391\u03a1\u03a4\u039f\u03a6\u03a5\u039b\u0391\u039a\u0399\u039f\u03a5 \u039a\u0391\u0399 \u0391\u039c\u039f\u0399\u0392\u0391\u0399\u03a9\u039d \u039a\u0395\u03a6\u0391\u039b\u0391\u0399\u03a9\u039d", "1466": "\u0391\u039d\u0391\u0393\u039d\u03a9\u03a1\u0399\u03a3\u0397 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397\u03a3 \u03a0\u039f\u039b\u0399\u03a4\u0395\u0399\u0391\u03a3", "1467": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397", "1468": 
"\u039b\u0399\u039c\u0395\u039d\u0391\u03a1\u03a7\u0395\u0399\u0391", "1469": "\u03a3\u0395\u0399\u03a3\u039c\u039f\u03a0\u039b\u0397\u039a\u03a4\u039f\u0399 \u0398\u0395\u03a3\u03a3\u0391\u039b\u0399\u0391\u03a3", "1470": "\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u03a3\u0397 \u0393\u03a5\u039d\u0391\u0399\u039a\u03a9\u039d", "1471": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u039a\u0391\u03a4\u0391\u03a3\u039a\u0395\u03a5\u0397\u03a3 \u0395\u03a1\u0393\u03a9\u039d \u0391\u039d\u0391\u03a3\u03a5\u0393\u039a\u03a1\u039f\u03a4\u0397\u03a3\u0397\u03a3", "1472": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a4\u0397\u03a3 \u03a4\u0399\u039c\u0397\u03a3 \u03a4\u039f\u03a5 \u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u039f\u03a5 \u039a\u039f\u03a3\u039c\u039f\u03a5", "1473": "\u0395\u03a0\u0399\u039c\u039f\u03a1\u03a6\u03a9\u03a3\u0397 \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u03a9\u039d \u039c.\u0395", "1474": "\u0395\u039d\u0399\u03a3\u03a7\u03a5\u03a3\u0397 \u0395\u039e\u0391\u0393\u03a9\u0393\u0397\u03a3", "1475": "\u0397\u039b\u0395\u039a\u03a4\u03a1\u039f\u03a6\u03a9\u03a4\u0399\u03a3\u039c\u039f\u03a3 \u0394\u0399\u0391\u03a6\u039f\u03a1\u03a9\u039d \u03a0\u039f\u039b\u0395\u03a9\u039d", "1476": "\u039c\u0395 \u03a4\u0399\u03a3 \u039a\u0391\u03a4\u03a9 \u03a7\u03a9\u03a1\u0395\u03a3", "1477": "\u039d\u0391\u03a5\u03a0\u0397\u0393\u039f\u03a5\u039c\u0395\u039d\u0391 \u03a0\u039b\u039f\u0399\u0391-\u039d\u0391\u03a5\u03a0\u0397\u0393\u039f\u0395\u03a0\u0399\u03a3\u039a\u0395\u03a5\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3", "1478": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u03a0\u03a9\u039b\u0397\u03a3\u0395\u03a9\u039d \u0395\u03a0\u0399 \u03a0\u0399\u03a3\u03a4\u03a9\u03a3\u0395\u0399", "1479": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u03a9\u039d 
\u0395\u0393\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0395\u03a9\u039d", "1480": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397", "1481": "\u0393\u03a1\u0391\u03a6\u0395\u0399\u0391 \u0395\u03a5\u03a1\u0395\u03a3\u0397\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3 - \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u039f\u0399 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "1482": "\u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u039f \u039d\u0391\u03a1\u039a\u03a9\u03a4\u0399\u039a\u03a9\u039d", "1483": "\u0391\u03a0\u0391\u039b\u039b\u0391\u0393\u0395\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391\u03a3 \u039a\u039b\u0397\u03a1\u039f\u039d\u039f\u039c\u0399\u03a9\u039d", "1484": "\u03a0\u0391\u0393\u039a\u039f\u03a3\u039c\u0399\u0391 \u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 \u03a5\u0393\u0395\u0399\u0391\u03a3", "1485": "\u0395\u0398\u039d\u0399\u039a\u039f \u0399\u0394\u03a1\u03a5\u039c\u0391 \u0395\u03a1\u0395\u03a5\u039d\u03a9\u039d", "1486": "\u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391 \u03a0\u0395\u03a1\u0399 \u03a3\u03a5\u039b\u039b\u039f\u0393\u0399\u039a\u0397\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u03a9\u03a3", "1487": "\u0395\u0398\u039d\u0399\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a6\u0391\u03a1\u039c\u0391\u039a\u03a9\u039d", "1488": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0393\u03a5\u039c\u039d\u0391\u03a3\u0399\u0391 & \u039b\u03a5\u039a\u0395\u0399\u0391", "1489": "\u039e\u0395\u039d\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3 \u0393\u0395\u03a9\u03a0\u039f\u039d\u0399\u0391\u03a3 \u039a\u0391\u0399 \u0394\u0391\u03a3\u039f\u039b\u039f\u0393\u0399\u0391\u03a3", "1490": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0391\u039d\u0395\u03a1\u0393\u03a9\u039d", "1491": 
"\u03a6\u0399\u039b\u0391\u039d\u0398\u03a1\u03a9\u03a0\u0399\u039a\u0391 \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u0391 \u039a\u0395\u03a6\u0391\u039b\u039b\u0397\u039d\u0399\u0391\u03a3", "1492": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u0391\u03a1\u039f\u03a7\u03a9\u039d \u03a4.\u0395.\u0392.\u0395", "1493": "\u03a9\u0394\u0395\u0399\u0391 \u039a\u039b\u03a0. \u039c\u039f\u03a5\u03a3\u0399\u039a\u0391 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u0391", "1494": "\u03a0\u03a1\u039f\u03a3\u039a\u03a5\u039d\u0397\u039c\u0391\u03a4\u0399\u039a\u0391 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u0391", "1495": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0391\u039d\u03a9\u039d. \u03a5\u0394\u03a1\u039f\u0397\u039b\u0395\u039a\u03a4\u03a1. \u0395\u03a4. \u0393\u039b\u0391\u03a5\u039a\u039f\u03a3", "1496": "\u03a0\u03a1\u0395\u03a3\u0392\u0395\u0399\u0395\u03a3 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u039e\u0395\u039d\u0395\u0399\u0391", "1497": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u0391 \u03a4\u03a5\u03a0\u039f\u03a5 \u039a\u0391\u0399 \u03a4\u039f\u03a5\u03a1\u0399\u03a3\u039c\u039f\u03a5", "1498": "\u0396\u03a9\u039d\u0395\u03a3 \u0395\u039d\u0395\u03a1\u0393\u039f\u03a5 \u03a0\u039f\u039b\u0395\u039f\u0394\u039f\u039c\u0399\u0391\u03a3", "1499": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391 \u0399\u039f\u039d\u0399\u03a9\u039d \u039d\u0397\u03a3\u03a9\u039d", "1500": "\u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0391\u0399 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "1501": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u039f\u0399", "1502": "\u03a0\u039f\u0399\u039d\u0399\u039a\u0397 \u0394\u0399\u0391\u03a4\u0399\u039c\u0397\u03a3\u0397", "1503": "\u03a4\u0391\u039c\u0395\u0399\u039f 
\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u03a4\u03a9\u039d \u039a\u0395\u03a1\u0391\u039c\u039f\u03a0\u039f\u0399\u03a9\u039d", "1504": "\u03a0\u03a1\u03a9\u03a4\u0395\u03a3 \u03a5\u039b\u0395\u03a3 \u03a0\u0391\u0399\u0393\u039d\u0399\u039f\u03a7\u0391\u03a1\u03a4\u03a9\u039d", "1505": "\u039a\u03a1\u03a5\u03a0\u03a4\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "1506": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u0397\u03a3 \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0395\u03a9\u03a3", "1507": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u039a\u03a9\u039d \u0395\u0393\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0395\u03a9\u039d", "1508": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u03a9\u039d \u039a\u0391\u0399 \u039a\u039b\u0397\u03a1\u039f\u0394\u039f\u03a4\u0397\u039c\u0391\u03a4\u03a9\u039d", "1509": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0397 \u03a3\u03a4\u0391\u03a4\u0399\u03a3\u03a4\u0399\u039a\u0397", "1510": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3", "1511": "\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0391 \u0391\u03a4\u03a5\u03a7\u0397\u039c\u0391\u03a4\u0391", "1512": "\u0391\u039d\u03a9\u03a4\u0395\u03a1\u039f \u0394\u0399\u0394\u0391\u039a\u03a4\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f", "1513": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u039f\u0399 \u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "1514": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f 
\u0393\u0395\u03a9\u0393\u03a1\u0391\u03a6\u0399\u039a\u03a9\u039d \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d", "1515": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u0392\u0399\u0392\u039b\u0399\u039f\u0398\u0397\u039a\u0395\u03a3", "1516": "\u03a4\u039c\u0397\u039c\u0391 \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0397\u03a3 \u03a6\u03a5\u03a3\u0399\u039a\u0397\u03a3 \u0391\u0393\u03a9\u0393\u0397\u03a3 \u039a\u0391\u0399 \u0391\u0398\u039b\u0397\u03a4\u0399\u03a3\u039c\u039f\u03a5", "1517": "\u03a0\u0395\u03a1\u0399\u039f\u03a1\u0399\u03a3\u039c\u039f\u03a3 \u03a3\u03a5\u039d\u0398\u0395\u03a3\u0395\u03a9\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d", "1518": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u0395\u03a0\u0391\u03a1\u03a7\u0399\u0391\u039a\u0397\u03a3 \u039f\u0394\u039f\u03a0\u039f\u0399\u0399\u0391\u03a3", "1519": "\u03a4\u0399\u039c\u039f\u039b\u039f\u0393\u0399\u0391 \u039f.\u03a4.\u0395 - \u039a\u039f\u03a3\u03a4\u039f\u039b\u039f\u0393\u0397\u03a3\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u03a9\u039d \u039f.\u03a4.\u0395", "1520": "\u0395\u0398\u039d\u0399\u039a\u0397 \u0392\u0399\u0392\u039b\u0399\u039f\u0398\u0397\u039a\u0397", "1521": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3 \u03a5\u03a0\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u03a9\u039d", "1522": "\u0391\u039d\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u03a0\u03a1\u039f\u03a3 \u03a4\u0399\u03a3 \u0391\u03a1\u03a7\u0395\u03a3", "1523": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0397 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u0391\u039a\u03a9\u039d \u0393\u03a1\u0391\u039c\u039c\u03a9\u039d", "1524": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0395\u03a0\u0399\u0394\u039f\u039c\u0391\u03a4\u0391", "1525": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0397 
\u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391 \u2013 \u0391\u0395\u03a1\u039f\u039b\u0395\u03a3\u03a7\u0395\u03a3", "1526": "\u03a4\u039c\u0397\u039c\u0391 \u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u0397\u03a3 \u03a4\u03a9\u039d \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d", "1527": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "1528": "\u03a0\u03a1\u039f\u0399\u039a\u039f\u0394\u039f\u03a4\u0397\u03a3\u0395\u0399\u03a3 \u0395\u039e \u0395\u0398\u039d\u0399\u039a\u03a9\u039d \u0393\u0391\u0399\u03a9\u039d", "1529": "\u0394\u0399\u039f\u03a1\u0398\u03a9\u03a3\u0397 \u0391\u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u03a9\u039d", "1530": "\u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0395\u03a9\u03a3", "1531": "\u039c\u0395\u03a4\u0391 \u03a4\u0397\u03a3 \u0393\u0395\u03a1\u039c\u0391\u039d\u0399\u0391\u03a3", "1532": "\u039f\u0399\u039a\u039f\u0394\u039f\u039c\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399", "1533": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a4\u0399\u039a\u039f\u0399 \u039d\u039f\u039c\u039f\u0399", "1534": "\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u0393\u03a1\u0391\u03a6\u0395\u0399\u039f\u03a5", "1535": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u039d\u0391\u0395\u03a1\u0399\u039f\u03a5 \u039a\u03a5\u039a\u039b\u039f\u03a6\u039f\u03a1\u0399\u0391\u03a3", "1536": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u039a\u0391\u03a5\u03a3\u0399\u039c\u03a9\u039d", "1537": "\u039f\u039c\u039f\u039b\u039f\u0393\u0399\u0391\u039a\u0391 \u0394\u0391\u039d\u0395\u0399\u0391", "1538": "\u0395\u03a1\u0393\u0391", "1539": "\u03a3\u03a7\u039f\u039b\u0397 \u039d\u0391\u03a5\u03a4\u0399\u039a\u03a9\u039d 
\u0394\u039f\u039a\u0399\u039c\u03a9\u039d", "1540": "\u03a0\u03a9\u039b\u0397\u03a3\u0397 \u03a6\u0391\u03a1\u039c\u0391\u039a\u03a9\u039d \u0391\u03a0\u039f \u0399\u0391\u03a4\u03a1\u039f\u03a5\u03a3", "1541": "\u03a3\u0397\u039c\u0391\u03a4\u0391 \u0395\u0398\u039d\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391\u03a3 \u039a\u0391\u0399 \u039d\u0397\u039f\u039b\u039f\u0393\u0397\u03a3\u0395\u03a9\u03a3", "1542": "\u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u039f\u0399 \u03a3\u03a4\u039f\u0399\u03a7\u0395\u0399\u03a9\u0394\u039f\u03a5\u03a3", "1543": "\u0395\u03a6\u0395\u03a4\u0395\u0399\u0391 \u039a\u0391\u0399 \u03a0\u03a1\u03a9\u03a4\u039f\u0394\u0399\u039a\u0395\u0399\u0391", "1544": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u03a0\u03a1\u039f\u0395\u0394\u03a1\u0399\u0391\u03a3 \u039a\u03a5\u0392\u0395\u03a1\u039d\u0397\u03a3\u0395\u03a9\u03a3", "1545": "\u039c\u039f\u03a1\u03a6\u03a9\u03a4\u0399\u039a\u039f\u03a3 \u2013 \u039a\u0399\u039d\u0397\u039c\u0391\u03a4\u039f\u0393\u03a1\u0391\u03a6\u039f\u03a3", "1546": "\u039a\u0391\u03a4\u0391\u039c\u0395\u03a4\u03a1\u0397\u03a3\u0397 \u03a7\u03a9\u03a1\u0397\u03a4\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391\u03a3", "1547": "\u03a6\u03a9\u03a4\u0391\u0395\u03a1\u0399\u039f", "1548": "\u03a0\u0391\u0398\u0397\u03a4\u0399\u039a\u0397 \u0391\u0395\u03a1\u0391\u039c\u03a5\u039d\u0391", "1549": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u039d\u039f\u03a3\u0397\u039b\u0395\u03a5\u03a4\u0399\u039a\u03a9\u039d \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u03a9\u039d", "1550": "\u039c\u0395 \u03a4\u0397\u039d \u039a\u03a5\u03a0\u03a1\u039f", "1551": "\u039a\u039f\u039b\u039b\u0397\u0393\u039f\u0399 (\u0395\u03a0\u0399\u039c\u039f\u03a1\u03a4\u039f\u0399 \u039a\u0391\u039b\u039b\u0399\u0395\u03a1\u0393\u0397\u03a4\u0395\u03a3)", "1552": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a1\u03a9\u0393\u0397\u03a3 \u039b.\u03a3", "1553": 
"\u0399\u03a7\u0398\u03a5\u039f\u03a3\u039a\u0391\u039b\u0395\u03a3", "1554": "\u03a3\u03a7\u0397\u039c\u0391 \u039a\u0391\u0399 \u03a4\u0399\u039c\u0397 \u03a0\u03a9\u039b\u0397\u03a3\u0397\u03a3 \u0395\u03a6\u0397\u039c\u0395\u03a1\u0399\u0394\u03a9\u039d", "1555": "\u03a5\u0399\u039f\u0398\u0395\u03a3\u0399\u0391", "1556": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u0395\u03a1\u0393\u03a9\u039d \u0391\u03a1\u039c\u039f\u0394\u0399\u039f\u03a4\u0397\u03a4\u0391\u03a3 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3", "1557": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5", "1558": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u0395\u03a3", "1559": "\u0395\u0393\u0393\u0395\u0399\u039f\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391", "1560": "\u03a0\u0391\u0399\u0394\u0391\u0393\u03a9\u0393\u0399\u039a\u0395\u03a3 \u0391\u039a\u0391\u0394\u0397\u039c\u0399\u0395\u03a3", "1561": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u0395\u03a1\u0393\u0391\u03a4\u039f\u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u039c\u0395\u03a4\u0391\u039b\u039b\u039f\u03a5 (\u03a4\u0391.\u03a0.\u0395.\u039c.)", "1562": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0397 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u0391\u0395\u03a1\u039f\u03a3\u039a\u0391\u03a6\u03a9\u039d", "1563": "\u0395\u039d\u03a9\u03a3\u0397 \u0391\u03a0\u039f\u03a3\u03a4\u03a1\u0391\u03a4\u03a9\u039d \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u0392.\u0391", "1564": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u0395\u03a1\u0393\u0391\u03a4\u03a9\u039d \u0393\u0395\u03a9\u03a1\u0393\u0399\u0391\u03a3", "1565": "\u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 
\u039a\u0391\u039b\u039b\u0399\u03a4\u0395\u03a7\u039d\u0399\u039a\u03a9\u039d \u0395\u039a\u0394\u0397\u039b\u03a9\u03a3\u0395\u03a9\u039d-\u03a6\u0395\u03a3\u03a4\u0399\u0392\u0391\u039b", "1566": "\u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391\u039a\u0395\u03a3 \u03a3\u03a5\u039d\u0395\u03a0\u0395\u0399\u0395\u03a3 \u03a4\u0397\u03a3 \u03a0\u039f\u0399\u039d\u0397\u03a3", "1567": "\u03a4\u0397\u039b\u0395\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397 \u0391\u039d\u03a4\u0391\u03a0\u039f\u039a\u03a1\u0399\u03a3\u0397", "1568": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u039f\u039b\u039f\u0393\u03a9\u039d", "1569": "\u039c\u0395 \u03a4\u039f\u039d \u039a\u0391\u039d\u0391\u0394\u0391", "1570": "\u0391\u039b\u039b\u0397\u039b\u039f\u0393\u03a1\u0391\u03a6\u0399\u0391 \u03a5.\u0395.\u039d", "1571": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "1572": "\u039a\u039b\u0391\u0394\u039f\u03a3 \u0391\u03a5\u03a4\u039f\u03a4\u0395\u039b\u03a9\u03a3 \u0391\u03a0\u0391\u03a3\u03a7\u039f\u039b\u039f\u03a5\u039c\u0395\u039d\u03a9\u039d, \u0395\u039b\u0395\u03a5\u0398\u0395\u03a1\u03a9\u039d \u039a\u0391\u0399 \u0391\u039d\u0395\u039e\u0391\u03a1\u03a4\u0397\u03a4\u03a9\u039d", "1573": "\u03a3\u03a7\u039f\u039b\u0395\u0399\u0391 \u0392\u0391\u03a1\u03a5\u039a\u039f\u03a9\u039d \u0397 \u039a\u03a9\u03a6\u03a9\u039d", "1574": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u039a\u0391\u03a4\u03a9\u03a4\u0395\u03a1\u03a9\u039d \u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u03a9\u039d \u0395.\u039d", "1575": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0391 \u03a0\u039b\u039f\u0399\u0391 - \u03a3\u039a\u0391\u03a6\u0397 \u0391\u039d\u0391\u03a8\u03a5\u03a7\u0397\u03a3 - 
\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u039f\u0399 \u039b\u0399\u039c\u0395\u039d\u0395\u03a3 (\u039c\u0391\u03a1\u0399\u039d\u0395\u03a3)", "1576": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391\u03a4\u0391 \u0395\u039f\u03a1\u03a4\u03a9\u039d \u03a7\u03a1\u0399\u03a3\u03a4\u039f\u03a5\u0393\u0395\u039d\u039d\u03a9\u039d \u039a\u0391\u0399 \u03a0\u0391\u03a3\u03a7\u0391", "1577": "\u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u0391 - \u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "1578": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u03a1\u0395\u03a5\u039d\u0391\u03a3 \u039a\u0391\u0399 \u03a4\u0395\u03a7\u039d\u039f\u039b\u039f\u0393\u0399\u0391\u03a3", "1579": "\u03a3\u03a4\u0395\u0393\u0391\u03a3\u0397 \u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "1580": "\u03a0\u0391\u03a1\u0391\u03a1\u03a4\u0397\u039c\u0391\u03a4\u0391 \u0393\u0395\u039d\u0399\u039a\u039f\u03a5 \u03a7\u0397\u039c\u0395\u0399\u039f\u03a5", "1581": "\u039a\u0391\u0398\u0391\u03a1\u0399\u03a3\u03a4\u03a1\u0399\u0395\u03a3", "1582": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u039d\u0391\u03a5\u03a4\u039f\u0394\u0399\u039a\u0395\u0399\u039f\u03a5", "1583": "\u0391\u039c\u039f\u0399\u0392\u0395\u03a3 \u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u03a9\u039d", "1584": "\u0395\u03a0\u0399\u039c\u039f\u03a1\u03a6\u03a9\u03a3\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "1585": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399 \u0395\u03a0\u0399\u0392\u0391\u03a4\u0397\u0393\u03a9\u039d \u03a0\u039b\u039f\u0399\u03a9\u039d", "1586": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u03a4\u0391\u0399\u03a1\u0399\u0391\u03a3 \u0395\u039b\u039b. 
\u039a\u0391\u039b\u03a5\u039a\u039f\u03a0\u039f\u0399\u0395\u0399\u039f\u03a5-\u03a0\u03a5\u03a1\u0399\u03a4\u0399\u0394\u039f\u03a0\u039f\u0399\u0395\u0399\u039f\u03a5", "1587": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a4\u03a1\u0391\u03a0\u0395\u0396\u03a9\u039d", "1588": "\u039b\u03a5\u03a3\u03a3\u0399\u0391\u03a4\u03a1\u0395\u0399\u0391", "1589": "\u03a3\u03a5\u039d\u039f\u03a1\u0399\u0391\u039a\u0395\u03a3 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "1590": "\u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f \u039c\u039f\u03a5\u03a3\u0395\u0399\u039f", "1591": "\u039a\u0391\u0398\u0397\u039a\u039f\u039d\u03a4\u0391 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "1592": "\u0395\u03a0\u0395\u039a\u03a4\u0391\u03a3\u0397 \u03a4\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3", "1593": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u0391\u03a0\u0391\u039b\u039b\u0391\u0393\u0395\u03a3", "1594": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u03a3\u0397\u03a3", "1595": "\u0394\u0399\u0391\u03a1\u039a\u0397 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u0394\u0399\u039a\u0395\u0399\u0391", "1596": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0399\u039f\u0394\u039f\u03a4\u0397\u03a3\u0397 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u039f.\u0393.\u0391", "1597": "\u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0397\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u03a3", "1598": "\u03a6\u03a1\u039f\u039d\u03a4\u0399\u03a3\u03a4\u0395\u03a3 \u039c\u039f\u039d\u0391\u0394\u03a9\u039d", "1599": "\u0391\u03a1\u0391\u0392\u039f\u03a3\u0399\u03a4\u039f\u03a3", "1600": "\u039c\u0397\u03a4\u03a1\u039f\u03a0\u039f\u039b\u0395\u0399\u03a3", "1601": 
"\u03a6\u0399\u039b\u0391\u039d\u0398\u03a1\u03a9\u03a0\u0399\u039a\u0391 \u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u0391", "1602": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u03a0\u039f\u039b\u03a5\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "1603": "\u0395\u039e\u03a5\u0393\u0399\u0391\u039d\u03a4\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "1604": "\u03a6\u03a5\u039b\u039b\u0391 \u03a0\u039f\u0399\u039f\u03a4\u0397\u03a4\u0391\u03a3 \u039d\u0391\u03a5\u03a4\u03a9\u039d", "1605": "\u03a6\u0399\u039b\u0391\u039d\u0398\u03a1\u03a9\u03a0\u0399\u039a\u0391 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u0391 \u039a\u0391\u0399 \u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u0391", "1606": "\u0395\u03a3\u03a4\u0399\u0391 \u039d\u0391\u03a5\u03a4\u0399\u039a\u03a9\u039d", "1607": "\u0393\u039b\u03a5\u039a\u0391 \u039a\u0391\u0399 \u039a\u039f\u039d\u03a3\u0395\u03a1\u0392\u0395\u03a3", "1608": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a5\u03a0\u039f\u0392\u03a1\u03a5\u03a7\u0399\u03a9\u039d \u039a\u0391\u039b\u03a9\u0394\u0399\u03a9\u039d", "1609": "\u0395\u03a0\u0395\u039e\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391 \u039a\u0391\u0399 \u0395\u039c\u03a0\u039f\u03a1\u0399\u0391 \u03a3\u03a5\u039a\u03a9\u039d", "1610": "\u03a7\u0391\u03a1\u039f\u039a\u039f\u03a0\u0395\u0399\u039f", "1611": "\u0394\u0399\u0391\u039c\u0395\u03a4\u0391\u039a\u039f\u039c\u0399\u03a3\u0397 \u03a3\u03a4\u0397\u039d \u0391\u039b\u0392\u0391\u039d\u0399\u0391", "1612": "\u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0397 \u03a6\u03a5\u039b\u0391\u039a\u03a9\u039d", "1613": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u03a0\u0395\u03a1\u0399 \u039a\u03a5\u03a1\u0399\u0391\u039a\u0397\u03a3 \u0391\u03a1\u0393\u0399\u0391\u03a3", "1614": "\u039a\u0399\u039d\u0397\u039c\u0391\u03a4\u039f\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397 
\u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391", "1615": "\u03a0\u0399\u03a3\u03a4\u039f\u03a0\u039f\u0399\u0397\u03a4\u0399\u039a\u0391 \u03a0\u03a1\u039f\u0395\u039b\u0395\u03a5\u03a3\u0395\u03a9\u03a3", "1616": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0397 \u03a0\u03a1\u039f\u03a0\u0391\u0393\u0391\u039d\u0394\u0391", "1617": "\u0395\u0399\u03a3\u03a6\u039f\u03a1\u0391 \u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0395\u03a9\u039d", "1618": "\u039a\u0391\u0396\u0399\u039d\u039f", "1619": "\u039c\u0395 \u03a4\u0397\u039d \u0395\u039b\u0392\u0395\u03a4\u0399\u0391", "1620": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f\u0399 \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0395\u03a3", "1621": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a0\u039f\u0399\u039d\u0399\u039a\u0397\u03a3 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391\u03a3", "1622": "\u03a4\u039f\u03a0\u0399\u039a\u0395\u03a3 \u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0395\u03a3", "1623": "\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0395\u03a3 \u039a\u0395\u03a6\u0391\u039b\u0391\u0399\u039f\u03a0\u039f\u0399\u0397\u03a3\u0395\u03a9\u03a3", "1624": "\u039f\u03a1\u03a5\u0396\u0391", "1625": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u039f \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u039f.\u0393.\u0391", "1626": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a3\u03a7\u039f\u039b\u03a9\u039d \u03a0.\u039d", "1627": "\u0392\u0391\u03a3\u0399\u039b\u0395\u0399\u0391 \u039a\u0391\u0399 \u0391\u039d\u03a4\u0399\u0392\u0391\u03a3\u0399\u039b\u0395\u0399\u0391", "1628": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a3\u03a4\u0399\u03a3 \u0395\u03a0\u0391\u03a1\u03a7\u0399\u0395\u03a3 \u03a4.\u03a0. 
\u039a\u0391\u0399 \u0394", "1629": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0395\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0395\u03a3", "1630": "\u0392\u039f\u03a5\u039b\u0395\u03a5\u03a4\u0397\u03a1\u0399\u039f", "1631": "\u03a0\u039f\u03a1\u0398\u039c\u0395\u0399\u0391", "1632": "\u0395\u039a\u03a4\u0395\u039b\u0395\u03a3\u0397 \u03a5\u0394\u03a1\u0391\u03a5\u039b\u0399\u039a\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "1633": "\u0399\u039d\u03a3\u03a4\u0399\u03a4\u039f\u03a5\u03a4\u0391 \u039a\u03a1\u0397\u03a4\u0399\u039a\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5 - \u0391\u0399\u0393\u0391\u0399\u039f\u03a5 \u039a\u0391\u0399 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u0395\u03a1\u0395\u03a5\u039d\u0397\u03a4\u0399\u039a\u0391 \u039a\u0395\u039d\u03a4\u03a1\u0391", "1634": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3", "1635": "\u039a\u0395\u039d\u03a4\u03a1\u0391 \u03a0\u0391\u03a1\u0391\u0398\u0395\u03a1\u0399\u03a3\u039c\u039f\u03a5 -", "1636": "\u03a3\u03a7\u039f\u039b\u0395\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "1637": "\u039b\u0395\u03a0\u03a1\u0391", "1638": "\u0391\u0399\u03a3\u0398\u0397\u03a4\u0399\u039a\u039f\u0399", "1639": "\u0395\u039a\u039a\u0391\u0398\u0391\u03a1\u0399\u03a3\u0397 \u03a0\u039f\u0399\u039d\u0399\u039a\u03a9\u039d \u0395\u039e\u039f\u0394\u03a9\u039d", "1640": "\u0393\u0395\u039d. 
\u039f\u0399\u039a\u039f\u0394\u039f\u039c\u0399\u039a\u039f\u03a3 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3", "1641": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0394\u0391\u03a0\u0391\u039d\u03a9\u039d \u03a4\u039f\u03a5 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u03a3", "1642": "\u03a0\u0395\u03a4\u03a1\u0395\u039b\u0391\u0399\u039f\u039a\u0399\u039d\u0397\u03a4\u0391 \u039a\u0391\u0399 \u0399\u03a3\u03a4\u0399\u039f\u03a6\u039f\u03a1\u0391", "1643": "\u039a\u0391\u039b\u039b\u0399\u0395\u03a1\u0393\u0395\u0399\u0391 \u039a\u0391\u03a0\u039d\u039f\u03a5", "1644": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u039c\u039f\u039d\u0391\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d", "1645": "\u039a\u03a4\u0397\u039d\u0399\u0391\u03a4\u03a1\u0399\u039a\u0391 \u0399\u0394\u0399\u039f\u03a3\u039a\u0395\u03a5\u0391\u03a3\u039c\u0391\u03a4\u0391", "1646": "\u039c\u039f\u039d\u0399\u039c\u039f\u0399 \u039a\u0391\u0399 \u0395\u0398\u0395\u039b\u039f\u039d\u03a4\u0395\u03a3", "1647": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0395\u03a1\u0394\u03a9\u039d \u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0395\u03a9\u039d", "1648": "\u0391\u0393\u03a9\u0393\u0395\u03a3 \u0395\u039e\u03a9\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d", "1649": "\u039f\u03a1\u0393\u0391\u039d\u03a9\u03a3\u0397 \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u03a5 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039f\u03a5", "1650": "\u0391\u0393\u03a9\u0393\u0395\u03a3 \u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u03a9\u039d", "1651": "\u039d\u0391\u03a5\u03a4\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5", "1652": "\u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0391 \u0398\u0395\u03a3\u0395\u03a9\u039d", "1653": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0397 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u039f\u03a5 \u03a5\u039b\u0399\u039a\u039f\u03a5", "1654": 
"\u03a3\u03a5\u0393\u039a\u03a1\u039f\u03a4\u0397\u03a3\u0397 \u039a\u0391\u0399 \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u0399\u0391", "1655": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d (T.\u0395.\u0391.\u03a0.\u0391.\u0395.)", "1656": "\u03a3\u03a5\u039b\u039b\u039f\u0393\u0397 \u039a\u0391\u0399 \u0394\u0399\u0391\u039a\u0399\u039d\u0397\u03a3\u0397 \u03a0\u0395\u03a4\u03a1\u0395\u039b\u0391\u0399\u039f\u0395\u0399\u0394\u03a9\u039d \u0395\u03a1\u039c\u0391\u03a4\u03a9\u039d", "1657": "\u039a\u0395\u039d\u03a4\u03a1\u0391 \u0391\u0394\u03a5\u039d\u0391\u03a4\u0399\u03a3\u039c\u0391\u03a4\u039f\u03a3 \u2013 \u0394\u0399\u0391\u0399\u03a4\u039f\u039b\u039f\u0393\u0399\u0391\u03a3", "1658": "\u039f\u039c\u0391\u0394\u0399\u039a\u0397 \u039a\u0391\u03a4\u0391\u0393\u0393\u0395\u039b\u0399\u0391 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "1659": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u039c\u039f\u03a5\u03a3\u0395\u0399\u0391", "1660": "\u0392\u0395\u0392\u0391\u0399\u03a9\u03a3\u0397 \u039a\u0391\u0399 \u0395\u0399\u03a3\u03a0\u03a1\u0391\u039e\u0397 \u0395\u03a3\u039f\u0394\u03a9\u039d", "1661": "\u0393\u03a1\u0391\u03a6\u0395\u0399\u0391 \u03a4\u03a5\u03a0\u039f\u03a5", "1662": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f", "1663": "\u03a3\u03a5\u039d\u0395\u03a1\u0393\u0395\u0399\u0391 \u0395\u03a0\u0399\u03a3\u039a\u0395\u03a5\u03a9\u039d", "1664": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 
\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397\u03a3 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u039a\u0391\u0399 \u0391\u03a3\u0398\u0395\u039d\u0395\u0399\u0391\u03a3 \u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u03a9\u039d \u03a3\u03a4\u0391 \u039b\u0399\u039c\u0391\u039d\u0399\u0391 (\u03a4.\u0395.\u0391.\u03a0.\u0391.\u0395.\u039b.)", "1665": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u039a\u0391\u03a0\u039d\u0395\u03a1\u0393\u0391\u03a4\u03a9\u039d", "1666": "\u0391\u039d\u03a4\u0399\u03a3\u0397\u039a\u03a9\u039c\u0391\u03a4\u0391 (\u0395\u039e\u0391\u0393\u039f\u03a1\u0391 \u0398\u0397\u03a4\u0395\u0399\u0391\u03a3)", "1667": "\u03a1\u03a5\u039c\u039f\u03a5\u039b\u039a\u039f\u03a5\u039c\u0395\u039d\u0391 \u039f\u03a7\u0397\u039c\u0391\u03a4\u0391", "1668": "\u039d\u039f\u039c\u039f\u0399 \u0391\u039d\u0391\u03a6\u0395\u03a1\u039f\u039c\u0395\u039d\u039f\u0399 \u03a3\u0395 \u03a0\u039f\u039b\u039b\u0395\u03a3 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0395\u03a3", "1669": "\u039f\u0399\u039a\u039f\u03a3\u03a5\u03a3\u03a4\u0397\u039c\u0391\u03a4\u0391\u2013\u0392\u0399\u039f\u03a4\u039f\u03a0\u039f\u0399", "1670": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u03a9\u039d", "1671": "\u0395\u0398\u039d\u0399\u039a\u039f \u03a4\u03a5\u03a0\u039f\u0393\u03a1\u0391\u03a6\u0395\u0399\u039f", "1672": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u0391 \u039a\u0391\u03a4\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u0391", "1673": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u0392\u0399\u0392\u039b\u0399\u039f\u03a5-\u0395\u0398\u039d\u0399\u039a\u039f \u039a\u0395\u039d\u03a4\u03a1\u039f \u0392\u0399\u0392\u039b\u0399\u039f\u03a5-\u039b\u039f\u0393\u039f\u03a4\u0395\u03a7\u039d\u0399\u0391", "1674": "\u0394\u0391\u03a3\u039c\u039f\u0399 \u0391\u039d\u03a4\u0399\u039d\u03a4\u0391\u039c\u03a0\u0399\u0393\u039a", "1675": "\u0394\u0391\u03a3\u0397 
\u03a0\u0391\u03a1\u0391\u039c\u0395\u0398\u039f\u03a1\u0399\u03a9\u039d \u03a0\u0395\u03a1\u0399\u039f\u03a7\u03a9\u039d", "1676": "\u0398\u0395\u039f\u039b\u039f\u0393\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397", "1677": "\u039f\u03a1\u039f\u0399 - \u03a0\u03a1\u039f\u0394\u0399\u0391\u0393\u03a1\u0391\u03a6\u0395\u03a3 \u03a4\u03a5\u03a0\u039f\u03a0\u039f\u0399\u0397\u03a3\u0397\u03a3", "1678": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0392\u03a5\u039d\u0397\u03a3 \u039a\u0391\u0399 \u0396\u03a5\u0398\u039f\u03a5", "1679": "\u0391\u03a0\u039f\u0398\u0397\u039a\u0397 \u039a\u03a4\u0397\u039d\u0399\u0391\u03a4\u03a1\u0399\u039a\u03a9\u039d \u0395\u03a6\u039f\u0394\u0399\u03a9\u039d", "1680": "\u03a0\u0391\u03a1\u039f\u03a7\u0397 \u03a4\u0397\u039b\u0395\u03a6\u03a9\u039d\u0399\u039a\u03a9\u039d \u03a3\u03a5\u039d\u0394\u0395\u03a3\u0395\u03a9\u039d", "1681": "\u03a0\u0391\u03a1\u0391\u03a7\u03a9\u03a1\u0397\u03a3\u0397 \u0399\u0391\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d \u03a0\u0397\u0393\u03a9\u039d", "1682": "\u039c\u0391\u0398\u0397\u03a4\u0399\u039a\u0391 \u03a3\u03a5\u03a3\u03a3\u0399\u03a4\u0399\u0391", "1683": "\u03a0\u03a1\u039f\u03a3\u039b\u0397\u03a8\u0397 \u0395\u03a6\u0395\u0394\u03a1\u03a9\u039d, \u0391\u039d\u0391\u03a0\u0397\u03a1\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5, \u03a0\u039f\u039b\u03a5\u03a4\u0395\u039a\u039d\u03a9\u039d \u039a\u0391\u0399 \u0391\u039b\u039b\u03a9\u039d \u0391\u03a4\u039f\u039c\u03a9\u039d \u039c\u0395 \u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0391\u039d\u0391\u0393\u039a\u0395\u03a3", "1684": "\u0395\u03a1\u03a4 \u2013 3", "1685": "\u03a3\u03a7\u039f\u039b\u0397 \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "1686": "\u03a4\u039f\u03a0\u039f\u0398\u0395\u03a4\u0397\u03a3\u0395\u0399\u03a3 - \u039c\u0395\u03a4\u0391\u03a4\u0391\u039e\u0395\u0399\u03a3", "1687": "\u0394\u0399\u0395\u0398\u039d\u0395\u0399\u03a3 
\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391\u03a3", "1688": "\u03a6\u03a5\u03a3\u0399\u039a\u039f \u0391\u0395\u03a1\u0399\u039f", "1689": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "1690": "\u0394\u0399\u03a0\u039b\u03a9\u039c\u0391\u03a4\u039f\u03a5\u03a7\u039f\u0399 \u0391\u039d\u03a9\u03a4\u0391\u03a4\u03a9\u039d", "1691": "\u0395\u0398\u039d\u0399\u039a\u039f \u039d\u039f\u039c\u0399\u03a3\u039c\u0391\u03a4\u0399\u039a\u039f \u039c\u039f\u03a5\u03a3\u0395\u0399\u039f", "1692": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391 \u03a3\u03a4\u0397 \u0398\u0391\u039b\u0391\u03a3\u03a3\u0391", "1693": "\u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391, \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u0399\u0391 \u039a\u0391\u0399 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397", "1694": "\u0395\u0399\u0394\u0399\u039a\u0391 \u03a0\u03a1\u039f\u039d\u039f\u039c\u0399\u0391 \u0391\u039d\u03a9\u039d\u03a5\u039c\u03a9\u039d \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u03a9\u039d", "1695": "\u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u0395\u0399\u0391 \u03a4\u03a9\u039d \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d \u039a\u0391\u0399 \u0395\u0399\u03a3\u0391\u0393\u0393\u0395\u039b\u0399\u03a9\u039d", "1696": "\u0391\u039b\u0399\u03a0\u0391\u03a3\u03a4\u0391", "1697": "\u0395\u03a0\u0399\u0394\u039f\u03a3\u0397 \u0394\u0399\u039a\u039f\u0393\u03a1\u0391\u03a6\u03a9\u039d", "1698": "\u039a\u0395\u039d\u03a4\u03a1\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f \u0393\u0395\u03a9\u03a1\u0393\u0399\u0391\u03a3", "1699": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0391 \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u0391", "1700": "\u03a4\u0391\u039c\u0395\u0399\u0391\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 
\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u03a9\u039d", "1701": "\u039d\u039f\u03a3\u0397\u039b\u0395\u03a5\u03a4\u0399\u039a\u039f \u0399\u0394\u03a1\u03a5\u039c\u0391 \u039c.\u03a4.\u03a3", "1702": "\u0394\u0399\u039a\u0391\u0399\u039f \u0398\u0391\u039b\u0391\u03a3\u03a3\u0391\u03a3-\u03a5\u03a6\u0391\u039b\u039f\u039a\u03a1\u0397\u03a0\u0399\u0394\u0391", "1703": "\u0395\u0399\u0394\u0399\u039a\u039f\u03a3 \u03a6\u039f\u03a1\u039f\u03a3 \u039a\u0391\u03a4\u0391\u039d\u0391\u039b\u03a9\u03a3\u0397\u03a3", "1704": "\u039c\u0395\u0399\u039f\u039d\u039f\u03a4\u0399\u039a\u0391 \u03a3\u03a7\u039f\u039b\u0395\u0399\u0391", "1705": "\u0393\u03a1\u0391\u03a6\u0395\u0399\u0391 \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u03a9\u039d \u03a0\u039b\u0397\u03a1\u039f\u03a6\u039f\u03a1\u0399\u03a9\u039d", "1706": "\u03a3\u03a5\u039d\u03a4\u039f\u039d\u0399\u03a3\u03a4\u0399\u039a\u039f\u039d \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f\u039d \u039d\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a6\u03a5\u0393\u03a9\u039d", "1707": "\u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397 \u0391\u03a0\u039f\u03a1\u03a9\u039d \u039a\u0391\u0399 \u0391\u039d\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u03a9\u039d", "1708": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0395\u039d\u03a4\u03a1\u03a9\u039d \u0394\u0399\u0391\u03a3\u039a\u0395\u0394\u0391\u03a3\u0395\u03a9\u03a3 \u039a\u0391\u0399 \u03a0\u039f\u039b\u03a5\u03a4\u0395\u039b\u0395\u0399\u0391\u03a3", "1709": "\u03a3\u03a0\u039f\u0393\u0393\u0391\u039b\u0399\u0395\u03a5\u03a4\u0399\u039a\u0391 \u2013 \u0394\u03a5\u03a4\u0395\u03a3", "1710": "\u0394\u0399\u0395\u0398\u039d\u0395\u03a3 \u039d\u039f\u039c\u0399\u03a3\u039c\u0391\u03a4\u0399\u039a\u039f \u03a4\u0391\u039c\u0395\u0399\u039f", "1711": "\u0392\u0399\u0392\u039b\u0399\u039f \u0394\u0399\u0395\u039a\u0394\u0399\u039a\u0397\u03a3\u0395\u03a9\u039d", "1712": "\u0395\u0393\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397 - 
\u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u0399\u0391 \u039a\u0391\u03a4\u0391\u03a3\u039a\u0395\u03a5\u03a9\u039d \u039a\u0395\u03a1\u0391\u0399\u03a9\u039d", "1713": "\u0395\u039d\u03a9\u03a3\u0397 \u0394\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "1714": "\u039b\u039f\u0393\u0399\u03a3\u03a4\u0399\u039a\u039f\u03a3 \u039a\u0391\u0399 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u039f\u03a3 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3", "1715": "\u039a\u0391\u03a4\u03a9\u03a4\u0395\u03a1\u0391 \u039f\u03a1\u0393\u0391\u039d\u0391 \u03a3\u03a9\u039c\u0391\u03a4\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "1716": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0397\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u03a3", "1717": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395\u039b\u0395\u0393\u039a\u03a4\u0399\u039a\u039f\u03a5 \u03a3\u03a5\u039d\u0395\u0394\u03a1\u0399\u039f\u03a5", "1718": "\u0391\u0393\u039f\u03a1\u0395\u03a3 \u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u03a9\u039d \u03a0\u03a1\u039f\u0399\u039f\u039d\u03a4\u03a9\u039d", "1719": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d \u039a\u039b\u03a9\u03a3\u03a4\u039f\u03a5\u03a6\u0391\u039d\u03a4\u039f\u03a5\u03a1\u0393\u0399\u0391\u03a3", "1720": "\u039e\u0395\u039d\u0391\u0393\u039f\u0399 \u039a\u0391\u0399 \u0394\u0399\u0395\u03a1\u039c\u0397\u039d\u0395\u0399\u03a3", "1721": "\u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3", "1722": "\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u0393\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0395\u03a3 
ΑΘΗΝΩΝ-ΠΕΙΡΑΙΩΣ ΚΑΙ ΠΕΡΙΧΩΡΩΝ-Ο.Α.Σ.Α", "1723": "ΚΑΤΑΣΤΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΤΑΜΕΙΟΥ ΑΣΦΑΛΙΣΕΩΣ ΑΡΤΕΡΓΑΤΩΝ Κ.Λ.Π", "1724": "ΑΤΥΧΗΜΑΤΑ ΣΕ ΜΕΤΑΛΛΕΙΑ ΚΛΠ", "1725": "ΦΟΡΟΛΟΓΙΑ ΠΟΛΕΜΙΚΩΝ ΚΕΡΔΩΝ", "1726": "ΣΧΕΔΙΟ ΠΟΛΕΩΣ ΘΕΣΣΑΛΟΝΙΚΗΣ", "1727": "ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ ΑΓΡΟΤ. ΑΣΦΑΛΕΙΑΣ", "1728": "ΚΡΑΤΙΚΟ ΩΔΕΙΟ ΘΕΣΣΑΛΟΝΙΚΗΣ", "1729": "ΚΕΝΤΡΑ ΑΝΩΤΕΡΗΣ ΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ (Κ.A.Τ.Ε.)", "1730": "ΤΗΛΕΦΩΝΙΚΗ ΑΝΤΑΠΟΚΡΙΣΗ", "1731": "ΟΙΚΟΝΟΜΙΚΑ ΓΥΜΝΑΣΙΑ", "1732": "ΒΙΒΛΙΑ ΚΑΙ ΕΥΡΕΤΗΡΙΑ ΣΥΝΕΤΑΙΡΙΣΜΩΝ", "1733": "ΕΠΙΔΟΜΑ ΑΝΕΡΓΙΑΣ", "1734": "ΕΓΓΡΑΦΕΣ, ΕΞΕΤΑΣΕΙΣ, ΠΡΟΓΡΑΜΜΑΤΑ ΚΛΠ", "1735": "ΣΧΟΛΗ ΜΟΝΙΜΩΝ ΥΠΑΞΙΩΜΑΤΙΚΩΝ", "1736": "ΕΚΚΛΗΣΙΑ ΑΜΕΡΙΚΗΣ", "1737": "ΜΕΤΟΧΙΚΟ ΤΑΜΕΙΟ ΣΤΡΑΤΟΥ", "1738": "ΝΟΣΗΛΕΙΑ", "1739": "ΣΧΟΛΗ ΕΥΕΛΠΙΔΩΝ", "1740": "ΥΠΟΥΡΓΕΙΟ ΕΡΓΑΣΙΑΣ ΚΑΙ ΚΟΙΝΩΝΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ", "1741": "ΚΑΝΟΝΙΣΜΟΣ ΧΡΗΜΑΤΙΣΤΗΡΙΟΥ ΑΞΙΩΝ ΑΘΗΝΩΝ", "1742": "ΑΝΤΙΣΕΙΣΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ", "1743": "ΦΑΡΜΑΚΕΥΤΙΚΗ ΔΕΟΝΤΟΛΟΓΙΑ", "1744": "ΦΟΡΟΛΟΓΙΑ ΕΛΑΙΩΔΩΝ ΠΡΟΙΟΝΤΩΝ", "1745": "ΕΙΔΙΚΑ ΡΑΔΙΟΤΗΛΕΦΩΝΙΚΑ ΔΙΚΤΥΑ", "1746": "ΤΕΧΝΙΚΕΣ ΥΠΗΡΕΣΙΕΣ", "1747": "ΑΡΧΕΙΑ ΥΓΙΕΙΝΗΣ", "1748": "ΟΔΟΙΠΟΡΙΚΑ ΚΑΙ ΑΠΟΖΗΜΙΩΣΕΙΣ ΑΠΟΣΤΟΛΩΝ ΕΞΩΤΕΡΙΚΟΥ", "1749": "ΔΙΑΦΟΡΟΙ ΛΟΓΙΣΤΙΚΟΙ ΝΟΜΟΙ", "1750": "ΕΚΚΛΗΣΙΑΣΤΙΚΟΙ ΥΠΑΛΛΗΛΟΙ", "1751": "ΝΑΥΤΙΚΑ ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΣΩΜΑΤΕΙΑ ΚΑΙ ΟΜΟΣΠΟΝΔΙΕΣ", "1752": "ΤΕΛΗ ΧΡΗΣΗΣ ΑΕΡΟΛΙΜΕΝΩΝ", "1753": "ΠΡΟΑΙΡΕΤΙΚΗ ΑΣΦΑΛΙΣΗ", "1754": "ΜΕ ΤΗ ΛΙΒΥΗ", "1755": "ΠΟΤΑΜΟΠΛΟΙΑ ΦΟΡΤΙΟΥ ΥΓΡΩΝ ΚΑΥΣΙΜΩΝ", "1756": "ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝΙΚΩΝ ΗΛΕΚΤΡΙΚΩΝ ΣΙΔΗΡΟΔΡΟΜΩΝ ΑΘΗΝΩΝ-ΠΕΙΡΑΙΩΣ (Τ.Σ.Π.-Η.Σ.Α.Π)", "1757": "ΜΕΣΑΖΟΝΤΕΣ", "1758": "ΣΤΡΑΤΙΩΤΙΚΟΣ ΠΟΙΝΙΚΟΣ", "1759": "ΔΙΚΑΙΩΜΑΤΑ ΚΑΙ ΚΑΘΗΚΟΝΤΑ ΦΟΙΤΗΤΩΝ", "1760": "ΠΡΟΕΔΡΙΑ ΔΗΜΟΚΡΑΤΙΑΣ", "1761": "ΚΩΔΙΚΑΣ ΕΜΠΟΡΙΚΟΥ ΝΟΜΟΥ", "1762": "ΣΥΝΤΑΞΙΟΔΟΤΗΣΗ Ο.Γ.Α", "1763": "ΣΑΝΑΤΟΡΙΑ", "1764": "ΕΛΕΓΧΟΣ ΕΜΠΟΡΙΟΥ ΕΙΔΩΝ ΠΡΩΤΗΣ ΑΝΑΓΚΗΣ", "1765": "ΒΑΛΑΝΙΔΙΑ", "1766": "ΠΟΛΥΤΕΧΝΙΚΗ ΣΧΟΛΗ ΠΑΝΕΠΙΣΤΗΜΙΟΥ ΠΑΤΡΩΝ", "1767": "ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΠΕΛΟΠΟΝΝΗΣΟΥ", "1768": "ΔΙΕΘΝΗΣ ΟΡΓΑΝΙΣΜΟΣ ΧΡΗΜΑΤΟΔΟΤΗΣΕΩΣ", "1769": "ΜΕΤΑΦΟΡΑ ΣΤΟ ΕΣΩΤΕΡΙΚΟ", "1770": "ΙΣΤΟΡΙΚΟ ΑΡΧΕΙΟ ΥΔΡΑΣ", "1771": "ΕΓΚΑΤΑΣΤΑΣΗ ΚΑΙ ΚΙΝΗΣΗ ΑΛΛΟΔΑΠΩΝ", "1772": "ΣΧΟΛΗ ΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ ΑΞΙΩΜΑΤΙΚΩΝ", "1773": "ΓΑΜΟΣ ΣΤΡΑΤΙΩΤΙΚΩΝ", "1774": "ΑΠΑΓΟΡΕΥΣΗ ΕΞΟΔΟΥ ΟΦΕΙΛΕΤΩΝ", "1775": "ΠΡΩΤΕΣ ΥΛΕΣ ΨΕΚΑΣΤΗΡΩΝ", "1776": "ΦΙΛΕΚΠΑΙΔΕΥΤΙΚΗ ΕΤΑΙΡΕΙΑ", "1777": "ΑΔΕΙΕΣ ΟΔΗΓΩΝ ΑΥΤΟΚΙΝΗΤΩΝ", "1778": "ΕΘΝΙΚΗ ΠΙΝΑΚΟΘΗΚΗ ΚΑΙ ΜΟΥΣΕΙΟ ΑΛ. ΣΟΥΤΣΟΥ", "1779": "ΤΑΧΥΔΡΟΜΙΚΑ ΔΕΜΑΤΑ", "1780": "ΕΙΣΠΡΑΞΗ ΠΟΡΩΝ", "1781": "ΟΡΓΑΝΩΣΗ ΚΑΙ ΛΕΙΤΟΥΡΓΙΑ ΤΕΧΝΙΚΩΝ ΣΧΟΛΩΝ", "1782": "ΔΙΑΘΕΣΗ ΓΑΙΩΝ ΣΤΗ ΘΕΣΣΑΛΙΑ", "1783": "ΔΙΑΚΡΙΣΗ ΑΣΦΑΛΙΣΜΕΝΩΝ", "1784": "ΑΓΑΘΟΕΡΓΑ ΙΔΡΥΜΑΤΑ ΚΕΡΚΥΡΑΣ", "1785": "ΥΠΑΙΘΡΙΟ-ΠΛΑΝΟΔΙΟ ΕΜΠΟΡΙΟ ΚΑΙ ΕΜΠΟΡΟΠΑΝΗΓΥΡΕΙΣ", "1786": "ΕΞΑΓΩΓΙΚΑ ΤΕΛΗ", "1787": "ΥΠΟΥΡΓΙΚΟ ΣΥΜΒΟΥΛΙΟ - ΟΡΓΑΝΩΣΗ ΥΠΟΥΡΓΕΙΩΝ - ΚΥΒΕΡΝΗΤΙΚΕΣ ΕΠΙΤΡΟΠΕΣ", "1788": "ΑΥΤΟΚΙΝΗΤΑ ΚΑΙ ΑΜΑΞΙΔΙΑ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ", "1789": "ΥΠΗΡΕΣΙΕΣ ΠΕΡΙΦΕΡΕΙΑΚΗΣ ΑΝΑΠΤΥΞΗΣ", "1790": "ΔΙΑΤΙΜΗΣΗ ΦΑΡΜΑΚΩΝ", "1791": "ΦΟΡΟΛΟΓΙΑ ΕΙΔΩΝ ΠΟΛΥΤΕΛΕΙΑΣ", "1792": "ΝΑΥΤΙΚΗ ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ", "1793": "ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΩΝ ΠΕΤΡΕΛΑΙΟΕΙΔΩΝ", "1794": "ΔΩΡΟ ΕΟΡΤΩΝ ΕΦΗΜΕΡΙΔΟΠΩΛΩΝ", "1795": "ΔΙΕΥΚΟΛΥΝΣΕΙΣ ΓΙΑ ΤΗΝ ΑΝΟΙΚΟΔΟΜΗΣΗ", "1796": "ΕΠΙΣΚΕΥΑΣΤΕΣ - ΣΥΝΕΡΓΕΙΑ ΕΠΙΣΚΕΥΗΣ ΑΥΤΟΚΙΝΗΤΩΝΟΔΙΚΗ ΒΟΗΘΕΙΑ ΟΧΗΜΑΤΩΝ", "1797": "ΠΑΡΑΧΩΡΗΣΗ ΔΑΣΩΝ", "1798": "ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΑΣΘΕΝΕΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΤΡΑΠΕΖΩΝ ΠΙΣΤΕΩΣ, ΓΕΝΙΚΗΣ ΚΑΙ ΑΜΕΡΙΚΑΝ ΕΞΠΡΕΣ", "1799": "ΠΛΗΤΤΟΜΕΝΑ ΑΠΟ ΤΗΝ ΑΝΕΡΓΙΑ ΕΠΑΓΓΕΛΜΑΤΑ", "1800": "ΤΑΜΕΙΑ Κ.Α.Τ.Ε", "1801": "ΕΙΔΙΚΟΙ ΣΤΡΑΤΙΩΤΙΚΟΙ ΟΡΓΑΝΙΣΜΟΙ", "1802": "ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΙΟΝΙΚΗΣ ΚΑΙ ΛΑΙΚΗΣ ΤΡΑΠΕΖΑΣ (Τ.Α.Π.- Ι.Λ.Τ.)", "1803": "ΠΡΟΣΤΑΣΙΑ ΑΠΟ ΑΚΤΙΝΟΒΟΛΙΕΣ", "1804": "ΚΡΑΤΙΚΟ ΘΕΑΤΡΟ Β. ΕΛΛΑΔΟΣ", "1805": "ΥΓΕΙΟΝΟΜΙΚΟΣ ΕΛΕΓΧΟΣ ΦΟΙΤΗΤΩΝ", "1806": "ΔΙΑΦΟΡΑ", "1807": "ΤΕΛΩΝΕΙΑΚΗ ΥΠΗΡΕΣΙΑ ΣΙΔΗΡΟΔΡΟΜΩΝ", "1808": "ΕΦΕΥΡΕΣΕΙΣ ΑΦΟΡΩΣΑΙ ΕΘΝ. ΑΜΥΝΑ", "1809": "ΥΠΟΒΡΥΧΙΟΣ ΤΗΛΕΓΡΑΦΟΣ", "1810": "ΑΔΕΙΕΣ ΟΙΚΟΔΟΜΗΣ ΞΕΝΟΔΟΧΕΙΩΝ", "1811": "ΙΝΣΤΙΤΟΥΤΟ ΒΥΖΑΝΤΙΝΩΝ ΣΠΟΥΔΩΝ", "1812": "ΣΧΟΛΗ ΓΕΩΤΕΧΝΙΚΩΝ ΕΠΙΣΤΗΜΩΝ ΠΑΝΜΙΟΥ ΘΕΣΝΙΚΗΣ", "1813": "ΒΙΒΛΙΟΘΗΚΕΣ", "1814": "ΤΑΜΕΙΑ ΑΝΕΓΕΡΣΕΩΣ ΔΙΔΑΚΤΗΡΙΩΝ", "1815": "ΕΠΙΔΟΜΑ ΒΙΒΛΙΟΘΗΚΗΣ", "1816": "ΚΑΤΑΣΤΗΜΑΤΑ ΑΦΟΡΟΛΟΓΗΤΩΝ ΕΙΔΩΝ", "1817": "ΕΠΙΧΕΙΡΗΣΕΙΣ ΠΕΡΙΘΑΛΨΕΩΣ ΗΛΙΚΙΩΜΕΝΩΝ Η ΑΝΑΠΗΡΩΝ", "1818": "ΛΙΜΕΝΙΚΟΙ ΣΤΑΘΜΟΙ", "1819": "ΝΟΜΟΘΕΤΙΚΕΣ ΕΞΟΥΣΙΟΔΟΤΗΣΕΙΣ", "1820": "ΘΑΛΑΜΟΙ ΡΑΔΙΟΙΣΟΤΟΠΩΝ", "1821": "ΔΙΟΙΚΗΣΗ ΕΚΚΛΗΣΙΑΣΤΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ", "1822": "ΑΠΑΓΟΡΕΥΜΕΝΕΣ ΚΑΙ", "1823": "ΗΘΟΠΟΙΟΙ", "1824": "ΣΥΜΒΑΣΕΙΣ ΠΕΡΙ ΔΙΕΘΝΩΝ ΕΚΘΕΣΕΩΝ", "1825": "ΣΦΡΑΓΙΣΤΟΣ ΧΑΡΤΗΣ", "1826": "ΕΤΑΙΡΕΙΕΣ ΔΙΑΧΕΙΡΙΖΟΜΕΝΕΣ ΔΗΜΟΣΙΑ ΣΥΜΦΕΡΟΝΤΑ", "1827": "ΤΕΛΩΝΕΙΑΚΕΣ ΔΙΕΥΚΟΛΥΝΣΕΙΣ", "1828": "ΔΕΞΑΜΕΝΟΠΛΟΙΑ", "1829": "ΚΕΝΤΡΟ ΔΙΕΘΝΟΥΣ ΚΑΙ ΕΥΡΩΠΑΙΚΟΥ", "1830": "ΕΠΙΒΑΤΗΓΑ ΜΕΣΟΓΕΙΑΚΑ ΚΑΙ ΤΟΥΡΙΣΤΙΚΑ ΠΛΟΙΑ", "1831": "ΕΠΙΘΕΩΡΗΣΗ ΔΙΚΑΣΤΙΚΩΝ ΥΠΑΛΛΗΛΩΝ", "1832": "ΚΑΝΟΝΙΣΜΟΣ ΘΕΑΤΡΩΝ ΚΙΝΗΜΑΤΟΓΡΑΦΩΝ ΚΛΠ", "1833": "ΜΕΤΑΛΛΕΥΤΙΚΟΣ ΚΩΔΙΚΑΣ", "1834": "ΚΑΤΑΣΤΑΤΙΚΟ Τ.Ε.Α.Α.Π.Α.Ε", "1835": "ΠΑΝΕΠΙΣΤΗΜΙΑΚΗ ΛΕΣΧΗ", "1836": "ΕΜΠΟΡΙΚΑ ΚΑΙ ΒΙΟΜΗΧΑΝΙΚΑ ΣΗΜΑΤΑ - (ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ)", "1837": "ΕΠΙΔΟΜΑΤΑ ΑΠΟΛΥΟΜΕΝΩΝ ΟΠΛΙΤΩΝ ΩΣ ΑΝΙΚΑΝΩΝ", "1838": "ΣΥΜΒΟΥΛΙΟ ΕΝΕΡΓΕΙΑΣ", "1839": "ΣΧΟΛΗ ΝΟΜΙΚΩΝ,ΟΙΚΟΝΟΜΙΚΩΝ ΚΑΙ ΠΟΛΙΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ", "1840": "ΠΡΟΠΛΗΡΩΜΕΣ ΚΑΙ ΠΡΟΚΑΤΑΒΟΛΕΣ", "1841": "ΚΛΑΔΟΣ ΑΣΘΕΝΕΙΑΣ Τ.Ε.Β.Ε", "1842": "ΔΙΑΝΟΜΗ ΓΑΙΩΝ ΚΩΠΑΙΔΑΣ", "1843": "ΠΡΟΣΩΠΙΚΟ ΑΣΦΑΛΕΙΑΣ Ν.Π.Δ.Δ. - ΟΡΓΑΝΙΣΜΩΝ & ΕΠΙΧΕΙΡΗΣΕΩΝ", "1844": "ΥΠΟΥΡΓΕΙΟ ΥΠΟΔΟΜΩΝ, ΜΕΤΑΦΟΡΩΝ ΚΑΙ ΔΙΚΤΥΩΝ", "1845": "ΑΕΡΟΝΑΥΑΓΟΣΩΣΤΙΚΗ ΜΟΝΑΔΑ", "1846": "ΚΟΥΡΕΙΑ, ΚΟΜΜΩΤΗΡΙΑ Κ.Λ.Π", "1847": "ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΔΙΚΑΣΤΙΚΩΝ ΕΠΙΜΕΛΗΤΩΝ", "1848": "ΕΙΔΙΚΑ ΣΥΝΕΡΓΕΙΑ", "1849": "ΚΑΤΕΨΥΓΜΕΝΑ ΚΡΕΑΤΑ", "1850": "ΜΕΣΟΓΕΙΑΚΑ ΔΡΟΜΟΛΟΓΙΑ ΕΠΙΒΑΤΗΓΩΝ ΠΛΟΙΩΝ", "1851": "ΣΥΓΚΡΟΤΗΣΗ ΠΡΟΣΩΠΙΚΟΥ ΑΕΡΟΠΟΡΙΑΣ", "1852": "ΥΠΑΛΛΗΛΙΚΟΣ ΚΩΔΙΚΑΣ", "1853": "ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΠΕΡΙ ΦΑΡΜΑΚΕΙΩΝ", "1854": "ΔΙΑΦΟΡΟΙ ΣΤΕΓΑΣΤΙΚΟΙ ΝΟΜΟΙ", "1855": "ΥΠΟΥΡΓΕΙΟ ΣΥΝΤΟΝΙΣΜΟΥ", "1856": "ΠΡΟΣΛΗΨΕΙΣ ΣΤΟ ΔΗΜΟΣΙΟ", "1857": "ΤΑΜΕΙΟ ΕΠΙΚ. ΑΣΦΑΛ. ΠΡΟΣΩΠ. Ο.Ε.Α.Σ. ΚΑΙ ΥΠΑΛΛ. ΓΡΑΦΕΙΩΝ ΚΟΙΝΩΝ ΤΑΜΕΙΩΝ ΙΔΙΩΤΙΚΩΝ ΛΕΩΦΟΡΕΙΩΝ", "1858": "ΣΤΡΑΤΙΩΤΙΚΗ ΑΣΤΥΝΟΜΙΑ", "1859": "ΝΟΜΙΣΜΑΤΙΚΕΣ ΣΥΜΒΑΣΕΙΣ", "1860": "ΑΡΧΗ ΔΙΑΣΦΑΛΙΣΗΣ ΑΠΟΡΡΗΤΟΥ ΕΠΙΚΟΙΝΩΝΙΩΝ (Α.Δ.Α.Ε.)", "1861": "ΣΤΡΑΤΙΩΤΙΚΑ ΣΥΝΕΡΓΕΙΑ", "1862": "ΠΡΟΣΩΠΙΚΗ ΚΡΑΤΗΣΗ", "1863": "ΕΦΗΜΕΡΙΔΑ ΤΗΣ ΚΥΒΕΡΝΗΣΕΩΣ", "1864": "ΑΝΩΤΑΤΟ ΥΓΕΙΟΝΟΜΙΚΟ ΣΥΜΒΟΥΛΙΟ", "1865": "ΓΡΑΜΜΑΤΕΙΣ ΣΤΡΑΤΟΔΙΚΕΙΩΝ", "1866": "ΚΑΤΑΣΤΑΣΗ ΔΙΟΠΩΝ, ΝΑΥΤΩΝ ΚΑΙ ΝΑΥΤΟΠΑΙΔΩΝ", "1867": "ΠΕΡΙΠΤΩΣΕΙΣ ΑΜΟΙΒΑΙΑΣ ΣΥΝΔΡΟΜΗΣ", "1868": "ΥΠΟΝΟΜΟΙ ΠΡΩΤΕΥΟΥΣΑΣ", "1869": "ΤΕΛΗ ΔΙΑΔΡΟΜΗΣ ΕΝΑΕΡΙΟΥ ΧΩΡΟΥ", "1870": "ΥΓΕΙΟΝΟΜΙΚΑΙ ΕΠΙΤΡΟΠΑΙ", "1871": "ΙΑΤΡΙΚΕΣ ΕΙΔΙΚΟΤΗΤΕΣ", "1872": "ΕΡΤ – 2", "1873": "ΕΚΤΕΛΕΣΗ ΕΡΓΩΝ Ο.Σ.Ε.ΚΑΙ ΣΥΝΔΕΔΕΜΕΝΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ", "1874": "ΓΕΩΡΓΙΚΕΣ ΣΧΟΛΕΣ", "1875": "ΣΥΜΜΕΤΟΧΗ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΣΕ ΠΡΟΜΗΘΕΙΕΣ ΔΗΜΟΣΙΟΥ", "1876": "ΔΙΚΑΙΩΜΑ ΧΟΡΤΟΝΟΜΗΣ", "1877": "ΟΙΚΟΚΥΡΙΚΕΣ ΣΧΟΛΕΣ", "1878": "ΚΕΝΤΡΑ ΥΓΕΙΑΣ-ΠΟΛΥΙΑΤΡΕΙΑ", "1879": "ΔΙΚΑΣΤΗΡΙΟ ΣΥΝΔΙΑΛΛΑΓΗΣ ΚΑΙ ΔΙΑΙΤΗΣΙΑΣ", "1880": "ΕΠΙΘΕΩΡΗΣΗ ΙΧΘΥΩΝ", "1881": "ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΕΞΕΥΓΕΝΙΣΜΟΥ ΔΕΝΔΡΩΝ", "1882": "ΦΟΙΤΗΤΕΣ", "1883": "ΔΟΜΗΣΗ ΕΠΙ ΡΥΜΟΤΟΜΟΥΜΕΝΩΝ ΑΚΙΝΗΤΩΝ", "1884": "ΑΠΑΣΧΟΛΗΣΗ - ΕΞΕΙΔΙΚΕΥΣΗ - ΚΑΤΑΡΤΙΣΗ ΑΝΕΡΓΩΝ", "1885": "ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΥΠΑΛΛΗΛΩΝ ΦΑΡΜΑΚΕΥΤΙΚΩΝ ΕΡΓΑΣΙΩΝ (Τ.Ε.Α.Υ.Φ.Ε.)", "1886": "ΝΟΜΙΣΜΑΤΙΚΟ ΣΥΣΤΗΜΑ", "1887": "ΑΠΟΓΡΑΦΗ ΝΑΥΤΙΚΩΝ", "1888": "ΕΘΝΙΚΟ ΘΕΑΤΡΟ", "1889": "ΥΠΗΡΕΣΙΑ ΕΠΙΣΤΗΜΟΝΙΚΗΣ ΄ΕΡΕΥΝΑΣ ΚΑΙ ΑΝΑΠΤΥΞΕΩΣ", "1890": "ΠΑΡΟΧΕΣ ΑΣΤΥΝΟΜΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝΙΚΗΣ ΑΣΤΥΝΟΜΙΑΣ", "1891": "ΣΙΒΙΤΑΝΙΔΕΙΟΣ ΣΧΟΛΗ", "1892": "ΣΤΡΑΤΙΩΤΙΚΗ ΙΑΤΡΙΚΗ ΣΧΟΛΗ", "1893": "ΥΠΟΥΡΓΕΙΟ ΚΟΙΝΩΝΙΚΩΝ ΥΠΗΡΕΣΙΩΝ", "1894": "ΑΠΑΓΟΡΕΥΣΗ ΑΠΑΛΛΟΤΡΙΩΣΗΣ ΠΛΟΙΩΝ", "1895": "ΠΑΝΕΠΙΣΤΗΜΙΑΚΑ ΣΥΓΓΡΑΜΜΑΤΑ", "1896": "ΜΟΥΣΟΥΛΜΑΝΟΙ", "1897": "ΔΙΚΑΣΤΙΚΟΙ ΣΥΜΒΟΥΛΟΙ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ", "1898": "ΑΕΡΟΠΟΡΙΚΑ ΕΡΓΑ ΚΑΙ ΠΡΟΜΗΘΕΙΕΣ", "1899": "ΤΟΠΙΚΑ ΕΓΓΕΙΟΒΕΛΤΙΩΤΙΚΑ ΕΡΓΑ", "1900": "ΦΟΡΟΛΟΓΙΑ ΖΩΩΝ", "1901": "ΣΥΝΤΑΓΜΑ", "1902": "ΝΟΜΟΙ ΠΕΡΙ ΧΡΗΜΑΤΙΣΤΗΡΙΟΥ - ΕΠΙΤΡΟΠΗ ΚΕΦΑΛΑΙΑΓΟΡΑΣ - ΧΡΗΜΑΤΙΣΤΗΡΙΑΚΗ ΑΓΟΡΑ ΠΑΡΑΓΩΓΩΝ", "1903": "ΓΕΩΤΡΗΣΕΙΣ", "1904": "ΤΑΜΕΙΑ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΚΑΙ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΕΜΠΟΡΙΚΗΣ ΤΡΑΠΕΖΑΣ ΕΛΛΑΔΑΣ (Τ.Ε.Α.Π.Ε.Τ.Ε ΚΑΙ Τ.Α.Π.Ε.Τ.Ε.)", "1905": "ΕΦΕΔΡΟΙ ΑΕΡΟΠΟΡΙΑΣ", "1906": "ΚΑΤ’ ΙΔΙΑΝ ΙΔΙΩΤΙΚΑ ΕΚΠΑΙΔΕΥΤΗΡΙΑ", "1907": "ΣΧΟΛΗ ΝΟΜΙΚΩΝ ΚΑΙ ΟΙΚΟΝΟΜΙΚΩΝ ΕΠΙΣΤΗΜΩΝ", "1908": "ΚΑΤΑΒΟΛΗ ΕΙΣΦΟΡΩΝ ΜΕ ΔΟΣΕΙΣ", "1909": "ΠΑΛΑΙΟΤΕΡΕΣ ΑΕΡΟΠΟΡΙΚΕΣ ΕΤΑΙΡΕΙΕΣ", "1910": "ΤΡΟΜΟΚΡΑΤΙΑ - ΟΡΓΑΝΩΜΕΝΗ", "1911": "ΤΑΜΕΙΑ ΕΛΙΑΣ-ΔΑΚΟΚΤΟΝΙΑ", "1912": "ΓΡΑΦΕΙΑ ΕΥΡΕΣΕΩΣ ΝΑΥΤΙΚΗΣ ΕΡΓΑΣΙΑΣ", "1913": "ΑΡΤΟΠΟΙΕΙΑ", "1914": "ΦΟΡΟΛΟΓΙΑ ΚΥΚΛΟΥ ΕΡΓΑΣΙΩΝ", "1915": "ΣΥΝΑΛΛΑΓΜΑΤΙΚΗ ΚΑΙ ΓΡΑΜΜΑΤΙΟ ΣΕ ΔΙΑΤΑΓΗ", "1916": "ΠΕΡΙΦΕΡΕΙΑΚΕΣ ΥΠΗΡΕΣΙΕΣ ΥΠΟΥΡΓΕΙΟΥ ΜΕΤΑΦΟΡΩΝ ΚΑΙ ΕΠΙΚΟΙΝΩΝΙΩΝ", "1917": "ΕΛΛΗΝΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΤΟΥΡΙΣΜΟΥ", "1918": "ΠΡΟΣΤΑΣΙΑ ΤΡΑΥΜΑΤΙΩΝ, ΑΙΧΜΑΛΩΤΩΝ ΚΑΙ ΑΜΑΧΟΥ ΠΛΗΘΥΣΜΟΥ", "1919": "ΚΑΝΟΝΙΣΜΟΣ ΛΕΙΤΟΥΡΓΙΑΣ Τ.Ε.Β.Ε", "1920": "ΣΤΕΓΑΣΗ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ", "1921": "ΑΘΛΗΤΙΣΜΟΣ ΚΑΙ ΨΥΧΑΓΩΓΙΑ Π. ΝΑΥΤΙΚΟΥ", "1922": "ΑΝΕΛΚΥΣΤΗΡΕΣ - ΑΝΥΨΩΤΙΚΑ ΜΕΣΑ ΚΑΙ ΜΗΧΑΝΗΜΑΤΑ", "1923": "ΣΥΝΤΑΞΕΙΣ ΠΛΗΡΩΜΑΤΩΝ ΕΠΙΤΑΚΤΩΝ ΠΛΟΙΩΝ", "1924": "ΔΙΚΑΙΩΜΑΤΑ ΥΠΕΡΗΜΕΡΙΑΣ", "1925": "ΚΩΔΙΚΑΣ ΠΟΛΕΜΙΚΩΝ ΣΥΝΤΑΞΕΩΝ", "1926": "ΚΑΠΝΟΣ", "1927": "ΠΡΟΣΤΑΣΙΑ ΣΕΙΣΜΟΠΛΗΚΤΩΝ", "1928": "ΑΠΟΣΤΡΑΤΕΙΕΣ ΚΑΙ ΑΠΟΚΑΤΑΣΤΑΣΕΙΣ", "1929": "ΠΡΟΣΩΠΙΚΟ ΕΠΑΓΓΕΛΜΑΤΙΚΩΝ ΣΧΟΛΩΝ", "1930": "ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΗΝ ΠΡΟΣΤΑΣΙΑ ΤΩΝ ΕΡΓΑΖΟΜΕΝΩΝ ΑΝΗΛΙΚΩΝ", "1931": "ΚΕΝΤΡΙΚΗ ΑΓΟΡΑ ΑΘΗΝΩΝ", "1932": "ΕΝΙΣΧΥΣΗ ΕΛΑΙΟΠΑΡΑΓΩΓΗΣ", "1933": "ΑΝΟΙΚΤΑ ΣΩΦΡΟΝΙΣΤΙΚΑ ΚΑΤΑΣΤΗΜΑΤΑ", "1934": "ΦΙΛΑΝΘΡΩΠΙΚΑ ΙΔΡΥΜΑΤΑ ΖΑΚΥΝΘΟΥ", "1935": "ΔΙΑΦΟΡΑ ΕΙΔΗ ΤΡΟΦΙΜΩΝ, ΠΟΤΩΝ & ΑΝΤΙΚΕΙΜΕΝΩΝ", "1936": "ΦΟΡΟΛΟΓΙΑ ΕΠΙΧΕΙΡΗΣΕΩΝ ΤΥΠΟΥ", "1937": "ΠΕΡΙΟΡΙΣΜΟΙ ΕΙΣΑΓΩΓΗΣ", "1938": "ΠΡΟΣΩΡΙΝΗ ΕΙΣΔΟΧΗ ΕΜΠΟΡΕΥΜΑΤΩΝ", "1939": "ΑΡΧΕΙΟ", "1940": "ΔΙΥΛΙΣΤΗΡΙΑ ΠΕΤΡΕΛΑΙΟΥ", "1941": "ΕΙΣΑΓΩΓΗ ΠΑΙΔΑΓΩΓΙΚΟΥ ΥΛΙΚΟΥ", "1942": "ΕΠΙΘΕΩΡΗΣΗ ΚΛΗΡΟΔΟΤΗΜΑΤΩΝ", "1943": "ΣΙΔΗΡΟΔΡΟΜΟΙ ΒΟΡΕΙΟΔΥΤΙΚΗΣ ΕΛΛΑΔΟΣ", "1944": "ΤΑΜΕΙΟ
\u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u03a4\u039f\u03a4\u0395\u03a7\u039d\u0399\u03a4\u03a9\u039d \u0394\u039f\u039c\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u039e\u03a5\u039b\u039f\u03a5\u03a1\u0393\u0399\u039a\u03a9\u039d \u0395\u03a1\u0393\u0391\u03a3\u0399\u03a9\u039d (\u03a4.\u0395.\u0391.\u0395.\u0394.\u039e.\u0395.)", "1945": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u03a3\u03a4\u0399\u03a3 \u03a0\u03a1\u0395\u03a3\u0392\u0395\u0399\u0395\u03a3", "1946": "\u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u0391\u039a\u039f\u03a3 \u03a0\u03a1\u039f\u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u0399\u03a3\u039c\u039f\u03a3 - \u03a5\u0393\u0395\u0399\u0391 \u03a0\u0391\u0399\u0394\u0399\u039f\u03a5", "1947": "\u0391\u03a1\u03a7\u0399\u0395\u03a1\u0395\u0399\u03a3", "1948": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u0391 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a5\u039d\u0397\u03a3", "1949": "\u039d\u039f\u03a3\u039f\u039a\u039f\u039c\u0395\u0399\u0391\u039a\u0397 \u03a0\u0395\u03a1\u0399\u0398\u0391\u039b\u03a8\u0397", "1950": "\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0397\u039c\u0391\u03a4\u0391 \u03a0\u03a9\u039b\u0397\u03a3\u0395\u03a9\u03a3 \u039f\u0399\u039d\u039f\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u03a9\u0394\u03a9\u039d \u03a0\u039f\u03a4\u03a9\u039d \u039a\u0391\u0399 \u039a\u0395\u039d\u03a4\u03a1\u0391 \u0394\u0399\u0391\u03a3\u039a\u0395\u0394\u0391\u03a3\u0395\u03a9\u03a3", "1951": "\u03a0\u03a1\u03a9\u03a4\u0395\u03a5\u039f\u03a5\u03a3\u0391", "1952": "\u03a0\u039f\u039b\u03a5\u03a4\u0395\u03a7\u039d\u0395\u0399\u039f \u039a\u03a1\u0397\u03a4\u0397\u03a3", "1953": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 
\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u03a9\u039d \u03a4\u03a3\u0399\u039c\u0395\u039d\u03a4\u03a9\u039d (\u03a4.\u0395.\u0391.\u03a0.\u0395.\u03a4.)", "1954": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u039f\u03a3 \u03a4\u0391\u03a0\u0397\u03a4\u039f\u03a5\u03a1\u0393\u0399\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3", "1955": "\u0395\u03a6\u0391\u03a1\u039c\u039f\u0393\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u0399\u039a\u039f\u03a5 \u039a\u03a9\u0394\u0399\u039a\u0391", "1956": "\u0397\u039b\u0395\u039a\u03a4\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u039f \u0395\u03a1\u0393\u0391\u03a3\u03a4\u0397\u03a1\u0399\u039f", "1957": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u0395\u03a1\u0393\u039f\u039b\u0397\u03a0\u03a4\u03a9\u039d", "1958": "\u039c\u0395\u03a3\u0399\u03a4\u0395\u03a3 \u0391\u03a3\u03a4\u0399\u039a\u03a9\u039d \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u03a9\u039d", "1959": "\u03a0\u039b\u03a9\u03a4\u0395\u03a3 \u0394\u0395\u039e\u0391\u039c\u0395\u039d\u0395\u03a3", "1960": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399 \u03a6\u039f\u03a1\u03a4\u03a9\u03a3\u0395\u03a9\u039d", "1961": "\u0395\u0399\u0394\u0399\u039a\u0391 \u0395\u03a0\u0399\u0394\u039f\u039c\u0391\u03a4\u0391", "1962": "\u03a0\u039f\u0399\u039d\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "1963": "\u0395\u0399\u0394\u0399\u039a\u039f\u03a3 \u039b\u039f\u0393\u0391\u03a1\u0399\u0391\u03a3\u039c\u039f\u03a3 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 (\u03a4.\u03a3.\u0395.\u03a5.\u03a0.)", "1964": "\u0395\u0398\u039d\u0399\u039a\u0397 \u0391\u039d\u03a4\u0399\u03a3\u03a4\u0391\u03a3\u0397", "1965": 
"\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u0397\u03a3 \u0391\u039d\u0391\u03a0\u03a4\u03a5\u039e\u0397\u03a3", "1966": "\u0395\u03a1\u0393\u0391 \u039a\u039f\u0399\u039d\u0397\u03a3 \u03a5\u03a0\u039f\u0394\u039f\u039c\u0397\u03a3", "1967": "\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 T\u0395\u039b\u03a9\u039d\u0395\u0399\u03a9\u039d \u03a0\u0395\u0399\u03a1\u0391\u0399\u0391", "1968": "\u0399\u0391\u03a4\u03a1\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u0399\u03a9\u0391\u039d\u039d\u0399\u039d\u03a9\u039d", "1969": "\u0396\u03a9\u039f\u039a\u039b\u039f\u03a0\u0397 \u039a\u0391\u0399 \u0396\u03a9\u039f\u039a\u03a4\u039f\u039d\u0399\u0391", "1970": "\u03a1\u03a5\u0398\u039c\u0399\u03a3\u0399\u03a3 \u039a\u0399\u039d\u0397\u03a3\u0395\u03a9\u03a3 \u0395\u039d \u039f\u0394\u039f\u0399\u03a3", "1971": "\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0395\u03a3 \u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391\u03a3 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u039c\u0395\u039d\u03a9\u039d - \u0391\u03a0\u039f\u03a6\u03a5\u039b\u0391\u039a\u0399\u0396\u039f\u039c\u0395\u039d\u03a9\u039d", "1972": "\u0394\u0391\u03a3\u0399\u039a\u0397 \u0394\u0399\u0395\u03a5\u0398\u0395\u03a4\u0397\u03a3\u0397 \u03a7\u0395\u0399\u039c\u0391\u03a1\u03a1\u03a9\u039d", "1973": "\u03a3\u03a5\u039d\u039f\u03a1\u0399\u0391\u039a\u039f\u0399 \u03a6\u03a5\u039b\u0391\u039a\u0395\u03a3", "1974": "\u03a3\u03a7\u039f\u039b\u0397 \u0398\u0395\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d \u03a0\u0391\u039d\u039c\u0399\u039f\u03a5 \u0399\u03a9\u0391\u039d\u039d\u0399\u039d\u03a9\u039d", "1975": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0397 \u03a0.\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "1976": "\u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a4\u0391\u03a3\u0399\u039f 
\u0395\u03a0\u0399\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u03a3\u0395\u03a9\u03a3 1974", "1977": "\u03a1\u0391\u0394\u0399\u039f\u03a4\u0397\u039b\u0395\u0393\u03a1\u0391\u03a6\u0399\u039a\u0397 \u039a\u0391\u0399 \u03a1\u0391\u0394\u0399\u039f\u03a4\u0397\u039b\u0395\u03a6\u03a9\u039d\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "1978": "\u03a6\u0391\u03a1\u039c\u0391\u039a\u0391-\u0399\u0394\u0399\u039f\u03a3\u039a\u0395\u03a5\u0391\u03a3\u039c\u0391\u03a4\u0391", "1979": "\u03a3\u03a5\u039d\u03a4\u0395\u039b\u0395\u03a3\u03a4\u0395\u03a3 \u039a\u0395\u03a1\u0394\u039f\u03a5\u03a3 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u03a9\u039d", "1980": "\u0395\u0398\u039d\u0399\u039a\u039f \u039a\u0395\u039d\u03a4\u03a1\u039f \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u03a9\u039d \u0395\u03a1\u0395\u03a5\u039d\u03a9\u039d", "1981": "\u039a\u0395\u03a6\u0391\u039b\u0391\u0399\u039f \u039d\u0391\u03a5\u03a4\u0399\u039a\u0397\u03a3 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a3\u0395\u03a9\u03a3", "1982": "\u0395\u0399\u03a3\u03a0\u03a1\u0391\u039e\u0397 \u0395\u03a3\u039f\u0394\u03a9\u039d \u03a0\u0391\u03a1\u0395\u039b\u0398\u039f\u03a5\u03a3\u03a9\u039d \u03a7\u03a1\u0397\u03a3\u0395\u03a9\u039d", "1983": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0397\u039d\u03a9\u039c\u0395\u039d\u03a9\u039d \u0395\u0398\u039d\u03a9\u039d", "1984": "\u03a3\u0395\u0399\u03a3\u039c\u039f\u03a0\u039b\u0397\u039a\u03a4\u039f\u0399 \u039d\u0397\u03a3\u039f\u03a5 \u0398\u0397\u03a1\u0391\u03a3", "1985": "\u039a\u0395\u039d\u03a4\u03a1\u0399\u039a\u0397 \u0391\u0393\u039f\u03a1\u0391 \u0398\u0395\u03a3\u03a3\u0391\u039b\u039f\u039d\u0399\u039a\u0397\u03a3", "1986": "\u0394\u0399\u0391\u03a6\u0398\u039f\u03a1\u0391 \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u03a9\u039d \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u03a9\u039d", "1987": 
"\u0393\u0395\u03a9\u03a0\u039f\u039d\u0399\u039a\u039f \u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f \u0391\u0398\u0397\u039d\u03a9\u039d", "1988": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u0394\u0399\u039a\u0395\u0399\u03a9\u039d", "1989": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "1990": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0391 \u039b\u0395\u03a9\u03a6\u039f\u03a1\u0395\u0399\u0391", "1991": "\u0394\u0391\u039d\u0395\u0399\u0391 \u0391\u03a0\u039f \u0395\u039a\u0394\u039f\u03a4\u0399\u039a\u0395\u03a3 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0395\u03a3", "1992": "\u0395\u03a0\u0399\u0398\u0391\u039b\u0391\u03a3\u03a3\u0399\u0391 \u0391\u03a1\u03a9\u0393\u0397 - \u03a1\u03a5\u039c\u039f\u03a5\u039b\u039a\u0397\u03a3\u0397 \u03a0\u039b\u039f\u0399\u03a9\u039d", "1993": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a4\u039f\u03a5 \u039a\u0391\u0398\u0395\u03a3\u03a4\u03a9\u03a4\u039f\u03a3", "1994": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u03a0\u0395\u03a1\u0399 \u03a5\u039b\u0399\u039a\u039f\u03a5 \u0395\u03a5\u0397\u039c\u0395\u03a1\u0399\u0391\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u039b\u039f\u039c\u0395\u039d\u03a9\u039d", "1995": "\u039c\u0395\u03a3\u0399\u03a4\u0395\u03a3 \u0395\u0393\u03a7\u03a9\u03a1\u0399\u03a9\u039d \u03a0\u03a1\u039f\u0399\u039f\u039d\u03a4\u03a9\u039d", "1996": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0397 \u039f\u03a1\u03a7\u0397\u03a3\u03a4\u03a1\u0391 \u0391\u0398\u0397\u039d\u03a9\u039d", "1997": "\u03a4\u039c\u0397\u039c\u0391\u03a4\u0391 \u039c\u039f\u03a5\u03a3\u0399\u039a\u03a9\u039d - \u0398\u0395\u0391\u03a4\u03a1\u0399\u039a\u03a9\u039d \u03a3\u03a0\u039f\u03a5\u0394\u03a9\u039d \u039a\u0391\u0399 
\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0391\u03a3 - \u039c\u0395\u03a3\u03a9\u039d \u039c\u0391\u0396\u0399\u039a\u0397\u03a3 \u0395\u039d\u0397\u039c\u0395\u03a1\u03a9\u03a3\u0397\u03a3", "1998": "\u03a0\u0395\u0399\u0398\u0391\u03a1\u03a7\u0399\u039a\u0397 \u0395\u039e\u039f\u03a5\u03a3\u0399\u0391 \u039b\u0399\u039c\u0395\u039d\u0399\u039a\u03a9\u039d \u0391\u03a1\u03a7\u03a9\u039d", "1999": "\u0399\u039d\u03a3\u03a4\u0399\u03a4\u039f\u03a5\u03a4\u039f \u0391\u039c\u03a5\u039d\u03a4\u0399\u039a\u03a9\u039d \u0391\u039d\u0391\u039b\u03a5\u03a3\u0395\u03a9\u039d (\u0399.\u0391.\u0391.)", "2000": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u039f\u0399 \u03a3\u03a4\u0391\u0398\u039c\u039f\u0399 \u0391\u03a3\u03a5\u03a1\u039c\u0391\u03a4\u039f\u03a5 - \u03a7\u03a1\u0397\u03a3\u0397 \u03a1\u0391\u0394\u0399\u039f\u03a3\u03a5\u03a7\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "2001": "\u0391\u039d\u0391\u0393\u039d\u03a9\u03a1\u0399\u03a3\u0397 \u039e\u0395\u039d\u03a9\u039d \u039a\u0391\u03a4\u0391\u039c\u0395\u03a4\u03a1\u0397\u03a3\u0395\u03a9\u039d", "2002": "\u0393\u0395\u039d\u039f\u039a\u03a4\u039f\u039d\u0399\u0391", "2003": "\u0395\u03a0\u0395\u039e\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391 \u039a\u0391\u03a0\u039d\u039f\u03a5", "2004": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u0395\u03a0\u0399\u039a\u03a1\u0391\u03a4\u0395\u0399\u0391\u03a3", "2005": "\u0399\u0391\u03a4\u03a1\u039f\u0399 \u0399.\u039a.\u0391", "2006": "\u03a5\u03a0\u039f\u0398\u0397\u039a\u0397", "2007": "\u0391\u03a1\u039c\u039f\u0394\u0399\u039f\u03a4\u0397\u03a4\u0391 \u039b\u0399\u039c\u0395\u039d\u0399\u039a\u039f\u03a5 \u03a3\u03a9\u039c\u0391\u03a4\u039f\u03a3", "2008": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0395\u03a3 \u0393\u0399\u0391 \u0395\u039a\u0398\u0395\u03a3\u0395\u0399\u03a3, \u03a3\u03a5\u039d\u0395\u0394\u03a1\u0399\u0391 \u039a\u039b\u03a0", "2009": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0397 
\u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391 \u0391\u039d\u0391\u03a3\u03a5\u0393\u039a\u03a1\u039f\u03a4\u0397\u03a3\u0397-\u0391\u039d\u0391\u03a0\u03a4\u03a5\u039e\u0397", "2010": "\u0391\u0395\u03a1\u039f\u0394\u03a1\u039f\u039c\u0399\u039f \u03a3\u03a0\u0391\u03a4\u03a9\u039d", "2011": "\u03a4\u039c\u0397\u039c\u0391 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u0393\u03a1\u0391\u03a6\u0399\u0391\u03a3 - \u039c\u0395\u03a3\u03a9\u039d \u039c\u0391\u0396\u0399\u039a\u0397\u03a3 \u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0391\u03a3", "2012": "\u03a4\u039f\u039a\u039f\u03a3", "2013": "\u0395\u039d\u0399\u03a3\u03a7\u03a5\u03a3\u0397 \u03a0\u039f\u039b\u0395\u039c\u039f\u03a0\u0391\u0398\u03a9\u039d \u039a\u039b\u03a0. \u0391\u0393\u03a1\u039f\u03a4\u03a9\u039d", "2014": "\u0395\u039e\u039f\u0394\u0391 \u039a\u0397\u0394\u0395\u0399\u0391\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d", "2015": "\u03a0\u0391\u03a1\u039f\u03a7\u0395\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d", "2016": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a3\u0399\u03a4\u039f\u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3", "2017": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u039f.\u0393.\u0391 \u0391\u03a0\u039f \u0391\u039d\u0395\u039c\u039f\u0398\u03a5\u0395\u039b\u039b\u0391 \u039a\u0391\u0399 \u03a0\u039b\u0397\u039c\u039c\u03a5\u03a1\u0391", "2018": "\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u039a\u0391\u03a4\u0391\u03a3\u039a\u0395\u03a5\u03a9\u039d \u039a\u0391\u0399 \u0395\u039e\u039f\u03a0\u039b\u0399\u03a3\u039c\u039f\u03a5", "2019": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u039f\u0399 \u03a5\u03a0\u039f\u039b\u039f\u0393\u039f\u0399", "2020": "\u0393\u0395\u039d\u0399\u039a\u0397 \u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u0395\u0399\u0391 \u0391\u0398\u039b\u0397\u03a4\u0399\u03a3\u039c\u039f\u03a5", "2021": 
"\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3", "2022": "\u0391\u0394\u0395\u0399\u0395\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u039b.\u03a3", "2023": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u03a0\u0391\u0398\u039f\u039d\u03a4\u03a9\u039d \u03a3\u03a4\u0397\u039d", "2024": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u0395\u03a0\u0399\u0392\u0391\u03a4\u03a9\u039d", "2025": "\u0391\u03a0\u0391\u039b\u039b\u039f\u03a4\u03a1\u0399\u03a9\u03a3\u0397 \u0391\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "2026": "\u03a3\u03a7\u039f\u039b\u0397 \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u03a9\u039d \u03a5\u0393\u0395\u0399\u0391\u03a3", "2027": "\u0395\u039d\u039f\u0399\u039a\u0399\u039f\u03a3\u03a4\u0391\u03a3\u0399\u039f \u0392\u039f\u03a3\u039a\u03a9\u039d", "2028": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u0397\u0398\u039f\u03a0\u039f\u0399\u03a9\u039d - \u03a3\u03a5\u0393\u0393\u03a1\u0391\u03a6\u0395\u03a9\u039d \u03a4\u0395\u03a7\u039d\u0399\u039a\u03a9\u039d \u0398\u0395\u0391\u03a4\u03a1\u039f\u03a5", "2029": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u039f \u0395\u039d\u03a4\u0391\u039b\u039c\u0391 \u03a3\u03a5\u039b\u039b\u0397\u03a8\u0397\u03a3", "2030": "\u0391\u039d\u03a4\u0399\u039a\u0395\u0399\u039c\u0395\u039d\u0391 \u0394\u0395\u0394\u0397\u039b\u03a9\u039c\u0395\u039d\u0397\u03a3 \u0391\u039e\u0399\u0391\u03a3 \u0391\u039d\u03a4\u0399\u039a\u0391\u03a4\u0391\u0392\u039f\u039b\u0395\u03a3", "2031": "\u0393\u0395\u039d\u0399\u039a\u0397 \u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u03a9\u039d", "2032": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a5\u039d\u0397\u03a3", "2033": 
"\u0395\u03a5\u0398\u03a5\u039d\u0397 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u03a9\u039d", "2034": "\u03a4\u039c\u0397\u039c\u0391 \u039a\u03a4\u0397\u039d\u0399\u0391\u03a4\u03a1\u0399\u039a\u0397\u03a3", "2035": "\u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f \u03a3\u03a9\u039c\u0391 \u0395\u039d\u039f\u03a0\u039b\u03a9\u039d \u0394\u03a5\u039d\u0391\u039c\u0395\u03a9\u039d", "2036": "\u0395\u039d\u039f\u03a1\u0399\u0391\u039a\u039f\u0399 \u039d\u0391\u039f\u0399 \u039a\u0391\u0399 \u0395\u03a6\u0397\u039c\u0395\u03a1\u0399\u039f\u0399", "2037": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0395\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "2038": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039a\u0391\u0399 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397\u03a3 \u03a1\u0391\u03a1\u0399\u039f\u03a6\u03a9\u039d\u0399\u0391\u03a3-\u03a4\u0397\u039b\u0395\u039f\u03a1\u0391\u03a3\u0395\u03a9\u03a3-\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u039c\u039f\u03a5 (\u03a4.\u0395.\u0391.\u03a0.\u03a0. \u0395.\u03a1.\u03a4. \u03a4.)", "2039": "\u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u0397 \u0392\u039f\u0397\u0398\u0395\u0399\u0391 \u0397.\u03a0.\u0391", "2040": "\u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5", "2041": "\u03a7\u03a1\u0397\u039c\u0391\u03a4\u0399\u039a\u0397 \u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u03a0. 
\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "2042": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u039f \u0393\u03a1\u0391\u03a6\u0395\u0399\u039f \u03a0\u03a1\u03a9\u0398\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u039f\u03a5", "2043": "\u039b\u039f\u03a5\u03a4\u03a1\u039f\u0398\u0395\u03a1\u0391\u03a0\u0395\u0399\u0391 \u039a\u0391\u0399 \u0391\u0395\u03a1\u039f\u0398\u0395\u03a1\u0391\u03a0\u0395\u0399\u0391", "2044": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u039d", "2045": "\u0395\u039d\u03a4\u039f\u039a\u0391 \u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u0399\u0391", "2046": "\u03a3\u03a9\u03a6\u03a1\u039f\u039d\u0399\u03a3\u03a4\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "2047": "\u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u0399\u03a3", "2048": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397\u03a3 \u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391\u03a3 - \u039d\u0395\u039f\u03a3", "2049": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u039a\u039f\u03a5\u03a1\u0395\u0399\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u039c\u039c\u03a9\u03a4\u0397\u03a1\u0399\u03a9\u039d", "2050": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u03a9\u039d- \u039f.\u03a3.\u0395.- \u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d", "2051": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u039d\u039f\u039c\u039f\u0399 \u0393\u0399\u0391 \u03a4\u039f\u039d 
\u03a4\u03a5\u03a0\u039f", "2052": "\u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0391 \u0394\u0395\u039b\u03a4\u0391\u03a1\u0399\u0391", "2053": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0397\u039b\u0395\u039a\u03a4\u03a1. \u0395\u03a4. \u0391\u0398\u0397\u039d\u03a9\u039d - \u03a0\u0395\u0399\u03a1\u0391\u0399\u03a9\u03a3 \u039a\u0391\u0399 \u0395\u039b\u039b\u0397\u039d. \u0397\u039b\u0395\u039a\u03a4\u03a1. \u0395\u03a4\u0391\u0399\u03a1\u0399\u0391\u03a3 (\u03a4.\u0391.\u03a0 \u0397.\u0395.\u0391.\u03a0.- \u0395.\u0397.\u0395.)", "2054": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397\u03a3 \u0391\u03a1\u03a4\u039f\u03a0\u039f\u0399\u03a9\u039d", "2055": "\u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u039f\u0399 \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0399\u039a\u039f\u0399 \u0391\u03a1\u03a7\u039f\u039d\u03a4\u0395\u03a3", "2056": "\u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0391 \u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0395\u0399\u039f\u03a5", "2057": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u0391\u03a1\u039f\u03a7\u03a9\u039d \u03a4\u0391\u039c\u0395\u0399\u039f\u03a5 \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u03a9\u039d \u039a\u0391\u0399 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u03a9\u039d \u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0395\u03a9\u039d (\u03a4.\u0395.\u0391.\u0391.\u03a0.\u0391.\u0395.)", "2058": "\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f", "2059": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0391 
\u0395\u03a0\u0399\u03a7\u0395\u0399\u03a1\u0397\u03a3\u0397 \u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u03a3\u039c\u039f\u03a5", "2060": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399 \u0395\u03a1\u0393\u03a9\u039d \u03a9\u03a0\u039b\u0399\u03a3\u039c\u0395\u039d\u039f\u03a5 \u03a3\u039a\u03a5\u03a1\u039f\u0394\u0395\u039c\u0391\u03a4\u039f\u03a3", "2061": "\u0391\u039b\u0395\u03a5\u03a1\u0391-\u0391\u03a1\u03a4\u039f\u03a3", "2062": "\u03a4\u0395\u039b\u0397 \u03a0\u03a1\u039f\u03a3\u039f\u03a1\u039c\u0399\u03a3\u0395\u03a9\u03a3, \u03a0\u0391\u03a1\u0391\u0392\u039f\u039b\u0397\u03a3 \u039a\u0391\u0399 \u03a0\u0391\u03a1\u039f\u03a0\u039b\u0399\u03a3\u039c\u039f\u03a5", "2063": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0391 \u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0397\u03a1\u0399\u0391 \u03a6\u03a1\u039f\u039d\u03a4\u0399\u03a3\u03a4\u0397\u03a1\u0399\u0391", "2064": "\u0391\u03a1\u03a7\u0391\u0399\u039f\u039b\u039f\u0393\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "2065": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a4\u03a5\u03a0\u039f\u0393\u03a1\u0391\u03a6\u03a9\u039d \u039a\u0391\u0399 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d \u0393\u03a1\u0391\u03a6\u0399\u039a\u03a9\u039d \u03a4\u0395\u03a7\u039d\u03a9\u039d (\u03a4.\u0391.\u03a4. 
& \u039c.\u0393.\u03a4)", "2066": "\u0395\u0399\u0394\u0399\u039a\u0395\u03a3 \u0395\u03a6\u0391\u03a1\u039c\u039f\u0393\u0395\u03a3 \u039a\u03a5\u03a1\u0399\u0391\u039a\u0397\u03a3 \u0391\u03a1\u0393\u0399\u0391\u03a3", "2067": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u039d\u039f\u039c\u039f\u0399 \u0393\u0399\u0391 \u03a4\u0391 \u03a0\u039b\u0397\u03a1\u03a9\u039c\u0391\u03a4\u0391", "2068": "\u0391\u03a3\u03a4\u0399\u039a\u0391 \u03a3\u03a7\u039f\u039b\u0395\u0399\u0391", "2069": "\u03a4\u0391\u039c\u0395\u0399\u0391 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u0395\u03a6\u0397\u039c\u0395\u03a1\u0399\u0394\u039f\u03a0\u03a9\u039b\u03a9\u039d \u039a\u0391\u0399 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d \u03a0\u03a1\u0391\u039a\u03a4\u039f\u03a1\u0395\u0399\u03a9\u039d \u0391\u0398\u0397\u039d\u03a9\u039d-\u0398\u0395\u03a3\u039d\u0399\u039a\u0397\u03a3 (\u03a4.\u03a3.\u0395.\u03a5.\u03a0.)", "2070": "\u0394\u039f\u039c\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391", "2071": "\u039d\u0391\u03a5\u03a3\u03a4\u0391\u0398\u039c\u039f\u03a3", "2072": "\u0391\u039d\u03a4\u0399\u0393\u03a1\u0391\u03a6\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u0399\u03a9\u039c\u0391\u03a4\u0391", "2073": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391 \u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u0391\u039a\u03a9\u039d \u0392\u0391\u03a1\u03a9\u039d", "2074": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397-\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0397 \u03a6\u0391\u03a1\u039c\u0391\u039a\u039f\u03a0\u039f\u0399\u0399\u0391", "2075": "\u0394\u0395\u039b\u03a4\u0399\u0391 \u03a4\u0391\u03a5\u03a4\u039f\u03a4\u0397\u03a4\u039f\u03a3", "2076": "\u03a3\u03a7\u039f\u039b\u0399\u0391\u03a4\u03a1\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "2077": "\u03a5\u0394\u03a1\u039f\u0393\u039f\u039d\u0391\u039d\u0398\u03a1\u0391\u039a\u0395\u03a3", "2078": "\u0393\u0395\u039d\u0399\u039a\u0391 \u03a0\u0395\u03a1\u0399 
\u0395\u039a\u0398\u0395\u03a3\u0395\u03a9\u039d", "2079": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u0394\u0399\u0395\u03a5\u039a\u039f\u039b\u03a5\u039d\u03a3\u0395\u0399\u03a3", "2080": "\u039b\u03a3\u039c\u039f\u03a3 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0399.\u039a.\u0391", "2081": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039a\u03a4\u0399\u03a1\u0399\u0391\u039a\u03a9\u039d \u0395\u03a1\u0393\u03a9\u039d", "2082": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397\u03a3", "2083": "\u0395\u039b\u0391\u0399\u039f\u03a0\u03a5\u03a1\u0397\u039d\u0395\u03a3", "2084": "\u0395\u039c\u03a6\u03a5\u03a4\u0395\u03a5\u03a4\u0399\u039a\u0391 \u039a\u03a4\u0397\u039c\u0391\u03a4\u0391", "2085": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "2086": "\u039a\u039b\u0391\u0394\u039f\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a4\u0395\u03a7\u039d\u0399\u039a\u03a9\u039d \u03a4\u03a5\u03a0\u039f\u03a5 \u0398\u0395\u03a3\u03a3\u0391\u039b\u039f\u039d\u0399\u039a\u0397\u03a3 (\u039a.\u0391.\u03a4.\u03a4.\u0398.)", "2087": "\u039c\u0395\u03a4\u0395\u03a9\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391", "2088": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "2089": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u039f \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u039f", "2090": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039d\u039f\u039c\u0399\u039c\u039f\u03a6\u03a1\u039f\u03a3\u03a5\u039d\u0397\u03a3", "2091": "\u0391\u03a1\u03a7\u0391\u0399\u039f\u039b\u039f\u0393\u0399\u039a\u0397 \u0395\u03a4\u0391\u0399\u03a1\u0399\u0391", "2092": 
"\u03a3\u03a7\u039f\u039b\u0391\u0396\u039f\u03a5\u03a3\u0395\u03a3 \u039a\u039b\u0397\u03a1\u039f\u039d\u039f\u039c\u0399\u0395\u03a3", "2093": "\u0393\u0395\u03a6\u03a5\u03a1\u0391 \u03a1\u0399\u039f\u03a5 - \u0391\u039d\u03a4\u0399\u03a1\u03a1\u0399\u039f\u03a5", "2094": "\u03a6\u039f\u0399\u03a4\u0397\u03a3\u0397, \u0395\u039e\u0395\u03a4\u0391\u03a3\u0395\u0399\u03a3 \u039a\u039b\u03a0", "2095": "\u03a4\u03a5\u03a7\u0395\u03a1\u0391, \u039c\u0399\u039a\u03a4\u0391 \u039a\u0391\u0399 \u03a4\u0395\u03a7\u039d\u0399\u039a\u0391 \u03a0\u0391\u0399\u0393\u039d\u0399\u0391", "2096": "\u039f\u03a1\u0393\u0391\u039d\u0399\u039a\u039f\u0399 \u0391\u03a1\u0399\u0398\u039c\u039f\u0399 \u03a5\u03a0\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u03a9\u039d", "2097": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u039a\u0399\u039d\u0397\u03a4\u0397\u03a3 \u039a\u0391\u0399 \u0391\u039a\u0399\u039d\u0397\u03a4\u0397\u03a3 \u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391\u03a3", "2098": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u0391\u0393\u0399\u039f\u03a5 \u039f\u03a1\u039f\u03a5\u03a3", "2099": "\u039c\u039f\u039d\u039f\u03a0\u03a9\u039b\u0399\u039f \u0391\u039b\u0391\u03a4\u0399\u039f\u03a5", "2100": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u0395\u039b\u039b\u0397\u039d\u03a9\u039d \u0395\u039e\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u03a5", "2101": "\u0394\u0399\u0395\u0398\u039d\u0395\u03a3 \u039a\u0395\u039d\u03a4\u03a1\u039f \u0391\u039d\u03a9\u03a4\u0391\u03a4\u03a9\u039d", "2102": "\u0391\u039d\u0391\u03a0\u03a1\u039f\u03a3\u0391\u03a1\u039c\u039f\u0393\u0395\u03a3 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d", "2103": "\u0393\u0395\u039d\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u0398\u0395\u03a9\u03a1\u0397\u03a3\u0395\u0399\u03a3-\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0395\u0399\u03a3", "2104": "\u03a3\u03a9\u039c\u0391 \u039f\u03a1\u039a\u03a9\u03a4\u03a9\u039d 
\u039b\u039f\u0393\u0399\u03a3\u03a4\u03a9\u039d", "2105": "\u03a3\u0395\u0399\u03a3\u039c\u039f\u03a0\u039b\u0397\u039a\u03a4\u039f\u0399 \u0392\u039f\u03a1\u0395\u0399\u039f\u03a5 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "2106": "\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u0391 \u03a0\u0395\u0399\u03a1\u0391\u0399\u03a9\u03a3-\u039c\u0391\u039a\u0395\u0394\u039f\u039d\u0399\u0391\u03a3", "2107": "\u03a7\u03a9\u03a1\u039f\u03a4\u0391\u039e\u0399\u0391 \u039a\u0391\u0399 \u03a0\u0395\u03a1\u0399\u0392\u0391\u039b\u039b\u039f\u039d", "2108": "\u0395\u03a3\u03a9\u03a4\u0395\u03a1\u0399\u039a\u039f\u0399 \u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u0399 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "2109": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039a\u03a9\u039d \u0391\u03a4\u03a5\u03a7\u0397\u039c\u0391\u03a4\u03a9\u039d", "2110": "\u03a0\u039d\u0395\u03a5\u039c\u0391\u03a4\u0399\u039a\u0391 \u039a\u0395\u039d\u03a4\u03a1\u0391", "2111": "\u03a0\u039b\u039f\u0397\u0393\u0399\u039a\u0391 \u0394\u0399\u039a\u0391\u0399\u03a9\u039c\u0391\u03a4\u0391", "2112": "\u03a3\u03a4\u03a1\u0391\u03a4\u0395\u03a5\u039f\u039c\u0395\u039d\u039f\u0399 \u0394\u0399\u039a\u0397\u0393\u039f\u03a1\u039f\u0399", "2113": "\u03a3\u03a5\u03a3\u03a4\u0391\u03a4\u0399\u039a\u0391 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d", "2114": "\u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u039f\u0399 \u03a0\u0395\u039b\u039f\u03a0\u039f\u039d\u039d\u0397\u03a3\u039f\u03a5", "2115": "\u03a4\u039c\u0397\u039c\u0391 \u039c\u0395\u0398\u039f\u0394\u039f\u039b\u039f\u0393\u0399\u0391\u03a3, \u0399\u03a3\u03a4\u039f\u03a1\u0399\u0391\u03a3 \u039a\u0391\u0399 \u0398\u0395\u03a9\u03a1\u0399\u0391\u03a3 \u03a4\u0397\u03a3 \u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0397\u03a3", "2116": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u039f 
\u03a0\u039f\u039b\u0399\u03a4\u0399\u03a3\u03a4\u0399\u039a\u039f \u039a\u0395\u039d\u03a4\u03a1\u039f \u0394\u0395\u039b\u03a6\u03a9\u039d", "2117": "\u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399 \u0395\u0393\u0393\u0395\u0399\u03a9\u039d \u0392\u0395\u039b\u03a4\u0399\u03a9\u03a3\u0395\u03a9\u039d", "2118": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u03a9\u039d (\u03a4.\u0395.\u0391.\u0394.\u03a5.)", "2119": "\u0399\u0395\u03a1\u039f\u039a\u0397\u03a1\u03a5\u039a\u0395\u03a3", "2120": "\u0395\u0399\u03a1\u0397\u039d\u039f\u0394\u0399\u039a\u0395\u0399\u0391 - \u03a0\u03a4\u0391\u0399\u03a3\u039c\u0391\u03a4\u039f\u0394\u0399\u039a\u0395\u0399\u0391", "2121": "\u0391\u0393\u039f\u03a1\u0391\u039d\u039f\u039c\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "2122": "\u03a4\u03a1\u0391\u03a0\u0395\u0396\u0399\u03a4\u0399\u039a\u0397 \u0395\u03a0\u0399\u03a4\u0391\u0393\u0397", "2123": "\u039d\u0391\u03a5\u0391\u0393\u039f\u03a3\u03a9\u03a3\u03a4\u0399\u039a\u0391 \u039a\u0391\u0399 \u03a1\u03a5\u039c\u039f\u03a5\u039b\u039a\u0391", "2124": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0395\u03a3 \u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3\u0399", "2125": "\u039c\u0395\u03a4\u03a1\u0391 \u039a\u0391\u0399 \u03a3\u03a4\u0391\u0398\u039c\u0391", "2126": "\u0393\u0395\u039d\u0399\u039a\u039f \u03a7\u0397\u039c\u0395\u0399\u039f \u03a4\u039f\u03a5 \u039a\u03a1\u0391\u03a4\u039f\u03a5\u03a3", "2127": "\u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u0391 \u0393\u0399\u0391 \u0399\u03a3\u0391 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0391 \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0391", "2128": "\u03a3\u03a5\u039d\u039f\u03a1\u0399\u0391\u039a\u039f\u0399 
\u03a3\u03a4\u0391\u0398\u039c\u039f\u0399", "2129": "\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u03a3\u03a9\u039c\u0391\u03a4\u03a9\u039d \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "2130": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391\u039a\u0391 \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u0391", "2131": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0399\u039a\u039f\u03a3 \u039d\u039f\u039c\u039f\u03a3", "2132": "\u039a\u03a4\u0397\u039c\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u039f", "2133": "\u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u0391 \u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0395\u03a9\u03a3 \u03a5\u03a0\u0395\u0393\u0393\u03a5\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u039f\u0394\u03a9\u039d", "2134": "\u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f \u039c\u0391\u039a\u0395\u0394\u039f\u039d\u0399\u0391\u03a3 \u2013 \u0398\u03a1\u0391\u039a\u0397\u03a3", "2135": "\u03a4\u039f\u03a5\u03a1\u0399\u03a3\u03a4\u0399\u039a\u0391 \u0393\u03a1\u0391\u03a6\u0395\u0399\u0391 \u039a\u0391\u0399 \u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u0391", "2136": "\u0394\u0391\u039d\u0395\u0399\u0391 \u0391\u039d\u0391\u03a3\u03a5\u0393\u039a\u03a1\u039f\u03a4\u0397\u03a3\u0397\u03a3", "2137": "\u0391\u03a3\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u0393\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0395\u03a3 \u0398\u0395\u03a3\u03a3\u0391\u039b\u039f\u039d\u0399\u039a\u0397\u03a3-\u039f.\u0391.\u03a3.\u0398", "2138": "\u0395\u0398\u0395\u039b\u039f\u039d\u03a4\u0395\u03a3 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "2139": "\u03a3\u0397\u039c\u0395\u0399\u03a9\u03a4\u0395\u03a3", "2140": "\u03a4\u0395\u039b\u0397 \u0395\u0393\u039a\u0391\u03a4\u0391\u03a3\u03a4\u0391\u03a3\u0397\u03a3 - \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u0399\u0391\u03a3 \u039a\u0395\u03a1\u0391\u0399\u03a9\u039d", "2141": "\u0397.\u03a0.\u0391", "2142": 
"\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u0391 \u0391\u0399\u0393\u0391\u0399\u039f\u03a5, \u0399\u039f\u039d\u0399\u039f\u03a5 \u039a\u0391\u0399 \u0398\u0395\u03a3\u03a3\u0391\u039b\u0399\u0391\u03a3", "2143": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391\u03a3 \u039e\u0395\u039d\u039f\u0394\u039f\u03a7\u03a9\u039d", "2144": "\u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u0391 \u03a3\u03a4\u0395\u0393\u0391\u03a3\u0395\u03a9\u03a3", "2145": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0397 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u0391\u0395\u03a1\u039f\u03a0\u039b\u0391\u039d\u03a9\u039d", "2146": "\u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0398\u0395\u0391\u039c\u0391\u03a4\u03a9\u039d", "2147": "\u03a3\u03a4\u03a1\u0391\u03a4\u039f\u039b\u039f\u0393\u0399\u0391 \u039f\u03a0\u039b\u0399\u03a4\u03a9\u039d \u03a7\u03a9\u03a1\u039f\u03a6\u03a5\u039b\u0391\u039a\u0397\u03a3", "2148": "\u0393\u03a5\u039c\u039d\u0391\u03a3\u0399\u0391 \u0391\u03a1\u0399\u03a3\u03a4\u039f\u03a5\u03a7\u03a9\u039d", "2149": "\u03a3\u03a7\u039f\u039b\u0399\u039a\u0397 \u0391\u039d\u03a4\u0399\u039b\u0397\u03a8\u0397", "2150": "\u0395\u03a5\u0398\u03a5\u039d\u0397 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d", "2151": "\u03a3\u03a4\u0391\u0398\u039c\u039f\u0399 \u0395\u03a0\u0399\u0392\u0397\u03a4\u039f\u03a1\u03a9\u039d", "2152": "\u0392\u0395\u0392\u0391\u0399\u03a9\u03a3\u0397 \u03a0\u03a4\u0391\u0399\u03a3\u039c\u0391\u03a4\u03a9\u039d \u0391\u03a0\u039f", "2153": "\u0394\u0399\u0391\u0396\u03a5\u0393\u0399\u039f", "2154": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0397 \u03a0\u0395\u03a1\u0399 \u0391\u039d\u0391\u0393\u039a\u0391\u03a3\u03a4\u0399\u039a\u0397\u03a3 
\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "2155": "\u0394\u0399\u0395\u03a5\u039a\u039f\u039b\u03a5\u039d\u03a3\u0397 \u0394\u0399\u0395\u0398\u039d\u039f\u03a5\u03a3 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u039a\u0397\u03a3 \u039a\u0399\u039d\u0397\u03a3\u0395\u03a9\u03a3", "2156": "\u0395\u039d\u039f\u0399\u039a\u0399\u039f\u03a3\u03a4\u0391\u03a3\u0399\u039f", "2157": "\u0395\u039a\u0398\u0395\u03a3\u0395\u0399\u03a3 \u0396\u0391\u03a0\u03a0\u0395\u0399\u039f\u03a5 \u039c\u0395\u0393\u0391\u03a1\u039f\u03a5", "2158": "\u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397 \u03a5\u039b\u0399\u039a\u039f\u03a5 \u03a0. \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "2159": "\u0395\u03a6\u0395\u0394\u03a1\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391 \u039a\u03a1\u0397\u03a4\u0397\u03a3", "2160": "\u03a3\u0399\u03a4\u0391\u03a1\u0399", "2161": "\u03a6\u039f\u03a1\u03a4\u0397\u0393\u0391 501-4500 \u03a4\u039f\u039d\u039d\u03a9\u039d", "2162": "\u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "2163": "\u0391\u03a4\u0395\u039b\u0395\u0399\u0395\u03a3 \u03a5\u03a0\u0395\u03a1 \u03a4\u0397\u03a3 \u0393\u0395\u03a9\u03a1\u0393\u0399\u0391\u03a3", "2164": "\u0391\u0399\u0393\u0399\u0391\u039b\u039f\u03a3 \u039a\u0391\u0399 \u03a0\u0391\u03a1\u0391\u039b\u0399\u0391", "2165": "\u0394\u0391\u03a3\u0397 \u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u03a9\u039d", "2166": "\u0399\u03a7\u0398\u03a5\u039f\u03a4\u03a1\u039f\u03a6\u0395\u0399\u0391", "2167": "\u0391\u03a0\u039f\u0393\u03a1\u0391\u03a6\u0395\u03a3 \u03a0. 
\u039d\u0391\u03a5\u03a4\u0399\u039a\u039f\u03a5", "2168": "\u03a3\u0397\u039c\u0391\u03a4\u0391 \u039a\u0391\u0399 \u0394\u0395\u039b\u03a4\u0399\u0391 \u0391\u039d\u0391\u03a0\u0397\u03a1\u03a9\u039d \u03a0\u039f\u039b\u0395\u039c\u039f\u03a5", "2169": "\u03a0\u0395\u0399\u0398\u0391\u03a1\u03a7\u0399\u039a\u039f \u0394\u0399\u039a\u0391\u0399\u039f \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u039a\u039f\u03a5 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u0391\u03a3", "2170": "\u0391\u03a4\u039c\u039f\u039b\u0395\u0392\u0397\u03a4\u0395\u03a3", "2171": "\u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a5", "2172": "\u03a0\u03a1\u039f\u03a3\u03a4\u0391\u03a3\u0399\u0391 \u03a0\u0399\u039d\u0391\u039a\u0399\u0394\u03a9\u039d", "2173": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0391 \u039a\u03a4\u0397\u039d\u0399\u0391\u03a4\u03a1\u0395\u0399\u0391", "2174": "\u03a7\u03a1\u0397\u039c\u0391\u03a4\u0399\u03a3\u03a4\u0397\u03a1\u0399\u0391\u039a\u0391 \u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u0391", "2175": "\u0395\u0393\u0393\u03a1\u0391\u03a6\u0397 \u03a0\u03a1\u039f\u0395\u03a1\u03a7\u039f\u039c\u0395\u039d\u03a9\u039d \u0391\u03a0\u039f \u03a4\u0397\u039d \u0391\u039b\u039b\u039f\u0394\u0391\u03a0\u0397", "2176": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0394\u0399\u0391\u03a7\u0395\u0399\u03a1\u0399\u03a3\u0397\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u03a5\u039b\u0399\u039a\u039f\u03a5", "2177": "\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f \u039a\u03a5\u03a0\u03a1\u039f\u03a5", "2178": "\u039a\u0391\u03a4\u0395\u03a1\u0393\u0391\u03a3\u0399\u0391 \u039e\u0397\u03a1\u0391\u03a3 \u03a3\u03a4\u0391\u03a6\u0399\u0394\u0391\u03a3", "2179": 
"\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0397 \u0394\u0399\u0391\u0399\u03a1\u0395\u03a3\u0397", "2180": "\u0391\u0396\u0397\u03a4\u0397\u03a4\u0391", "2181": "\u039c\u0395\u039b\u0399\u03a3\u03a3\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391", "2182": "\u0394\u0399\u0395\u03a5\u0398\u03a5\u039d\u03a3\u0397 \u0398\u0391\u039b\u0391\u03a3\u03a3\u0399\u03a9\u039d \u039a\u03a1\u0391\u03a4\u0399\u039a\u03a9\u039d \u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u03a9\u039d", "2183": "\u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u0399\u03a9\u039d \u039c\u0395 \u0395\u0393\u0393\u03a5\u0397\u03a3\u0397", "2184": "\u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u0395\u03a3 \u03a3\u03a7\u039f\u039b\u0395\u03a3", "2185": "\u0394\u0399\u0391\u0398\u0395\u03a3\u0397 \u0391\u03a7\u03a1\u0397\u03a3\u03a4\u039f\u03a5 \u03a5\u039b\u0399\u039a\u039f\u03a5", "2186": "\u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0395\u03a3 \u039c\u0395\u03a4\u0391\u03a6\u039f\u03a1\u0395\u03a3", "2187": "\u0395\u03a1\u03a5\u0398\u03a1\u039f \u03a0\u0399\u03a0\u0395\u03a1\u0399", "2188": "\u03a0\u0399\u039a\u03a0\u0391-\u0395\u039f\u03a0-\u039a\u0395\u039d\u03a4\u03a1\u039f \u0392\u03a1\u0395\u03a6\u03a9\u039d \u0397 \u039c\u0397\u03a4\u0395\u03a1\u0391-\u0395\u039b\u0395\u03a0\u0391\u03a0", "2189": "\u03a3\u03a5\u039c\u039c\u0395\u03a4\u039f\u03a7\u0397 \u03a3\u0395 \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u0391", "2190": "\u0393\u03a5\u039c\u039d\u0391\u03a3\u03a4\u0397\u03a1\u0399\u039f", "2191": "\u0399\u0391\u03a4\u03a1\u0399\u039a\u039f\u0399- \u039f\u0394\u039f\u039d\u03a4\u0399\u0391\u03a4\u03a1\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039b\u039b\u039f\u0393\u039f\u0399", "2192": "\u0395\u0399\u03a3\u0391\u0393\u03a9\u0393\u0397 \u03a6\u039f\u0399\u03a4\u0397\u03a4\u03a9\u039d", "2193": 
"\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u039f \u0384\u0399\u0394\u03a1\u03a5\u039c\u0391 \u03a0\u039f\u039b\u0399\u03a4\u0399\u03a3\u039c\u039f\u03a5", "2194": "\u039b\u039f\u0399\u039c\u039f\u039a\u0391\u0398\u0391\u03a1\u03a4\u0397\u03a1\u0399\u0391 \u0396\u03a9\u03a9\u039d", "2195": "\u0394\u0399\u0395\u0398\u039d\u0397\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0391\u03a4\u039f\u039c\u0399\u039a\u0397\u03a3 \u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391\u03a3", "2196": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0395\u039e\u039f\u0394\u039f\u03a5 \u039a\u0391\u0399 \u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u03a9\u039d \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391\u03a3 \u039a\u0391\u03a0\u039d\u039f\u03a5", "2197": "\u039a\u0391\u0398\u0397\u0393\u0397\u03a4\u0395\u03a3 \u0395.\u039c.\u03a0", "2198": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397", "2199": "\u0392\u0395\u0392\u0391\u0399\u03a9\u03a3\u0397 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391\u03a3 \u039a\u0391\u0398\u0391\u03a1\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u039f\u0394\u039f\u03a5", "2200": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u03a9\u039d \u0395\u039b\u039b\u0391\u0394\u039f\u03a3 \u039a\u0391\u0399 \u039a\u03a4\u0397\u039c\u0391\u03a4\u0399\u039a\u0397\u03a3", "2201": "\u0394\u0397\u039c\u039f\u03a8\u0397\u03a6\u0399\u03a3\u039c\u0391\u03a4\u0391", "2202": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u039f \u0391\u039d\u039f\u0399\u039a\u03a4\u039f \u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u039f", "2203": "\u039a\u0391\u039b\u039b\u0399\u03a4\u0395\u03a7\u039d\u0399\u039a\u039f 
\u0395\u03a0\u0391\u0393\u0393\u0395\u039b\u039c\u0391\u03a4\u0399\u039a\u039f \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u039f", "2204": "\u0391\u039d\u039f\u0399\u039a\u039f\u0394\u039f\u039c\u0397\u03a3\u0399\u03a3", "2205": "\u0394\u0391\u03a3\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "2206": "\u039a\u0391\u039d\u039f\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a0\u03a5\u03a1\u039f\u03a3\u0392\u0395\u03a3\u03a4\u0399\u039a\u03a9\u039d \u039c\u0395\u03a3\u03a9\u039d \u03a4\u03a9\u039d \u03a0\u039b\u039f\u0399\u03a9\u039d", "2207": "\u0394\u0399\u03a6\u0398\u0395\u03a1\u0399\u03a4\u0399\u0394\u0391", "2208": "\u0392\u0399\u0392\u039b\u0399\u0391 \u039a\u0391\u0399 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u039a\u0391 \u03a3\u03a4\u039f\u0399\u03a7\u0395\u0399\u0391", "2209": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0395\u039e\u0391\u0393\u039f\u039c\u0395\u039d\u03a9\u039d \u0395\u039b\u0391\u0399\u03a9\u039d", "2210": "\u0395\u03a0\u0399\u0394\u039f\u039c\u0391\u03a4\u0391 \u039f\u0399\u039a\u039f\u0393\u0395\u039d\u0395\u0399\u03a9\u039d \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d", "2211": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0399\u0395\u03a3 \u03a0\u039f\u03a5 \u0391\u03a6\u039f\u03a1\u039f\u03a5\u039d \u03a4\u0397\u039d \u03a4\u0397\u039b\u0395\u039f\u03a1\u0391\u03a3\u0397", "2212": "\u0395\u039a\u03a4\u0391\u039a\u03a4\u0391 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u0394\u0399\u039a\u0395\u0399\u0391", "2213": "\u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0397 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0391", "2214": "\u0391\u03a3\u0395\u039c\u039d\u039f\u0399 \u0393\u03a5\u039d\u0391\u0399\u039a\u0395\u03a3", "2215": "\u0391\u03a0\u0395\u039b\u0395\u03a5\u0398\u0395\u03a1\u03a9\u03a3\u0397 \u0391\u0393\u039f\u03a1\u0391\u03a3 \u0397\u039b\u0395\u039a\u03a4\u03a1\u0399\u039a\u0397\u03a3 
\u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391\u03a3 \u0395\u039d\u0395\u03a1\u0393\u0395\u0399\u0391\u039a\u0397 \u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397 \u03a1.\u0391.\u0395", "2216": "\u03a0\u03a1\u039f\u0395\u0399\u03a3\u03a0\u03a1\u0391\u039e\u0397 \u0394\u0399\u039a\u0397\u0393\u039f\u03a1\u0399\u039a\u0397\u03a3 \u0391\u039c\u039f\u0399\u0392\u0397\u03a3", "2217": "\u0395\u0398\u039d\u0399\u039a\u0397 \u03a3\u03a7\u039f\u039b\u0397 \u0394\u0397\u039c\u039f\u03a3\u0399\u0391\u03a3 \u03a5\u0393\u0395\u0399\u0391\u03a3 (\u0395.\u03a3.\u0394.\u03a5.)", "2218": "\u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u0399\u0391 \u0398\u0395\u0399\u039f\u03a5 \u039a\u0391\u0399 \u0398\u0395\u0399\u0399\u039a\u039f\u03a5 \u03a7\u0391\u039b\u039a\u039f\u03a5", "2219": "\u03a7\u0397\u039c\u0399\u039a\u039f\u0399 - \u03a7\u0397\u039c\u0399\u039a\u0395\u03a3 \u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u0395\u03a3", "2220": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0397 \u039a\u0391\u03a4\u0391 \u03a4\u0397\u03a3 \u0391\u03a3\u0398\u0395\u039d\u0395\u0399\u0391\u03a3", "2221": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u039b\u039b\u0397\u039b\u039f\u0392\u039f\u0397\u0398\u0395\u0399\u0391\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u0398\u039d\u0399\u039a\u039f\u03a5 \u03a4\u03a5\u03a0\u039f\u0393\u03a1\u0391\u03a6\u0395\u0399\u039f\u03a5 (\u03a4.\u0391.\u03a0.\u0395.\u03a4.)", "2222": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u03a9\u039d", "2223": "\u03a0\u0395\u03a1\u0399\u0395\u03a7\u039f\u039c\u0395\u039d\u039f \u0394\u0397\u039b\u03a9\u03a3\u0397\u03a3 \u03a6\u039f\u03a1\u039f\u03a5 \u0395\u0399\u03a3\u039f\u0394\u0397\u039c\u0391\u03a4\u039f\u03a3", "2224": "\u03a0\u03a1\u03a9\u03a4\u0395\u03a3 \u03a5\u039b\u0395\u03a3 \u03a3\u0399\u0394\u0395\u03a1\u0395\u039d\u0399\u03a9\u039d 
\u0392\u0391\u03a1\u0395\u039b\u0399\u03a9\u039d", "2225": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3", "2226": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u039f\u0399 \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u039f\u0399", "2227": "\u03a3\u03a7\u0395\u0394\u0399\u0391 \u03a0\u039f\u039b\u0395\u03a9\u039d \u0399\u039f\u039d\u0399\u03a9\u039d \u039d\u0397\u03a3\u03a9\u039d", "2228": "\u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0397 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u0391 \u0395\u03a5\u03a1\u03a9\u03a0\u0391\u0399\u039a\u0397 \u0395\u039d\u03a9\u03a3\u0397", "2229": "\u03a3\u03a7\u039f\u039b\u0397 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0395\u03a9\u03a3 \u039d\u039f\u03a3\u0397\u039b\u0395\u03a5\u03a4. 
\u0399\u0394\u03a1\u03a5\u039c\u0391\u03a4\u03a9\u039d", "2230": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u039f\u0399 \u039d\u039f\u039c\u039f\u0399 \u0395\u039c\u03a0\u03a1\u0391\u0393\u039c\u0391\u03a4\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "2231": "\u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0395\u0399\u0391 \u039a\u0391\u0399 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0395\u03a3 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0395\u03a3", "2232": "\u0394\u0399\u0391\u0394\u0399\u039a\u0391\u03a3\u0399\u0391 \u0391\u03a4\u0395\u039b\u0395\u0399\u0391\u03a3", "2233": "\u03a0\u0391\u0399\u0394\u0399\u039a\u0395\u03a3 \u0395\u039e\u039f\u03a7\u0395\u03a3", "2234": "\u03a4\u0391\u039c\u0395\u0399\u039f \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u03a9\u039d \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u0395\u0398\u039d\u0399\u039a\u0397\u03a3 \u03a4\u03a1\u0391\u03a0\u0395\u0396\u0391\u03a3 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "2235": "\u039a\u03a1\u0391\u03a4\u0399\u039a\u0397 \u0395\u039a\u039c\u0395\u03a4\u0391\u039b\u039b\u0395\u03a5\u03a3\u0397 \u0394\u0391\u03a3\u03a9\u039d", "2236": "\u0391\u039d\u0395\u039e\u0391\u03a1\u03a4\u0397\u03a3\u0399\u0391 \u03a4\u0397\u03a3 \u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3 \u03a4\u0397\u03a3 \u0395\u039b\u039b\u0391\u0394\u039f\u03a3", "2237": "\u03a4\u0395\u03a7\u039d\u0399\u039a\u0391 \u03a0\u03a4\u03a5\u03a7\u0399\u0391", "2238": "\u0395\u03a0\u0399\u0392\u0391\u03a4\u0399\u039a\u0391 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u0391 (\u0394\u0397\u039c\u039f\u03a3\u0399\u0391\u03a3 \u039a\u0391\u0399 \u0399\u0394\u0399\u03a9\u03a4\u0399\u039a\u0397\u03a3 \u03a7\u03a1\u0397\u03a3\u0397\u03a3)", "2239": "\u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3 \u0392\u039f\u03a5\u039b\u0395\u03a5\u03a4\u03a9\u039d", "2240": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u03a4\u03a9\u039d 
\u0394\u0399\u039a\u0391\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d", "2241": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0399\u039a\u039f\u0399 \u039b\u0395\u0399\u03a4\u039f\u03a5\u03a1\u0393\u039f\u0399 \u0395\u039d \u0393\u0395\u039d\u0395\u0399", "2242": "\u0391\u03a1\u039c\u039f\u0394\u0399\u039f\u03a4\u0397\u03a4\u0391 \u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u03a9\u039d \u0391\u03a1\u03a7\u03a9\u039d", "2243": "\u0395\u0399\u0394\u0399\u039a\u0391 \u0395\u03a6\u0395\u03a4\u0395\u0399\u0391", "2244": "\u0391\u039e\u0399\u03a9\u039c\u0391\u03a4\u0399\u039a\u039f\u0399 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391\u03a3", "2245": "\u03a0\u0391\u039d\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0399\u0391\u039a\u0397 \u0392\u0399\u0392\u039b\u0399\u039f\u0398\u0397\u039a\u0397", "2246": "\u0395\u03a0\u0399\u03a4\u03a1\u039f\u03a0\u0397 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0397\u03a3 \u03a3\u03a7\u0395\u0394\u0399\u039f\u03a5 \u039a\u03a9\u0394\u0399\u039a\u0391 \u0395\u03a1\u0393\u0391\u03a3\u0399\u0391\u03a3", "2247": "\u0395\u039b\u039f\u039d\u039f\u03a3\u0399\u0391", "2248": "\u039d\u0391\u03a5\u039b\u039f\u03a3\u03a5\u039c\u03a6\u03a9\u039d\u0391", "2249": "\u03a3\u0399\u0394\u0397\u03a1\u039f\u0394\u03a1\u039f\u039c\u039f\u0399 \u0398\u0395\u03a3\u03a3\u0391\u039b\u0399\u039a\u039f\u0399", "2250": "\u03a1\u0391\u0394\u0399\u039f\u03a6\u03a9\u039d\u0399\u039a\u0395\u03a3 \u03a3\u03a5\u039c\u0392\u0391\u03a3\u0395\u0399\u03a3", "2251": "\u03a0\u03a1\u039f\u03a9\u0398\u0397\u03a3\u0397 \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0397\u03a3 \u03a0\u0391\u03a1\u0391\u0393\u03a9\u0393\u0397\u03a3-\u0395\u0398.\u0399.\u0391\u0393.\u0395", "2252": "\u0395\u03a0\u039f\u03a7\u0399\u0391\u039a\u03a9\u03a3 \u0395\u03a1\u0393\u0391\u0396\u039f\u039c\u0395\u039d\u039f\u0399 \u039c\u0399\u03a3\u0398\u03a9\u03a4\u039f\u0399", "2253": "\u0394\u0399\u0394\u0391\u039a\u03a4\u0399\u039a\u039f 
\u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f", "2254": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u039a\u0395\u039d\u03a4\u03a1\u0399\u039a\u0397\u03a3, \u03a0\u03a1\u0395\u03a3\u0392\u0395\u03a5\u03a4\u0399\u039a\u0397\u03a3 \u039a\u0391\u0399", "2255": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u039f \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f \u03a5\u03a0\u039f\u03a5\u03a1\u0393\u0395\u0399\u039f\u03a5 \u0395\u0398\u039d\u0399\u039a\u0397\u03a3 \u0391\u039c\u03a5\u039d\u0391\u03a3", "2256": "\u0394\u0399\u03a0\u039b\u03a9\u039c\u0391\u03a4\u0391 \u0395\u03a5\u03a1\u0395\u03a3\u0399\u03a4\u0395\u03a7\u039d\u0399\u0391\u03a3", "2257": "\u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u0391 \u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u03a9\u039d \u0395\u03a1\u0393\u0391\u03a4\u03a9\u039d", "2258": "\u039a\u03a9\u0394\u0399\u039a\u0391\u03a3 \u03a0\u0395\u03a1\u0399 \u0395\u0399\u03a3\u03a0\u03a1\u0391\u039e\u0395\u03a9\u03a3 \u0394\u0397\u039c\u039f\u03a3\u0399\u03a9\u039d \u0395\u03a3\u039f\u0394\u03a9\u039d", "2259": "\u03a4\u03a1\u0391\u03a0\u0395\u0396\u039f\u0393\u03a1\u0391\u039c\u039c\u0391\u03a4\u0399\u0391", "2260": "\u03a0\u03a1\u039f\u039c\u0397\u0398\u0395\u03a5\u03a4\u0399\u039a\u039f\u03a3 \u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u0395.\u0392.\u0391", "2261": "\u0395\u039b\u0395\u0393\u03a7\u039f\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0395\u0399\u0391\u03a3 \u0391\u03a5\u03a4\u039f\u039a\u0399\u039d\u0397\u03a4\u03a9\u039d\u039a\u0395\u039d\u03a4\u03a1\u0391 \u03a4\u0395\u03a7\u039d\u0399\u039a\u039f\u03a5 \u0395\u039b\u0395\u0393\u03a7\u039f\u03a5 \u039f\u03a7\u0397\u039c\u0391\u03a4\u03a9\u039d (\u039a.\u03a4.\u0395.\u039f.)", "2262": "\u0395\u039e\u0391\u0393\u03a9\u0393\u0397 \u03a4\u03a5\u03a1\u039f\u03a5", "2263": "\u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391\u039a\u039f \u03a3\u03a5\u039d\u0391\u039b\u039b\u0391\u0393\u039c\u0391", "2264": "\u03a4\u0391\u039c\u0395\u0399\u039f 
\u0395\u03a0\u0399\u039a\u039f\u03a5\u03a1\u0399\u039a\u0397\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u0397\u039b\u0395\u03a4\u03a1\u039f\u03a4\u0395\u03a7\u039d\u0399\u03a4\u03a9\u039d \u0395\u039b\u039b\u0391\u0394\u039f\u03a3 (T.E.A.H.E.)", "2265": "\u039c\u0399\u03a3\u0398\u039f\u0399 \u03a3\u03a4\u03a1\u0391\u03a4\u0399\u03a9\u03a4\u0399\u039a\u03a9\u039d \u039a\u0391\u0399 \u03a0\u03a1\u039f\u03a3\u0391\u03a5\u039e\u0397\u03a3\u0395\u0399\u03a3", "2266": "\u0391\u03a3\u03a4\u0399\u039a\u039f\u03a3 \u039a\u03a9\u0394\u0399\u039a\u0391\u03a3", "2267": "\u039c\u0395 \u03a4\u0399\u03a3 \u0397\u039d\u03a9\u039c\u0395\u039d\u0395\u03a3 \u03a0\u039f\u039b\u0399\u03a4\u0395\u0399\u0395\u03a3 \u0391\u039c\u0395\u03a1\u0399\u039a\u0397\u03a3", "2268": "\u03a4\u0391\u039c\u0395\u0399\u039f \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u03a9\u03a3 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0399\u039a\u039f\u03a5 \u039f.\u03a4.\u0395. (\u03a4.\u0391.\u03a0.-\u039f.\u03a4.\u0395.)", "2269": "\u039c\u0391\u0399\u0395\u03a3", "2270": "\u03a6\u03a5\u0393\u039f\u0394\u0399\u039a\u0399\u0391", "2271": "\u039f\u03a1\u0393\u0391\u039d\u0399\u03a3\u039c\u039f\u03a3 \u039e\u0395\u039d\u039f\u0394\u039f\u03a7\u0395\u0399\u0391\u039a\u0397\u03a3 \u03a0\u0399\u03a3\u03a4\u0397\u03a3", "2272": "\u0394\u0397\u039c\u039f\u03a4\u0399\u039a\u039f\u0399 \u03a3\u03a4\u03a1\u0391\u03a4\u039f\u039b\u039f\u0393\u039f\u0399", "2273": "\u0391\u039d\u03a9\u03a4\u0391\u03a4\u039f \u0394\u0399\u039a\u0391\u03a3\u03a4\u0399\u039a\u039f \u03a3\u03a5\u039c\u0392\u039f\u03a5\u039b\u0399\u039f", "2274": "\u0399\u03a3\u03a4\u039f\u03a1\u0399\u039a\u039f \u0391\u03a1\u03a7\u0395\u0399\u039f \u039a\u03a1\u0397\u03a4\u0397\u03a3", "2275": "\u0395\u039b\u039b\u0397\u039d\u0399\u039a\u0397 \u0398\u0391\u039b\u0391\u03a3\u03a3\u0399\u0391 \u0384\u0395\u039d\u03a9\u03a3\u0397", "2276": "\u0395\u039a\u03a0\u039f\u0399\u0397\u03a3\u0395\u0399\u03a3 \u039a\u0391\u0399 
\u0395\u039a\u039c\u0399\u03a3\u0398\u03a9\u03a3\u0395\u0399\u03a3", "2277": "\u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0399\u039a\u0395\u03a3 \u0395\u03a0\u0399\u03a4\u0391\u0393\u0395\u03a3", "2278": "\u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391 \u039c\u0397\u03a4\u03a1\u03a9\u039f\u03a5", "2279": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0391 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0391 \u0398\u0395\u039c\u0391\u03a4\u0391", "2280": "\u0395\u039d\u0394\u0399\u039a\u0391 \u039c\u0395\u03a3\u0391", "2281": "\u03a4\u0395\u039b\u0397 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u039a\u03a9\u039d \u03a4\u0391\u039e\u0399\u0394\u0399\u03a9\u039d", "2282": "\u039c\u0395 \u03a4\u0397\u039d \u0391\u0399\u0393\u03a5\u03a0\u03a4\u039f", "2283": "\u0394\u0399\u0391\u03a6\u039f\u03a1\u0395\u03a3 \u0392\u0399\u0392\u039b\u0399\u039f\u0398\u0397\u039a\u0395\u03a3", "2284": "\u039a\u0395\u039d\u03a4\u03a1\u0399\u039a\u0397 \u03a5\u03a0\u0397\u03a1\u0395\u03a3\u0399\u0391"}}}}], "splits": [{"name": "train", "num_bytes": 216757887, "num_examples": 28536}, {"name": "test", "num_bytes": 71533786, "num_examples": 9516}, {"name": "validation", "num_bytes": 68824457, "num_examples": 9511}], "download_size": 147827496, "dataset_size": 357116130}, {"config_name": "volume", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0397 \u03a0\u03a1\u039f\u039d\u039f\u0399\u0391", "1": "\u0393\u0395\u03a9\u03a1\u0393\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "2": "\u03a1\u0391\u0394\u0399\u039f\u03a6\u03a9\u039d\u0399\u0391 \u039a\u0391\u0399 \u03a4\u03a5\u03a0\u039f\u03a3", "3": "\u0392\u0399\u039f\u039c\u0397\u03a7\u0391\u039d\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "4": "\u03a5\u0393\u0395\u0399\u039f\u039d\u039f\u039c\u0399\u039a\u0397 
\u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "5": "\u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u039f \u039d\u0391\u03a5\u03a4\u0399\u039a\u039f", "6": "\u03a4\u0391\u03a7\u03a5\u0394\u03a1\u039f\u039c\u0395\u0399\u0391 - \u03a4\u0397\u039b\u0395\u03a0\u0399\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0395\u03a3", "7": "\u0394\u0391\u03a3\u0397 \u039a\u0391\u0399 \u039a\u03a4\u0397\u039d\u039f\u03a4\u03a1\u039f\u03a6\u0399\u0391", "8": "\u0395\u039b\u0395\u0393\u039a\u03a4\u0399\u039a\u039f \u03a3\u03a5\u039d\u0395\u0394\u03a1\u0399\u039f \u039a\u0391\u0399 \u03a3\u03a5\u039d\u03a4\u0391\u039e\u0395\u0399\u03a3", "9": "\u03a0\u039f\u039b\u0395\u039c\u0399\u039a\u0397 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391", "10": "\u039d\u039f\u039c\u0399\u039a\u0391 \u03a0\u03a1\u039f\u03a3\u03a9\u03a0\u0391 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u0394\u0399\u039a\u0391\u0399\u039f\u03a5", "11": "\u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391 \u0391\u039d\u03a9\u039d\u03a5\u039c\u03a9\u039d \u0395\u03a4\u0391\u0399\u03a1\u0395\u0399\u03a9\u039d \u03a4\u03a1\u0391\u03a0\u0395\u0396\u03a9\u039d \u039a\u0391\u0399 \u03a7\u03a1\u0397\u039c\u0391\u03a4\u0399\u03a3\u03a4\u0397\u03a1\u0399\u03a9\u039d", "12": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397 \u0391\u0395\u03a1\u039f\u03a0\u039f\u03a1\u0399\u0391", "13": "\u0395\u039c\u039c\u0395\u03a3\u0397 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391", "14": "\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u039a\u0395\u03a3 \u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u0395\u0399\u03a3", "15": "\u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391 \u0394\u0397\u039c\u03a9\u039d \u039a\u0391\u0399 \u039a\u039f\u0399\u039d\u039f\u03a4\u0397\u03a4\u03a9\u039d", "16": "\u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391 \u0395\u03a0\u0399\u039c\u0395\u039b\u0397\u03a4\u0397\u03a1\u0399\u03a9\u039d \u03a3\u03a5\u039d\u0395\u03a4\u0391\u0399\u03a1\u0399\u03a3\u039c\u03a9\u039d 
\u039a\u0391\u0399 \u03a3\u03a9\u039c\u0391\u03a4\u0395\u0399\u03a9\u039d", "17": "\u0394\u0397\u039c\u039f\u03a3\u0399\u0391 \u0395\u03a1\u0393\u0391", "18": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397 \u0394\u0399\u039a\u0391\u0399\u039f\u03a3\u03a5\u039d\u0397\u03a3", "19": "\u0391\u03a3\u03a6\u0391\u039b\u0399\u03a3\u03a4\u0399\u039a\u0391 \u03a4\u0391\u039c\u0395\u0399\u0391", "20": "\u0395\u039a\u039a\u039b\u0397\u03a3\u0399\u0391\u03a3\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "21": "\u0395\u039a\u03a0\u0391\u0399\u0394\u0395\u03a5\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "22": "\u0394\u0397\u039c\u039f\u03a3\u0399\u039f \u039b\u039f\u0393\u0399\u03a3\u03a4\u0399\u039a\u039f", "23": "\u03a4\u0395\u039b\u03a9\u039d\u0395\u0399\u0391\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "24": "\u03a3\u03a5\u0393\u039a\u039f\u0399\u039d\u03a9\u039d\u0399\u0395\u03a3", "25": "\u0395\u0398\u039d\u0399\u039a\u0397 \u0391\u039c\u03a5\u039d\u0391", "26": "\u03a3\u03a4\u03a1\u0391\u03a4\u039f\u03a3 \u039e\u0397\u03a1\u0391\u03a3", "27": "\u0391\u0393\u039f\u03a1\u0391\u039d\u039f\u039c\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "28": "\u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u0399 \u03a5\u03a0\u0391\u039b\u039b\u0397\u039b\u039f\u0399", "29": "\u03a0\u0395\u03a1\u0399\u039f\u03a5\u03a3\u0399\u0391 \u0394\u0397\u039c\u039f\u03a3\u0399\u039f\u03a5 \u039a\u0391\u0399 \u039d\u039f\u039c\u0399\u03a3\u039c\u0391", "30": "\u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u039a\u0397 \u0394\u0399\u039f\u0399\u039a\u0397\u03a3\u0397", "31": "\u039b\u0399\u039c\u0395\u039d\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "32": "\u0391\u03a3\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "33": "\u03a0\u039f\u039b\u0399\u03a4\u0399\u039a\u0397 
\u0394\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391", "34": "\u0394\u0399\u03a0\u039b\u03a9\u039c\u0391\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "35": "\u0394\u0399\u039f\u0399\u039a\u0397\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "36": "\u0391\u039c\u0395\u03a3\u0397 \u03a6\u039f\u03a1\u039f\u039b\u039f\u0393\u0399\u0391", "37": "\u03a4\u03a5\u03a0\u039f\u03a3 \u039a\u0391\u0399 \u03a4\u039f\u03a5\u03a1\u0399\u03a3\u039c\u039f\u03a3", "38": "\u0395\u0398\u039d\u0399\u039a\u0397 \u039f\u0399\u039a\u039f\u039d\u039f\u039c\u0399\u0391", "39": "\u0391\u03a3\u03a4\u03a5\u039d\u039f\u039c\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "40": "\u0391\u0393\u03a1\u039f\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "41": "\u0395\u03a1\u0393\u0391\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "42": "\u03a0\u039f\u0399\u039d\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "43": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391", "44": "\u0395\u03a0\u0399\u03a3\u03a4\u0397\u039c\u0395\u03a3 \u039a\u0391\u0399 \u03a4\u0395\u03a7\u039d\u0395\u03a3", "45": "\u0395\u039c\u03a0\u039f\u03a1\u0399\u039a\u0397 \u039d\u0391\u03a5\u03a4\u0399\u039b\u0399\u0391", "46": "\u03a3\u03a5\u039d\u03a4\u0391\u0393\u039c\u0391\u03a4\u0399\u039a\u0397 \u039d\u039f\u039c\u039f\u0398\u0395\u03a3\u0399\u0391"}}}}], "splits": [{"name": "train", "num_bytes": 216757887, "num_examples": 28536}, {"name": "test", "num_bytes": 71533786, "num_examples": 9516}, {"name": "validation", "num_bytes": 68824457, "num_examples": 9511}], "download_size": 145147904, "dataset_size": 357116130}], "configs": [{"config_name": "chapter", "data_files": [{"split": "train", "path": "chapter/train-*"}, {"split": "test", "path": "chapter/test-*"}, {"split": "validation", 
"path": "chapter/validation-*"}]}, {"config_name": "subject", "data_files": [{"split": "train", "path": "subject/train-*"}, {"split": "test", "path": "subject/test-*"}, {"split": "validation", "path": "subject/validation-*"}]}, {"config_name": "volume", "data_files": [{"split": "train", "path": "volume/train-*"}, {"split": "test", "path": "volume/test-*"}, {"split": "validation", "path": "volume/validation-*"}], "default": true}]}
2024-01-04T12:03:50+00:00
[ "2109.15298" ]
[ "el" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Modern Greek (1453-) #license-cc-by-4.0 #arxiv-2109.15298 #region-us
Dataset Card for Greek Legal Code ================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: URL * Paper: URL * Data: URL * Leaderboard: N/A * Point of Contact: Christos Papaloukas ### Dataset Summary Greek\_Legal\_Code (GLC) is a dataset consisting of approx. 47k legal resources from Greek legislation. The origin of GLC is “Permanent Greek Legislation Code - Raptarchis”, a collection of Greek legislative documents classified into multi-level (from broader to more specialized) categories. Topics GLC consists of 47 legislative volumes and each volume corresponds to a main thematic topic. Each volume is divided into thematic sub categories which are called chapters and subsequently, each chapter breaks down to subjects which contain the legal resources. The total number of chapters is 389 while the total number of subjects is 2285, creating an interlinked thematic hierarchy. So, for the upper thematic level (volume) GLC has 47 classes. For the next thematic level (chapter) GLC offers 389 classes and for the inner and last thematic level (subject), GLC has 2285 classes. 
GLC classes are divided into three categories for each thematic level: frequent classes, which occur in more than 10 training documents and can be found in all three subsets (training, development and test); few-shot classes which appear in 1 to 10 training documents and also appear in the documents of the development and test sets, and zero-shot classes which appear in the development and/or test, but not in the training documents. ### Supported Tasks and Leaderboards The dataset supports: Multi-class Text Classification: Given the text of a document, a model predicts the corresponding class. Few-shot and Zero-shot learning: As already noted, the classes can be divided into three groups: frequent, few-shot, and zero-shot, depending on whether they were assigned to more than 10, fewer than 10 but at least one, or no training documents, respectively. ### Languages All documents are written in Greek. Dataset Structure ----------------- ### Data Instances ### Data Fields The following data fields are provided for documents ('train', 'dev', 'test'): 'text': (str) The full content of each document, which is represented by its 'header' and 'articles' (i.e., the 'main\_body'). 'label': (class label): Depending on the configuration, the volume/chapter/subject of the document.
For volume-level class it belongs to specifically: ["ΚΟΙΝΩΝΙΚΗ ΠΡΟΝΟΙΑ", "ΓΕΩΡΓΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΡΑΔΙΟΦΩΝΙΑ ΚΑΙ ΤΥΠΟΣ", "ΒΙΟΜΗΧΑΝΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΥΓΕΙΟΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΠΟΛΕΜΙΚΟ ΝΑΥΤΙΚΟ", "ΤΑΧΥΔΡΟΜΕΙΑ - ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ", "ΔΑΣΗ ΚΑΙ ΚΤΗΝΟΤΡΟΦΙΑ", "ΕΛΕΓΚΤΙΚΟ ΣΥΝΕΔΡΙΟ ΚΑΙ ΣΥΝΤΑΞΕΙΣ", "ΠΟΛΕΜΙΚΗ ΑΕΡΟΠΟΡΙΑ", "ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ", "ΝΟΜΟΘΕΣΙΑ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ ΤΡΑΠΕΖΩΝ ΚΑΙ ΧΡΗΜΑΤΙΣΤΗΡΙΩΝ", "ΠΟΛΙΤΙΚΗ ΑΕΡΟΠΟΡΙΑ", "ΕΜΜΕΣΗ ΦΟΡΟΛΟΓΙΑ", "ΚΟΙΝΩΝΙΚΕΣ ΑΣΦΑΛΙΣΕΙΣ", "ΝΟΜΟΘΕΣΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ", "ΝΟΜΟΘΕΣΙΑ ΕΠΙΜΕΛΗΤΗΡΙΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΚΑΙ ΣΩΜΑΤΕΙΩΝ", "ΔΗΜΟΣΙΑ ΕΡΓΑ", "ΔΙΟΙΚΗΣΗ ΔΙΚΑΙΟΣΥΝΗΣ", "ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ", "ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΚΠΑΙΔΕΥΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΔΗΜΟΣΙΟ ΛΟΓΙΣΤΙΚΟ", "ΤΕΛΩΝΕΙΑΚΗ ΝΟΜΟΘΕΣΙΑ", "ΣΥΓΚΟΙΝΩΝΙΕΣ", "ΕΘΝΙΚΗ ΑΜΥΝΑ", "ΣΤΡΑΤΟΣ ΞΗΡΑΣ", "ΑΓΟΡΑΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΔΗΜΟΣΙΟΙ ΥΠΑΛΛΗΛΟΙ", "ΠΕΡΙΟΥΣΙΑ ΔΗΜΟΣΙΟΥ ΚΑΙ ΝΟΜΙΣΜΑ", "ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ", "ΛΙΜΕΝΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΠΟΛΙΤΙΚΗ ΔΙΚΟΝΟΜΙΑ", "ΔΙΠΛΩΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΔΙΟΙΚΗΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΑΜΕΣΗ ΦΟΡΟΛΟΓΙΑ", "ΤΥΠΟΣ ΚΑΙ ΤΟΥΡΙΣΜΟΣ", "ΕΘΝΙΚΗ ΟΙΚΟΝΟΜΙΑ", "ΑΣΤΥΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΑΓΡΟΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΡΓΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΜΠΟΡΙΚΗ ΝΟΜΟΘΕΣΙΑ", "ΕΠΙΣΤΗΜΕΣ ΚΑΙ ΤΕΧΝΕΣ", "ΕΜΠΟΡΙΚΗ ΝΑΥΤΙΛΙΑ", "ΣΥΝΤΑΓΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ" ] \ The label can also be the chapter-level or subject-level class the document belongs to. Some chapter labels are omitted due to size (389 classes). Some subject labels are also omitted due to size (2285 classes). ### Data Splits Split: Train, No of Documents: 28,536, Avg. words: 600 Split: Development, No of Documents: 9,511, Avg. words: 574 Split: Test, No of Documents: 9,516, Avg. words: 595 Dataset Creation ---------------- ### Curation Rationale The dataset was curated by Papaloukas et al. (2021) with the hope to support and encourage further research in NLP for the Greek language.
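The frequent / few-shot / zero-shot grouping described in the Dataset Summary can be sketched in a few lines of plain Python. Note that `group_classes`, its `threshold` argument, and the toy labels below are illustrative only, not part of the dataset or its official tooling:

```python
from collections import Counter

def group_classes(train_labels, dev_labels, test_labels, threshold=10):
    """Partition classes the way the GLC summary describes: frequent
    classes occur in more than `threshold` training documents, few-shot
    classes in 1..`threshold`, and zero-shot classes appear only in the
    development and/or test documents."""
    train_counts = Counter(train_labels)
    frequent = {c for c, n in train_counts.items() if n > threshold}
    few_shot = {c for c, n in train_counts.items() if 1 <= n <= threshold}
    zero_shot = (set(dev_labels) | set(test_labels)) - set(train_counts)
    return frequent, few_shot, zero_shot

# Toy labels: "a" is frequent, "b" is few-shot, "c" is zero-shot.
train = ["a"] * 12 + ["b"] * 3
dev = ["a", "b", "c"]
test = ["a", "c"]
frequent, few_shot, zero_shot = group_classes(train, dev, test)
```

The same helper applies unchanged at the volume, chapter, or subject level, since the grouping depends only on per-class training counts.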
### Source Data #### Initial Data Collection and Normalization The ''Permanent Greek Legislation Code - Raptarchis'' is a thorough catalogue of Greek legislation from the creation of the Greek state in 1834 until 2015. It includes Laws, Royal and Presidential Decrees, Regulations and Decisions, retrieved from the Official Government Gazette, where Greek legislation is published. This collection is one of the official, publicly available sources of classified Greek legislation suitable for classification tasks. Currently, the original catalogue is publicly offered in MS Word (.doc) format through the portal e-Themis, the legal database and management service under the administration of the Ministry of the Interior (Affairs). E-Themis is primarily focused on providing legislation on a multitude of predefined thematic categories, as described in the catalogue. The main goal is to help users find legislation of interest using the thematic index. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset does not include personal or sensitive information. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Papaloukas et al. (2021) ### Licensing Information *Christos Papaloukas, Ilias Chalkidis, Konstantinos Athinaios, Despina-Athanasia Pantazi and Manolis Koubarakis.* *Multi-granular Legal Topic Classification on Greek Legislation.* *Proceedings of the 3rd Natural Legal Language Processing (NLLP) Workshop, Punta Cana, Dominican Republic, 2021* ### Contributions Thanks to @christospi for adding this dataset.
[ "### Dataset Summary\n\n\nGreek\\_Legal\\_Code (GLC) is a dataset consisting of approx. 47k legal resources from Greek legislation. The origin of GLC is “Permanent Greek Legislation Code - Raptarchis”, a collection of Greek legislative documents classified into multi-level (from broader to more specialized) categories.\n\n\nTopics\n\n\nGLC consists of 47 legislative volumes and each volume corresponds to a main thematic topic. Each volume is divided into thematic sub categories which are called chapters and subsequently, each chapter breaks down to subjects which contain the legal resources. The total number of chapters is 389 while the total number of subjects is 2285, creating an interlinked thematic hierarchy. So, for the upper thematic level (volume) GLC has 47 classes. For the next thematic level (chapter) GLC offers 389 classes and for the inner and last thematic level (subject), GLC has 2285 classes.\n\n\nGLC classes are divided into three categories for each thematic level: frequent classes, which occur in more than 10 training documents and can be found in all three subsets (training, development and test); few-shot classes which appear in 1 to 10 training documents and also appear in the documents of the development and test sets, and zero-shot classes which appear in the development and/or test, but not in the training documents.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nMulti-class Text Classification: Given the text of a document, a model predicts the corresponding class.\n\n\nFew-shot and Zero-shot learning: As already noted, the classes can be divided into three groups: frequent, few-shot, and zero- shot, depending on whether they were assigned to more than 10, fewer than 10 but at least one, or no training documents, respectively.", "### Languages\n\n\nAll documents are written in Greek.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nThe following data fields are provided for 
documents ('train', 'dev', 'test'):\n\n\n'text': (str) The full content of each document, which is represented by its 'header' and 'articles' (i.e., the 'main\\_body'). \n\n'label': (class label): Depending on the configurarion, the volume/chapter/subject of the document. For volume-level class it belongs to specifically: [\"ΚΟΙΝΩΝΙΚΗ ΠΡΟΝΟΙΑ\",\n\"ΓΕΩΡΓΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΡΑΔΙΟΦΩΝΙΑ ΚΑΙ ΤΥΠΟΣ\",\n\"ΒΙΟΜΗΧΑΝΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΥΓΕΙΟΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΠΟΛΕΜΙΚΟ ΝΑΥΤΙΚΟ\",\n\"ΤΑΧΥΔΡΟΜΕΙΑ - ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ\",\n\"ΔΑΣΗ ΚΑΙ ΚΤΗΝΟΤΡΟΦΙΑ\",\n\"ΕΛΕΓΚΤΙΚΟ ΣΥΝΕΔΡΙΟ ΚΑΙ ΣΥΝΤΑΞΕΙΣ\",\n\"ΠΟΛΕΜΙΚΗ ΑΕΡΟΠΟΡΙΑ\",\n\"ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ\",\n\"ΝΟΜΟΘΕΣΙΑ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ ΤΡΑΠΕΖΩΝ ΚΑΙ ΧΡΗΜΑΤΙΣΤΗΡΙΩΝ\",\n\"ΠΟΛΙΤΙΚΗ ΑΕΡΟΠΟΡΙΑ\",\n\"ΕΜΜΕΣΗ ΦΟΡΟΛΟΓΙΑ\",\n\"ΚΟΙΝΩΝΙΚΕΣ ΑΣΦΑΛΙΣΕΙΣ\",\n\"ΝΟΜΟΘΕΣΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ\",\n\"ΝΟΜΟΘΕΣΙΑ ΕΠΙΜΕΛΗΤΗΡΙΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΚΑΙ ΣΩΜΑΤΕΙΩΝ\",\n\"ΔΗΜΟΣΙΑ ΕΡΓΑ\",\n\"ΔΙΟΙΚΗΣΗ ΔΙΚΑΙΟΣΥΝΗΣ\",\n\"ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ\",\n\"ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΚΠΑΙΔΕΥΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΔΗΜΟΣΙΟ ΛΟΓΙΣΤΙΚΟ\",\n\"ΤΕΛΩΝΕΙΑΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΣΥΓΚΟΙΝΩΝΙΕΣ\",\n\"ΕΘΝΙΚΗ ΑΜΥΝΑ\",\n\"ΣΤΡΑΤΟΣ ΞΗΡΑΣ\",\n\"ΑΓΟΡΑΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΔΗΜΟΣΙΟΙ ΥΠΑΛΛΗΛΟΙ\",\n\"ΠΕΡΙΟΥΣΙΑ ΔΗΜΟΣΙΟΥ ΚΑΙ ΝΟΜΙΣΜΑ\",\n\"ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ\",\n\"ΛΙΜΕΝΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΠΟΛΙΤΙΚΗ ΔΙΚΟΝΟΜΙΑ\",\n\"ΔΙΠΛΩΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΔΙΟΙΚΗΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΑΜΕΣΗ ΦΟΡΟΛΟΓΙΑ\",\n\"ΤΥΠΟΣ ΚΑΙ ΤΟΥΡΙΣΜΟΣ\",\n\"ΕΘΝΙΚΗ ΟΙΚΟΝΟΜΙΑ\",\n\"ΑΣΤΥΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΑΓΡΟΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΡΓΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΜΠΟΡΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΠΙΣΤΗΜΕΣ ΚΑΙ ΤΕΧΝΕΣ\",\n\"ΕΜΠΟΡΙΚΗ ΝΑΥΤΙΛΙΑ\",\n\"ΣΥΝΤΑΓΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\"\n] \\\n\n\nThe labels can also be a the chapter-level or subject-level class it belongs to. Some chapter labels are omitted due to size (389 classes). Some subject labels are also omitted due to size (2285 classes).", "### Data Splits\n\n\nSplit: Train, No of Documents: 28,536, Avg. words: 600\nSplit: Development, No of Documents: 9,511, Avg. 
words: 574\nSplit: Test, No of Documents: 9,516, Avg. words: 595\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Papaloukas et al. (2021) with the hope to support and encourage further research in NLP for the Greek language.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe ''Permanent Greek Legislation Code - Raptarchis'' is a thorough catalogue of Greek legislation since the creation of the Greek state in 1834 until 2015. It includes Laws, Royal and Presidential Decrees, Regulations and Decisions, retrieved from the Official Government Gazette, where Greek legislation is published. This collection is one of the official, publicly available sources of classified Greek legislation suitable for classification tasks.\n\n\nCurrently, the original catalogue is publicly offered in MS Word (.doc) format through the portal e-Themis, the legal database and management service of it, under the administration of the Ministry of the Interior (Affairs). E-Themis is primarily focused on providing legislation on a multitude of predefined thematic categories, as described in the catalogue. The main goal is to help users find legislation of interest using the thematic index.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nPapaloukas et al. 
(2021)", "### Licensing Information\n\n\n*Christos Papaloukas, Ilias Chalkidis, Konstantinos Athinaios, Despina-Athanasia Pantazi and Manolis Koubarakis.*\n*Multi-granular Legal Topic Classification on Greek Legislation.*\n*Proceedings of the 3rd Natural Legal Language Processing (NLLP) Workshop, Punta Cana, Dominican Republic, 2021*", "### Contributions\n\n\nThanks to @christospi for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Modern Greek (1453-) #license-cc-by-4.0 #arxiv-2109.15298 #region-us \n", "### Dataset Summary\n\n\nGreek\\_Legal\\_Code (GLC) is a dataset consisting of approx. 47k legal resources from Greek legislation. The origin of GLC is “Permanent Greek Legislation Code - Raptarchis”, a collection of Greek legislative documents classified into multi-level (from broader to more specialized) categories.\n\n\nTopics\n\n\nGLC consists of 47 legislative volumes and each volume corresponds to a main thematic topic. Each volume is divided into thematic sub categories which are called chapters and subsequently, each chapter breaks down to subjects which contain the legal resources. The total number of chapters is 389 while the total number of subjects is 2285, creating an interlinked thematic hierarchy. So, for the upper thematic level (volume) GLC has 47 classes. 
For the next thematic level (chapter) GLC offers 389 classes and for the inner and last thematic level (subject), GLC has 2285 classes.\n\n\nGLC classes are divided into three categories for each thematic level: frequent classes, which occur in more than 10 training documents and can be found in all three subsets (training, development and test); few-shot classes which appear in 1 to 10 training documents and also appear in the documents of the development and test sets, and zero-shot classes which appear in the development and/or test, but not in the training documents.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nMulti-class Text Classification: Given the text of a document, a model predicts the corresponding class.\n\n\nFew-shot and Zero-shot learning: As already noted, the classes can be divided into three groups: frequent, few-shot, and zero- shot, depending on whether they were assigned to more than 10, fewer than 10 but at least one, or no training documents, respectively.", "### Languages\n\n\nAll documents are written in Greek.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nThe following data fields are provided for documents ('train', 'dev', 'test'):\n\n\n'text': (str) The full content of each document, which is represented by its 'header' and 'articles' (i.e., the 'main\\_body'). \n\n'label': (class label): Depending on the configurarion, the volume/chapter/subject of the document. 
For volume-level class it belongs to specifically: [\"ΚΟΙΝΩΝΙΚΗ ΠΡΟΝΟΙΑ\",\n\"ΓΕΩΡΓΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΡΑΔΙΟΦΩΝΙΑ ΚΑΙ ΤΥΠΟΣ\",\n\"ΒΙΟΜΗΧΑΝΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΥΓΕΙΟΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΠΟΛΕΜΙΚΟ ΝΑΥΤΙΚΟ\",\n\"ΤΑΧΥΔΡΟΜΕΙΑ - ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ\",\n\"ΔΑΣΗ ΚΑΙ ΚΤΗΝΟΤΡΟΦΙΑ\",\n\"ΕΛΕΓΚΤΙΚΟ ΣΥΝΕΔΡΙΟ ΚΑΙ ΣΥΝΤΑΞΕΙΣ\",\n\"ΠΟΛΕΜΙΚΗ ΑΕΡΟΠΟΡΙΑ\",\n\"ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ\",\n\"ΝΟΜΟΘΕΣΙΑ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ ΤΡΑΠΕΖΩΝ ΚΑΙ ΧΡΗΜΑΤΙΣΤΗΡΙΩΝ\",\n\"ΠΟΛΙΤΙΚΗ ΑΕΡΟΠΟΡΙΑ\",\n\"ΕΜΜΕΣΗ ΦΟΡΟΛΟΓΙΑ\",\n\"ΚΟΙΝΩΝΙΚΕΣ ΑΣΦΑΛΙΣΕΙΣ\",\n\"ΝΟΜΟΘΕΣΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ\",\n\"ΝΟΜΟΘΕΣΙΑ ΕΠΙΜΕΛΗΤΗΡΙΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΚΑΙ ΣΩΜΑΤΕΙΩΝ\",\n\"ΔΗΜΟΣΙΑ ΕΡΓΑ\",\n\"ΔΙΟΙΚΗΣΗ ΔΙΚΑΙΟΣΥΝΗΣ\",\n\"ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ\",\n\"ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΚΠΑΙΔΕΥΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΔΗΜΟΣΙΟ ΛΟΓΙΣΤΙΚΟ\",\n\"ΤΕΛΩΝΕΙΑΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΣΥΓΚΟΙΝΩΝΙΕΣ\",\n\"ΕΘΝΙΚΗ ΑΜΥΝΑ\",\n\"ΣΤΡΑΤΟΣ ΞΗΡΑΣ\",\n\"ΑΓΟΡΑΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΔΗΜΟΣΙΟΙ ΥΠΑΛΛΗΛΟΙ\",\n\"ΠΕΡΙΟΥΣΙΑ ΔΗΜΟΣΙΟΥ ΚΑΙ ΝΟΜΙΣΜΑ\",\n\"ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ\",\n\"ΛΙΜΕΝΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΠΟΛΙΤΙΚΗ ΔΙΚΟΝΟΜΙΑ\",\n\"ΔΙΠΛΩΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΔΙΟΙΚΗΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΑΜΕΣΗ ΦΟΡΟΛΟΓΙΑ\",\n\"ΤΥΠΟΣ ΚΑΙ ΤΟΥΡΙΣΜΟΣ\",\n\"ΕΘΝΙΚΗ ΟΙΚΟΝΟΜΙΑ\",\n\"ΑΣΤΥΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΑΓΡΟΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΡΓΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΜΠΟΡΙΚΗ ΝΟΜΟΘΕΣΙΑ\",\n\"ΕΠΙΣΤΗΜΕΣ ΚΑΙ ΤΕΧΝΕΣ\",\n\"ΕΜΠΟΡΙΚΗ ΝΑΥΤΙΛΙΑ\",\n\"ΣΥΝΤΑΓΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ\"\n] \\\n\n\nThe labels can also be a the chapter-level or subject-level class it belongs to. Some chapter labels are omitted due to size (389 classes). Some subject labels are also omitted due to size (2285 classes).", "### Data Splits\n\n\nSplit: Train, No of Documents: 28,536, Avg. words: 600\nSplit: Development, No of Documents: 9,511, Avg. words: 574\nSplit: Test, No of Documents: 9,516, Avg. words: 595\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Papaloukas et al. 
(2021) with the hope to support and encourage further research in NLP for the Greek language.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe ''Permanent Greek Legislation Code - Raptarchis'' is a thorough catalogue of Greek legislation since the creation of the Greek state in 1834 until 2015. It includes Laws, Royal and Presidential Decrees, Regulations and Decisions, retrieved from the Official Government Gazette, where Greek legislation is published. This collection is one of the official, publicly available sources of classified Greek legislation suitable for classification tasks.\n\n\nCurrently, the original catalogue is publicly offered in MS Word (.doc) format through the portal e-Themis, the legal database and management service of it, under the administration of the Ministry of the Interior (Affairs). E-Themis is primarily focused on providing legislation on a multitude of predefined thematic categories, as described in the catalogue. The main goal is to help users find legislation of interest using the thematic index.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nPapaloukas et al. (2021)", "### Licensing Information\n\n\n*Christos Papaloukas, Ilias Chalkidis, Konstantinos Athinaios, Despina-Athanasia Pantazi and Manolis Koubarakis.*\n*Multi-granular Legal Topic Classification on Greek Legislation.*\n*Proceedings of the 3rd Natural Legal Language Processing (NLLP) Workshop, Punta Cana, Dominican Republic, 2021*", "### Contributions\n\n\nThanks to @christospi for adding this dataset." ]
c597cae2314ea05de6792a5a7d5f1185f639d2a0
# Dataset Card for "guardian_authorship" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://www.icsd.aegean.gr/lecturers/stamatatos/papers/JLP2013.pdf](http://www.icsd.aegean.gr/lecturers/stamatatos/papers/JLP2013.pdf) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 49.61 MB - **Size of the generated dataset:** 38.98 MB - **Total amount of disk used:** 88.59 MB ### Dataset Summary A dataset for cross-topic authorship attribution. The dataset is provided by Stamatatos 2013.
1- The cross-topic scenarios are based on Table-4 in Stamatatos 2017 (Ex. cross_topic_1 => row 1:P S U&W ). 2- The cross-genre scenarios are based on Table-5 in the same paper. (Ex. cross_genre_1 => row 1:B P S&U&W). 3- The same-topic/genre scenario is created by grouping all the datasets as follows. For ex., to use same_topic and split the data 60-40 use: train_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>", split='train[:60%]+validation[:60%]+test[:60%]') tests_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>", split='train[-40%:]+validation[-40%:]+test[-40%:]') IMPORTANT: train+validation+test[:60%] will generate the wrong splits because the data is imbalanced * See https://huggingface.co/docs/datasets/splits.html for detailed/more examples ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### cross_genre_1 - **Size of downloaded dataset files:** 3.10 MB - **Size of the generated dataset:** 2.74 MB - **Total amount of disk used:** 5.84 MB An example of 'train' looks as follows. ``` { "article": "File 1a\n", "author": 0, "topic": 4 } ``` #### cross_genre_2 - **Size of downloaded dataset files:** 3.10 MB - **Size of the generated dataset:** 2.74 MB - **Total amount of disk used:** 5.84 MB An example of 'validation' looks as follows. ``` { "article": "File 1a\n", "author": 0, "topic": 1 } ``` #### cross_genre_3 - **Size of downloaded dataset files:** 3.10 MB - **Size of the generated dataset:** 2.74 MB - **Total amount of disk used:** 5.84 MB An example of 'validation' looks as follows.
``` { "article": "File 1a\n", "author": 0, "topic": 2 } ``` #### cross_genre_4 - **Size of downloaded dataset files:** 3.10 MB - **Size of the generated dataset:** 2.74 MB - **Total amount of disk used:** 5.84 MB An example of 'validation' looks as follows. ``` { "article": "File 1a\n", "author": 0, "topic": 3 } ``` #### cross_topic_1 - **Size of downloaded dataset files:** 3.10 MB - **Size of the generated dataset:** 2.34 MB - **Total amount of disk used:** 5.43 MB An example of 'validation' looks as follows. ``` { "article": "File 1a\n", "author": 0, "topic": 1 } ``` ### Data Fields The data fields are the same among all splits. #### cross_genre_1 - `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4). - `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4). - `article`: a `string` feature. #### cross_genre_2 - `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4). - `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4). - `article`: a `string` feature. #### cross_genre_3 - `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4). - `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4). - `article`: a `string` feature. #### cross_genre_4 - `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4). 
- `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4). - `article`: a `string` feature. #### cross_topic_1 - `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4). - `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4). - `article`: a `string` feature. ### Data Splits
| name |train|validation|test|
|-------------|----:|---------:|---:|
|cross_genre_1| 63| 112| 269|
|cross_genre_2| 63| 62| 319|
|cross_genre_3| 63| 90| 291|
|cross_genre_4| 63| 117| 264|
|cross_topic_1| 112| 62| 207|
## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{article, author = {Stamatatos, Efstathios}, year = {2013}, month = {01}, pages = {421-439}, title = {On the robustness of authorship attribution based on character n-gram features}, volume = {21}, journal = {Journal of Law and Policy} } @inproceedings{stamatatos2017authorship, title={Authorship attribution using text distortion}, author={Stamatatos, Efstathios}, booktitle={Proc. of the 15th Conf. of the European Chapter of the Association for Computational Linguistics}, volume={1}, pages={1138--1149}, year={2017} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@eltoto1219](https://github.com/eltoto1219), [@malikaltakrori](https://github.com/malikaltakrori) for adding this dataset.
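The IMPORTANT note in the Dataset Summary about split slicing can be sanity-checked with a quick back-of-the-envelope calculation using the cross_topic_1 sizes from the Data Splits table. Note that `pct_slice` below only approximates the library's percent-slice rounding and is not its actual implementation:

```python
def pct_slice(n, pct):
    # Approximate count selected by a percent slice like split[:60%];
    # the datasets library rounds percent boundaries to the closest index.
    return round(n * pct / 100)

# cross_topic_1 split sizes, taken from the Data Splits table.
sizes = {"train": 112, "validation": 62, "test": 207}

# Recommended form: 'train[:60%]+validation[:60%]+test[:60%]' takes
# 60% of EACH split separately, preserving the per-split balance.
per_split = sum(pct_slice(n, 60) for n in sizes.values())

# Discouraged form: 'train+validation+test[:60%]' applies the slice only
# to 'test', so it selects all of train, all of validation, and 60% of
# test -- a very different (and skewed) training set, as the card warns.
wrong = sizes["train"] + sizes["validation"] + pct_slice(sizes["test"], 60)
```

Running this sketch, the two forms select noticeably different numbers of examples (228 vs. 298 here), which is why the card recommends slicing each split individually for the same-topic/genre scenario.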
guardian_authorship
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "topic-classification"], "pretty_name": "GuardianAuthorship", "dataset_info": [{"config_name": "cross_topic_1", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 677054, "num_examples": 112}, {"name": "test", "num_bytes": 1283126, "num_examples": 207}, {"name": "validation", "num_bytes": 374390, "num_examples": 62}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_genre_1", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 406144, "num_examples": 63}, {"name": "test", "num_bytes": 1657512, "num_examples": 269}, {"name": "validation", "num_bytes": 677054, "num_examples": 112}], "download_size": 3100749, "dataset_size": 2740710}, 
{"config_name": "cross_topic_2", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 677054, "num_examples": 112}, {"name": "test", "num_bytes": 1104764, "num_examples": 179}, {"name": "validation", "num_bytes": 552752, "num_examples": 90}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_3", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 677054, "num_examples": 112}, {"name": "test", "num_bytes": 927138, "num_examples": 152}, {"name": "validation", "num_bytes": 730378, "num_examples": 117}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_4", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": 
"topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 374390, "num_examples": 62}, {"name": "test", "num_bytes": 1283126, "num_examples": 207}, {"name": "validation", "num_bytes": 677054, "num_examples": 112}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_5", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 374390, "num_examples": 62}, {"name": "test", "num_bytes": 1407428, "num_examples": 229}, {"name": "validation", "num_bytes": 552752, "num_examples": 90}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_6", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 374390, "num_examples": 62}, {"name": "test", "num_bytes": 1229802, "num_examples": 202}, {"name": "validation", "num_bytes": 730378, "num_examples": 117}], "download_size": 3100749, 
"dataset_size": 2334570}, {"config_name": "cross_topic_7", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 552752, "num_examples": 90}, {"name": "test", "num_bytes": 1104764, "num_examples": 179}, {"name": "validation", "num_bytes": 677054, "num_examples": 112}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_8", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 552752, "num_examples": 90}, {"name": "test", "num_bytes": 1407428, "num_examples": 229}, {"name": "validation", "num_bytes": 374390, "num_examples": 62}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_9", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": 
"zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 552752, "num_examples": 90}, {"name": "test", "num_bytes": 1051440, "num_examples": 174}, {"name": "validation", "num_bytes": 730378, "num_examples": 117}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_10", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 730378, "num_examples": 117}, {"name": "test", "num_bytes": 927138, "num_examples": 152}, {"name": "validation", "num_bytes": 677054, "num_examples": 112}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_11", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 730378, "num_examples": 117}, {"name": "test", "num_bytes": 1229802, "num_examples": 202}, {"name": "validation", "num_bytes": 374390, "num_examples": 62}], 
"download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_topic_12", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 730378, "num_examples": 117}, {"name": "test", "num_bytes": 1051440, "num_examples": 174}, {"name": "validation", "num_bytes": 552752, "num_examples": 90}], "download_size": 3100749, "dataset_size": 2334570}, {"config_name": "cross_genre_2", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 406144, "num_examples": 63}, {"name": "test", "num_bytes": 1960176, "num_examples": 319}, {"name": "validation", "num_bytes": 374390, "num_examples": 62}], "download_size": 3100749, "dataset_size": 2740710}, {"config_name": "cross_genre_3", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", 
"11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 406144, "num_examples": 63}, {"name": "test", "num_bytes": 1781814, "num_examples": 291}, {"name": "validation", "num_bytes": 552752, "num_examples": 90}], "download_size": 3100749, "dataset_size": 2740710}, {"config_name": "cross_genre_4", "features": [{"name": "author", "dtype": {"class_label": {"names": {"0": "catherinebennett", "1": "georgemonbiot", "2": "hugoyoung", "3": "jonathanfreedland", "4": "martinkettle", "5": "maryriddell", "6": "nickcohen", "7": "peterpreston", "8": "pollytoynbee", "9": "royhattersley", "10": "simonhoggart", "11": "willhutton", "12": "zoewilliams"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Politics", "1": "Society", "2": "UK", "3": "World", "4": "Books"}}}}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 406144, "num_examples": 63}, {"name": "test", "num_bytes": 1604188, "num_examples": 264}, {"name": "validation", "num_bytes": 730378, "num_examples": 117}], "download_size": 3100749, "dataset_size": 2740710}]}
2024-01-18T11:04:28+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #region-us
Dataset Card for "guardian\_authorship" ======================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 49.61 MB * Size of the generated dataset: 38.98 MB * Total amount of disk used: 88.59 MB ### Dataset Summary A dataset for cross-topic authorship attribution. The dataset is provided by Stamatatos 2013. 1- The cross-topic scenarios are based on Table-4 in Stamatatos 2017 (Ex. cross\_topic\_1 => row 1:P S U&W ). 2- The cross-genre scenarios are based on Table-5 in the same paper. (Ex. cross\_genre\_1 => row 1:B P S&U&W). 3- The same-topic/genre scenario is created by grouping all the datasets as follows. For ex., to use same\_topic and split the data 60-40 use: train\_ds = load\_dataset('guardian\_authorship', name="cross\_topic\_<<#>>", split='train[:60%]+validation[:60%]+test[:60%]') tests\_ds = load\_dataset('guardian\_authorship', name="cross\_topic\_<<#>>", split='train[-40%:]+validation[-40%:]+test[-40%:]') IMPORTANT: train+validation+test[:60%] will generate the wrong splits because the data is imbalanced * See URL for detailed/more examples ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### cross\_genre\_1 * Size of downloaded dataset files: 3.10 MB * Size of the generated dataset: 2.74 MB * Total amount of disk used: 5.84 MB An example of 'train' looks as follows. 
#### cross\_genre\_2 * Size of downloaded dataset files: 3.10 MB * Size of the generated dataset: 2.74 MB * Total amount of disk used: 5.84 MB An example of 'validation' looks as follows. #### cross\_genre\_3 * Size of downloaded dataset files: 3.10 MB * Size of the generated dataset: 2.74 MB * Total amount of disk used: 5.84 MB An example of 'validation' looks as follows. #### cross\_genre\_4 * Size of downloaded dataset files: 3.10 MB * Size of the generated dataset: 2.74 MB * Total amount of disk used: 5.84 MB An example of 'validation' looks as follows. #### cross\_topic\_1 * Size of downloaded dataset files: 3.10 MB * Size of the generated dataset: 2.34 MB * Total amount of disk used: 5.43 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### cross\_genre\_1 * 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4). * 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4). * 'article': a 'string' feature. #### cross\_genre\_2 * 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4). * 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4). * 'article': a 'string' feature. #### cross\_genre\_3 * 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4). * 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4). * 'article': a 'string' feature. 
#### cross\_genre\_4 * 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4). * 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4). * 'article': a 'string' feature. #### cross\_topic\_1 * 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4). * 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4). * 'article': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @eltoto1219, @malikaltakrori for adding this dataset.
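The IMPORTANT warning in the Dataset Summary above — slice each split separately rather than slicing the concatenation — can be illustrated without downloading anything. A plain-Python sketch using the cross_topic_1 split sizes (112/62/207) from the Data Splits table:

```python
# Toy stand-ins for the three cross_topic_1 splits (sizes from the card).
train = ["t"] * 112
validation = ["v"] * 62
test = ["x"] * 207

def first_pct(xs, p):
    """First p fraction of a list (truncated)."""
    return xs[: int(len(xs) * p)]

# Per-split slicing, as the card recommends:
per_split = (first_pct(train, 0.6)
             + first_pct(validation, 0.6)
             + first_pct(test, 0.6))

# Slicing the concatenation instead (the discouraged form):
concat = first_pct(train + validation + test, 0.6)

# Same total size, very different composition: the concatenated slice
# over-samples the earlier splits because the split sizes are imbalanced.
print(len(per_split), per_split.count("x"))   # 228 124
print(len(concat), concat.count("x"))         # 228 54
```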
[ "### Dataset Summary\n\n\nA dataset cross-topic authorship attribution. The dataset is provided by Stamatatos 2013.\n1- The cross-topic scenarios are based on Table-4 in Stamatatos 2017 (Ex. cross\\_topic\\_1 => row 1:P S U&W ).\n2- The cross-genre scenarios are based on Table-5 in the same paper. (Ex. cross\\_genre\\_1 => row 1:B P S&U&W).\n\n\n3- The same-topic/genre scenario is created by grouping all the datasts as follows.\nFor ex., to use same\\_topic and split the data 60-40 use:\ntrain\\_ds = load\\_dataset('guardian\\_authorship', name=\"cross\\_topic\\_<<#>>\",\nsplit='train[:60%]+validation[:60%]+test[:60%]')\ntests\\_ds = load\\_dataset('guardian\\_authorship', name=\"cross\\_topic\\_<<#>>\",\nsplit='train[-40%:]+validation[-40%:]+test[-40%:]')\n\n\nIMPORTANT: train+validation+test[:60%] will generate the wrong splits because the data is imbalanced\n\n\n* See URL for detailed/more examples", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### cross\\_genre\\_1\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'train' looks as follows.", "#### cross\\_genre\\_2\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'validation' looks as follows.", "#### cross\\_genre\\_3\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'validation' looks as follows.", "#### cross\\_genre\\_4\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'validation' looks as follows.", "#### cross\\_topic\\_1\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.34 MB\n* Total amount of disk used: 
5.43 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### cross\\_genre\\_1\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_genre\\_2\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_genre\\_3\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_genre\\_4\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_topic\\_1\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", 
"### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @eltoto1219, @malikaltakrori for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nA dataset cross-topic authorship attribution. The dataset is provided by Stamatatos 2013.\n1- The cross-topic scenarios are based on Table-4 in Stamatatos 2017 (Ex. cross\\_topic\\_1 => row 1:P S U&W ).\n2- The cross-genre scenarios are based on Table-5 in the same paper. (Ex. cross\\_genre\\_1 => row 1:B P S&U&W).\n\n\n3- The same-topic/genre scenario is created by grouping all the datasts as follows.\nFor ex., to use same\\_topic and split the data 60-40 use:\ntrain\\_ds = load\\_dataset('guardian\\_authorship', name=\"cross\\_topic\\_<<#>>\",\nsplit='train[:60%]+validation[:60%]+test[:60%]')\ntests\\_ds = load\\_dataset('guardian\\_authorship', name=\"cross\\_topic\\_<<#>>\",\nsplit='train[-40%:]+validation[-40%:]+test[-40%:]')\n\n\nIMPORTANT: train+validation+test[:60%] will generate the wrong splits because the data is imbalanced\n\n\n* See URL for detailed/more examples", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### cross\\_genre\\_1\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'train' looks as follows.", "#### cross\\_genre\\_2\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'validation' looks as follows.", "#### cross\\_genre\\_3\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'validation' looks as follows.", "#### cross\\_genre\\_4\n\n\n* Size of downloaded 
dataset files: 3.10 MB\n* Size of the generated dataset: 2.74 MB\n* Total amount of disk used: 5.84 MB\n\n\nAn example of 'validation' looks as follows.", "#### cross\\_topic\\_1\n\n\n* Size of downloaded dataset files: 3.10 MB\n* Size of the generated dataset: 2.34 MB\n* Total amount of disk used: 5.43 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### cross\\_genre\\_1\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_genre\\_2\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_genre\\_3\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_genre\\_4\n\n\n* 'author': a classification label, with possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "#### cross\\_topic\\_1\n\n\n* 'author': a classification label, with 
possible values including 'catherinebennett' (0), 'georgemonbiot' (1), 'hugoyoung' (2), 'jonathanfreedland' (3), 'martinkettle' (4).\n* 'topic': a classification label, with possible values including 'Politics' (0), 'Society' (1), 'UK' (2), 'World' (3), 'Books' (4).\n* 'article': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @eltoto1219, @malikaltakrori for adding this dataset." ]
a798d2e917bde4235873e601386571e8ca602530
# Dataset Card for the Gutenberg Time dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Repository](https://github.com/allenkim/what-time-is-it)** - **[Paper](https://arxiv.org/abs/2011.04124)** ### Dataset Summary A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg. ### Supported Tasks and Leaderboards Time-of-the-day classification from excerpts. ### Languages English. ## Dataset Structure ### Data Instances ``` { "guten_id": 28999, "hour_reference": 12, "time_phrase": "midday", "is_ambiguous": False, "time_pos_start": 133, "time_pos_end": 134, "tok_context": "Sorrows and trials she had had in plenty in her life , but these the sweetness of her nature had transformed , so that from being things difficult to bear , she had built up with them her own character . 
Sorrow had increased her own power of sympathy ; out of trials she had learnt patience ; and failure and the gradual sinking of one she had loved into the bottomless slough of evil habit had but left her with an added dower of pity and tolerance . So the past had no sting left , and if iron had ever entered into her soul it now but served to make it strong . She was still young , too ; it was not near sunset with her yet , nor even midday , and the future that , humanly speaking , she counted to be hers was almost dazzling in its brightness . For love had dawned for her again , and no uncertain love , wrapped in the mists of memory , but one that had ripened through liking and friendship and intimacy into the authentic glory . He was in England , too ; she was going back to him . And before very long she would never go away from him again ." } ``` ### Data Fields ``` guten_id - Gutenberg ID number hour_reference - hour from 0 to 23 time_phrase - the phrase corresponding to the referenced hour is_ambiguous - boolean; True when it is unclear whether the time is AM or PM time_pos_start - token position where time_phrase begins time_pos_end - token position where time_phrase ends (exclusive) tok_context - context in which time_phrase appears as space-separated tokens ``` ### Data Splits No data splits. ## Dataset Creation ### Curation Rationale The flow of time is an indispensable guide for our actions, and provides a framework in which to see a logical progression of events. Just as in real life, the clock provides the background against which literary works play out: when characters wake, eat, and act. In most works of fiction, the events of the story take place during recognizable time periods over the course of the day. Recognizing a story’s flow through time is essential to understanding the text. In this paper, we try to capture the flow of time through novels by attempting to recognize what time of day each event in the story takes place at. 
### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Novel authors. ### Annotations #### Annotation process Manually annotated. #### Who are the annotators? Two of the authors. ### Personal and Sensitive Information No Personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Allen Kim, Charuta Pethe and Steven Skiena, Stony Brook University ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{kim2020time, title={What time is it? Temporal Analysis of Novels}, author={Allen Kim and Charuta Pethe and Steven Skiena}, year={2020}, eprint={2011.04124}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
gutenberg_time
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "arxiv:2011.04124", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "gutenberg-time-dataset", "pretty_name": "the Gutenberg Time dataset", "dataset_info": {"features": [{"name": "guten_id", "dtype": "string"}, {"name": "hour_reference", "dtype": "string"}, {"name": "time_phrase", "dtype": "string"}, {"name": "is_ambiguous", "dtype": "bool_"}, {"name": "time_pos_start", "dtype": "int64"}, {"name": "time_pos_end", "dtype": "int64"}, {"name": "tok_context", "dtype": "string"}], "config_name": "gutenberg", "splits": [{"name": "train", "num_bytes": 108550391, "num_examples": 120694}], "download_size": 35853781, "dataset_size": 108550391}}
2024-01-18T11:04:30+00:00
[ "2011.04124" ]
[ "en" ]
365cc516c1a05ce78fc750597bcec838b65f1f2f
# Dataset Card for "hans" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/tommccoy1/hans](https://github.com/tommccoy1/hans) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 30.94 MB - **Size of the generated dataset:** 31.81 MB - **Total amount of disk used:** 62.76 MB ### Dataset Summary The HANS dataset is an NLI evaluation set that tests specific hypotheses about invalid heuristics that NLI models are likely to learn. 
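Because HANS uses only two classes (its `label` field, described under Data Fields below, maps `entailment` to 0 and `non-entailment` to 1), three-way NLI models are conventionally scored on it by collapsing `neutral` and `contradiction` predictions into `non-entailment`. A minimal sketch of that evaluation, assuming the usual MNLI label names for the three-way model:

```python
# HANS label ids per the card: entailment -> 0, non-entailment -> 1.
# A three-way model's neutral and contradiction predictions both count
# as non-entailment, since HANS does not distinguish between them.
COLLAPSE = {"entailment": 0, "neutral": 1, "contradiction": 1}

def hans_accuracy(three_way_preds, gold_ids):
    """Accuracy of collapsed three-way string predictions against HANS gold ids."""
    hits = sum(COLLAPSE[p] == g for p, g in zip(three_way_preds, gold_ids))
    return hits / len(gold_ids)
```

For example, `hans_accuracy(["entailment", "neutral"], [0, 1])` gives `1.0`, because the `neutral` prediction is counted as `non-entailment`.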
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 30.94 MB - **Size of the generated dataset:** 31.81 MB - **Total amount of disk used:** 62.76 MB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `non-entailment` (1). - `parse_premise`: a `string` feature. - `parse_hypothesis`: a `string` feature. - `binary_parse_premise`: a `string` feature. - `binary_parse_hypothesis`: a `string` feature. - `heuristic`: a `string` feature. - `subcase`: a `string` feature. - `template`: a `string` feature. ### Data Splits | name |train|validation| |----------|----:|---------:| |plain_text|30000| 30000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{DBLP:journals/corr/abs-1902-01007, author = {R. Thomas McCoy and Ellie Pavlick and Tal Linzen}, title = {Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference}, journal = {CoRR}, volume = {abs/1902.01007}, year = {2019}, url = {http://arxiv.org/abs/1902.01007}, archivePrefix = {arXiv}, eprint = {1902.01007}, timestamp = {Tue, 21 May 2019 18:03:36 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1902-01007.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@TevenLeScao](https://github.com/TevenLeScao), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
hans
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1902.01007", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "paperswithcode_id": "hans", "pretty_name": "Heuristic Analysis for NLI Systems", "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "non-entailment"}}}}, {"name": "parse_premise", "dtype": "string"}, {"name": "parse_hypothesis", "dtype": "string"}, {"name": "binary_parse_premise", "dtype": "string"}, {"name": "binary_parse_hypothesis", "dtype": "string"}, {"name": "heuristic", "dtype": "string"}, {"name": "subcase", "dtype": "string"}, {"name": "template", "dtype": "string"}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 15916371, "num_examples": 30000}, {"name": "validation", "num_bytes": 15893137, "num_examples": 30000}], "download_size": 30947358, "dataset_size": 31809508}}
2024-01-18T11:04:31+00:00
[ "1902.01007" ]
[ "en" ]
265ffc5409b6a87f8d6a6f77321c9925b55624a9
# Dataset Card for "hansards" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.isi.edu/natural-language/download/hansard/](https://www.isi.edu/natural-language/download/hansard/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 82.83 MB - **Size of the generated dataset:** 260.40 MB - **Total amount of disk used:** 343.23 MB ### Dataset Summary This release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments) from the official records (Hansards) of the 36th Canadian Parliament. 
The complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament, as far as available, were aligned. The corpus was then split into 5 sets of sentence pairs: training (80% of the sentence pairs), two sets of sentence pairs for testing (5% each), and two sets of sentence pairs for final evaluation (5% each). The current release consists of the training and testing sets. The evaluation sets are reserved for future MT evaluation purposes and currently not available. Caveats 1. This release contains only sentence pairs. Even though the order of the sentences is the same as in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many alignments that were filtered out. Therefore, this release may not be suitable for discourse-related research. 2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for pairs that differ considerably in length. You may want to filter these out before you do any statistical training. The alignment of the Hansards was performed as part of the ReWrite project under funding from the DARPA TIDES program. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### house - **Size of downloaded dataset files:** 67.58 MB - **Size of the generated dataset:** 214.37 MB - **Total amount of disk used:** 281.95 MB An example of 'train' looks as follows. ``` { "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):", "fr": "M. 
Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):" } ``` #### senate - **Size of downloaded dataset files:** 15.25 MB - **Size of the generated dataset:** 46.03 MB - **Total amount of disk used:** 61.28 MB An example of 'train' looks as follows. ``` { "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):", "fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):" } ``` ### Data Fields The data fields are the same among all splits. #### house - `fr`: a `string` feature. - `en`: a `string` feature. #### senate - `fr`: a `string` feature. - `en`: a `string` feature. ### Data Splits | name |train | test | |------|-----:|-----:| |house |947969|122290| |senate|182135| 25553| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
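Caveat 2 in the summary recommends filtering out pairs whose lengths differ considerably before doing any statistical training. A minimal sketch of such a filter — the token-count ratio threshold is an assumption for illustration, not part of the release:

```python
def keep_pair(en, fr, max_ratio=2.0):
    """Heuristic filter: drop pairs whose token counts differ by more than max_ratio."""
    n_en, n_fr = len(en.split()), len(fr.split())
    if min(n_en, n_fr) == 0:
        return False
    return max(n_en, n_fr) / min(n_en, n_fr) <= max_ratio

pairs = [
    ("Order, please.", "À l'ordre."),  # 2 vs 2 tokens -> kept
    ("Yes.", "Le député a parlé longuement de cette question importante."),  # 1 vs 9 -> dropped
]
filtered = [p for p in pairs if keep_pair(*p)]
```

After filtering, only the first pair survives; the second is exactly the kind of badly mismatched alignment the caveat warns about.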
hansards
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"pretty_name": "hansards", "dataset_info": [{"config_name": "senate", "features": [{"name": "fr", "dtype": "string"}, {"name": "en", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 5711686, "num_examples": 25553}, {"name": "train", "num_bytes": 40324278, "num_examples": 182135}], "download_size": 15247360, "dataset_size": 46035964}, {"config_name": "house", "features": [{"name": "fr", "dtype": "string"}, {"name": "en", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 22906629, "num_examples": 122290}, {"name": "train", "num_bytes": 191459584, "num_examples": 947969}], "download_size": 67584000, "dataset_size": 214366213}]}
2024-01-18T11:04:33+00:00
[]
[]
[ "### Dataset Summary\n\n\nThis release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments)\nfrom the official records (Hansards) of the 36th Canadian Parliament.\n\n\nThe complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament,\nas far as available, were aligned. The corpus was then split into 5 sets of sentence pairs:\ntraining (80% of the sentence pairs), two sets of sentence pairs for testing (5% each), and\ntwo sets of sentence pairs for final evaluation (5% each). The current release consists of the\ntraining and testing sets. The evaluation sets are reserved for future MT evaluation purposes\nand currently not available.\n\n\nCaveats\n\n\n1. This release contains only sentence pairs. Even though the order of the sentences is the same\nas in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many\nalignments that were filtered out. Therefore, this release may not be suitable for\ndiscourse-related research.\n2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for\npairs that differ considerably in length. 
You may want to filter these out before you do\nany statistical training.\n\n\nThe alignment of the Hansards was performed as part of the ReWrite project under funding\nfrom the DARPA TIDES program.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### house\n\n\n* Size of downloaded dataset files: 67.58 MB\n* Size of the generated dataset: 214.37 MB\n* Total amount of disk used: 281.95 MB\n\n\nAn example of 'train' looks as follows.", "#### senate\n\n\n* Size of downloaded dataset files: 15.25 MB\n* Size of the generated dataset: 46.03 MB\n* Total amount of disk used: 61.28 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### house\n\n\n* 'fr': a 'string' feature.\n* 'en': a 'string' feature.", "#### senate\n\n\n* 'fr': a 'string' feature.\n* 'en': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @albertvillanova for adding this dataset." ]
[ "TAGS\n#region-us \n", "### Dataset Summary\n\n\nThis release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments)\nfrom the official records (Hansards) of the 36th Canadian Parliament.\n\n\nThe complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament,\nas far as available, were aligned. The corpus was then split into 5 sets of sentence pairs:\ntraining (80% of the sentence pairs), two sets of sentence pairs for testing (5% each), and\ntwo sets of sentence pairs for final evaluation (5% each). The current release consists of the\ntraining and testing sets. The evaluation sets are reserved for future MT evaluation purposes\nand currently not available.\n\n\nCaveats\n\n\n1. This release contains only sentence pairs. Even though the order of the sentences is the same\nas in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many\nalignments that were filtered out. Therefore, this release may not be suitable for\ndiscourse-related research.\n2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for\npairs that differ considerably in length. 
You may want to filter these out before you do\nany statistical training.\n\n\nThe alignment of the Hansards was performed as part of the ReWrite project under funding\nfrom the DARPA TIDES program.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### house\n\n\n* Size of downloaded dataset files: 67.58 MB\n* Size of the generated dataset: 214.37 MB\n* Total amount of disk used: 281.95 MB\n\n\nAn example of 'train' looks as follows.", "#### senate\n\n\n* Size of downloaded dataset files: 15.25 MB\n* Size of the generated dataset: 46.03 MB\n* Total amount of disk used: 61.28 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### house\n\n\n* 'fr': a 'string' feature.\n* 'en': a 'string' feature.", "#### senate\n\n\n* 'fr': a 'string' feature.\n* 'en': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @albertvillanova for adding this dataset." ]
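The Hansards caveat above recommends filtering out aligned pairs that differ considerably in length before any statistical training. A minimal sketch of such a length-ratio filter over `fr`/`en` pairs; the 2.0 threshold and the sample pairs are illustrative assumptions, not part of the original release:

```python
def keep_pair(fr: str, en: str, max_ratio: float = 2.0) -> bool:
    """Keep an aligned pair only if its token counts are within max_ratio of each other."""
    len_fr, len_en = len(fr.split()), len(en.split())
    if min(len_fr, len_en) == 0:  # an empty side is almost certainly a bad alignment
        return False
    return max(len_fr, len_en) / min(len_fr, len_en) <= max_ratio

# Hypothetical pairs in the house/senate schema ({'fr': ..., 'en': ...}).
pairs = [
    {"fr": "Bonjour tout le monde", "en": "Hello everyone"},
    {"fr": "Oui", "en": "Yes, absolutely, without any doubt whatsoever"},
]
filtered = [p for p in pairs if keep_pair(p["fr"], p["en"])]  # drops the second pair
```

A stricter pipeline might also compare character lengths, since tokenization granularity differs between French and English.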
b108d2c32ee4e1f4176ea233e1a5ac17bceb9ef9
# Dataset Card for Hard ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Hard](https://github.com/elnagara/HARD-Arabic-Dataset) - **Repository:** [Hard](https://github.com/elnagara/HARD-Arabic-Dataset) - **Paper:** [Hotel Arabic-Reviews Dataset Construction for Sentiment Analysis Applications](https://link.springer.com/chapter/10.1007/978-3-319-67056-0_3) - **Point of Contact:** [Ashraf Elnagar]([email protected]) ### Dataset Summary This dataset contains 93,700 hotel reviews in the Arabic language. The hotel reviews were collected from the Booking.com website during June/July 2016. The reviews are expressed in Modern Standard Arabic as well as dialectal Arabic. The following table summarizes some statistics on the HARD Dataset. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is based on Arabic. ## Dataset Structure ### Data Instances A typical data point comprises a rating from 1 to 5 for hotels. 
### Data Fields [More Information Needed] ### Data Splits The dataset is not split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ### Contributions Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
hard
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "hard", "pretty_name": "Hotel Arabic-Reviews Dataset", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5"}}}}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 27507085, "num_examples": 105698}], "download_size": 8508677, "dataset_size": 27507085}}
2024-01-18T11:04:34+00:00
[]
[ "ar" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-unknown #region-us
# Dataset Card for Hard ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Hard - Repository: Hard - Paper: Hotel Arabic-Reviews Dataset Construction for Sentiment Analysis Applications - Point of Contact: Ashraf Elnagar ### Dataset Summary This dataset contains 93,700 hotel reviews in the Arabic language. The hotel reviews were collected from the URL website during June/July 2016. The reviews are expressed in Modern Standard Arabic as well as dialectal Arabic. The following table summarizes some statistics on the HARD Dataset. ### Supported Tasks and Leaderboards ### Languages The dataset is based on Arabic. ## Dataset Structure ### Data Instances A typical data point comprises a rating from 1 to 5 for hotels. ### Data Fields ### Data Splits The dataset is not split. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @zaidalyafeai for adding this dataset.
[ "# Dataset Card for Hard", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Hard\n- Repository: Hard\n- Paper: Hotel Arabic-Reviews Dataset Construction for Sentiment Analysis Applications\n- Point of Contact: Ashraf Elnagar", "### Dataset Summary\n\nThis dataset contains 93,700 hotel reviews in Arabic language.The hotel reviews were collected from URL website during June/July 2016.The reviews are expressed in Modern Standard Arabic as well as dialectal Arabic.The following table summarize some tatistics on the HARD Dataset.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is based on Arabic.", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises a rating from 1 to 5 for hotels.", "### Data Fields", "### Data Splits\n\nThe dataset is not split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @zaidalyafeai for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-unknown #region-us \n", "# Dataset Card for Hard", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Hard\n- Repository: Hard\n- Paper: Hotel Arabic-Reviews Dataset Construction for Sentiment Analysis Applications\n- Point of Contact: Ashraf Elnagar", "### Dataset Summary\n\nThis dataset contains 93,700 hotel reviews in Arabic language.The hotel reviews were collected from URL website during June/July 2016.The reviews are expressed in Modern Standard Arabic as well as dialectal Arabic.The following table summarize some tatistics on the HARD Dataset.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is based on Arabic.", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises a rating from 1 to 5 for hotels.", "### Data Fields", "### Data Splits\n\nThe dataset is not split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other 
Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @zaidalyafeai for adding this dataset." ]
44b8c3a1ee9220cd697c667905cfbab65891dff9
# Dataset Card for HAREM ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [HAREM homepage](https://www.linguateca.pt/primeiroHAREM/harem_coleccaodourada_en.html) - **Repository:** [HAREM repository](https://www.linguateca.pt/primeiroHAREM/harem_coleccaodourada_en.html) - **Paper:** [HAREM: An Advanced NER Evaluation Contest for Portuguese](http://comum.rcaap.pt/bitstream/10400.26/76/1/SantosSecoCardosoVilelaLREC2006.pdf) - **Point of Contact:** [Diana Santos](mailto:[email protected]) ### Dataset Summary The HAREM is a Portuguese language corpus commonly used for Named Entity Recognition tasks. It includes about 93k words, from 129 different texts, from several genres, and language varieties. The split of this dataset version follows the division made by [1], where 7% of the HAREM documents are the validation set and the miniHAREM corpus (with about 65k words) is the test set. 
There are two versions of the dataset, a version that has a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event, Abstraction, and Other) and a "selective" version with only 5 classes (Person, Organization, Location, Value, and Date). It's important to note that the original version of the HAREM dataset has 2 levels of NER details, namely "Category" and "Sub-type". The dataset version processed here ONLY USES the "Category" level of the original dataset. [1] Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese." Brazilian Conference on Intelligent Systems. Springer, Cham, 2020. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Portuguese ## Dataset Structure ### Data Instances ``` { "id": "HAREM-871-07800", "ner_tags": [3, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4], "tokens": [ "Abraço", "Página", "Principal", "ASSOCIAÇÃO", "DE", "APOIO", "A", "PESSOAS", "COM", "VIH", "/", "SIDA" ] } ``` ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "O", "B-PESSOA", "I-PESSOA", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-LOCAL", "I-LOCAL", "B-TEMPO", "I-TEMPO", "B-VALOR", "I-VALOR", "B-ABSTRACCAO", "I-ABSTRACCAO", "B-ACONTECIMENTO", "I-ACONTECIMENTO", "B-COISA", "I-COISA", "B-OBRA", "I-OBRA", "B-OUTRO", "I-OUTRO" ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. ### Data Splits The data is split into train, validation and test set for each of the two versions (default and selective). 
The split sizes are as follows: | Train | Val | Test | | ------ | ----- | ---- | | 121 | 8 | 128 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{santos2006harem, title={HAREM: An Advanced NER Evaluation Contest for Portuguese}, author={Santos, Diana and Seco, Nuno and Cardoso, Nuno and Vilela, Rui}, booktitle={In Nicoletta Calzolari; Khalid Choukri; Aldo Gangemi; Bente Maegaard; Joseph Mariani; Jan Odjik; Daniel Tapias (ed) Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC'2006) (Genoa, Italy, 22-28 May 2006)}, year={2006} } ``` ### Contributions Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
harem
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:pt", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "HAREM", "dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PESSOA", "2": "I-PESSOA", "3": "B-ORGANIZACAO", "4": "I-ORGANIZACAO", "5": "B-LOCAL", "6": "I-LOCAL", "7": "B-TEMPO", "8": "I-TEMPO", "9": "B-VALOR", "10": "I-VALOR", "11": "B-ABSTRACCAO", "12": "I-ABSTRACCAO", "13": "B-ACONTECIMENTO", "14": "I-ACONTECIMENTO", "15": "B-COISA", "16": "I-COISA", "17": "B-OBRA", "18": "I-OBRA", "19": "B-OUTRO", "20": "I-OUTRO"}}}}], "splits": [{"name": "train", "num_bytes": 1506373, "num_examples": 121}, {"name": "test", "num_bytes": 1062714, "num_examples": 128}, {"name": "validation", "num_bytes": 51318, "num_examples": 8}], "download_size": 1887281, "dataset_size": 2620405}, {"config_name": "selective", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PESSOA", "2": "I-PESSOA", "3": "B-ORGANIZACAO", "4": "I-ORGANIZACAO", "5": "B-LOCAL", "6": "I-LOCAL", "7": "B-TEMPO", "8": "I-TEMPO", "9": "B-VALOR", "10": "I-VALOR"}}}}], "splits": [{"name": "train", "num_bytes": 1506373, "num_examples": 121}, {"name": "test", "num_bytes": 1062714, "num_examples": 128}, {"name": "validation", "num_bytes": 51318, "num_examples": 8}], "download_size": 1715873, "dataset_size": 2620405}]}
2024-01-18T11:04:35+00:00
[]
[ "pt" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Portuguese #license-unknown #region-us
Dataset Card for HAREM ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: HAREM homepage * Repository: HAREM repository * Paper: HAREM: An Advanced NER Evaluation Contest for Portuguese * Point of Contact: Diana Santos ### Dataset Summary The HAREM is a Portuguese language corpus commonly used for Named Entity Recognition tasks. It includes about 93k words, from 129 different texts, from several genres, and language varieties. The split of this dataset version follows the division made by [1], where 7% of the HAREM documents are the validation set and the miniHAREM corpus (with about 65k words) is the test set. There are two versions of the dataset, a version that has a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event, Abstraction, and Other) and a "selective" version with only 5 classes (Person, Organization, Location, Value, and Date). It's important to note that the original version of the HAREM dataset has 2 levels of NER details, namely "Category" and "Sub-type". The dataset version processed here ONLY USES the "Category" level of the original dataset. [1] Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese." Brazilian Conference on Intelligent Systems. Springer, Cham, 2020. 
### Supported Tasks and Leaderboards ### Languages Portuguese Dataset Structure ----------------- ### Data Instances ### Data Fields * 'id': id of the sample * 'tokens': the tokens of the example text * 'ner\_tags': the NER tags of each token The NER tags correspond to this list: The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. ### Data Splits The data is split into train, validation and test set for each of the two versions (default and selective). The split sizes are as follows: Train: 121, Val: 8, Test: 128 Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @jonatasgrosman for adding this dataset.
[ "### Dataset Summary\n\n\nThe HAREM is a Portuguese language corpus commonly used for Named Entity Recognition tasks. It includes about 93k words, from 129 different texts,\nfrom several genres, and language varieties. The split of this dataset version follows the division made by [1], where 7% HAREM\ndocuments are the validation set and the miniHAREM corpus (with about 65k words) is the test set. There are two versions of the dataset set,\na version that has a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event,\nAbstraction, and Other) and a \"selective\" version with only 5 classes (Person, Organization, Location, Value, and Date).\n\n\nIt's important to note that the original version of the HAREM dataset has 2 levels of NER details, namely \"Category\" and \"Sub-type\".\nThe dataset version processed here ONLY USE the \"Category\" level of the original dataset.\n\n\n[1] Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. \"BERTimbau: Pretrained BERT Models for Brazilian Portuguese.\" Brazilian Conference on Intelligent Systems. Springer, Cham, 2020.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nPortuguese\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'id': id of the sample\n* 'tokens': the tokens of the example text\n* 'ner\\_tags': the NER tags of each token\n\n\nThe NER tags correspond to this list:\n\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word.", "### Data Splits\n\n\nThe data is split into train, validation and test set for each of the two versions (default and selective). 
The split sizes are as follows:\n\n\nTrain: 121, Val: 8, Test: 128\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jonatasgrosman for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Portuguese #license-unknown #region-us \n", "### Dataset Summary\n\n\nThe HAREM is a Portuguese language corpus commonly used for Named Entity Recognition tasks. It includes about 93k words, from 129 different texts,\nfrom several genres, and language varieties. The split of this dataset version follows the division made by [1], where 7% HAREM\ndocuments are the validation set and the miniHAREM corpus (with about 65k words) is the test set. There are two versions of the dataset set,\na version that has a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event,\nAbstraction, and Other) and a \"selective\" version with only 5 classes (Person, Organization, Location, Value, and Date).\n\n\nIt's important to note that the original version of the HAREM dataset has 2 levels of NER details, namely \"Category\" and \"Sub-type\".\nThe dataset version processed here ONLY USE the \"Category\" level of the original dataset.\n\n\n[1] Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. \"BERTimbau: Pretrained BERT Models for Brazilian Portuguese.\" Brazilian Conference on Intelligent Systems. Springer, Cham, 2020.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nPortuguese\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'id': id of the sample\n* 'tokens': the tokens of the example text\n* 'ner\\_tags': the NER tags of each token\n\n\nThe NER tags correspond to this list:\n\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word.", "### Data Splits\n\n\nThe data is split into train, validation and test set for each of the two versions (default and selective). 
The split sizes are as follows:\n\n\nTrain: 121, Val: 8, Test: 128\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jonatasgrosman for adding this dataset." ]
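The HAREM record above describes CoNLL-style BIO tags ("a B denotes the first item of a phrase and an I any non-initial word"). A minimal sketch of decoding such tags into entity spans; the tokens and the PESSOA/LOCAL tag names below are illustrative examples, not rows drawn from the dataset itself.

```python
def bio_to_spans(tokens, tags):
    """Decode CoNLL-style BIO tags into (label, text) entity spans."""
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):                 # B- opens a new phrase
            if current:
                spans.append((label, " ".join(current)))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:   # I- continues the open phrase
            current.append(token)
        else:                                    # O (or a stray I-) closes it
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:                                  # flush a phrase ending at EOS
        spans.append((label, " ".join(current)))
    return spans

# Hypothetical Portuguese example using two of the card's "selective" classes
tokens = ["Maria", "mora", "em", "São", "Paulo", "."]
tags = ["B-PESSOA", "O", "O", "B-LOCAL", "I-LOCAL", "O"]
print(bio_to_spans(tokens, tags))  # [('PESSOA', 'Maria'), ('LOCAL', 'São Paulo')]
```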
ffbd71a13d272ff0563b7ed097406ae4048afe75
# Dataset Card for [HasPart] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://allenai.org/data/haspartkb - **Repository:** - **Paper:** https://arxiv.org/abs/2006.07510 - **Leaderboard:** - **Point of Contact:** Peter Clark <[email protected]> ### Dataset Summary This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet. 
### Supported Tasks and Leaderboards Text Classification / Scoring - meronyms (e.g., `plant` has part `stem`) ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ``` {'arg1': 'plant', 'arg2': 'stem', 'score': 0.9991798414303377, 'synset': ['wn.plant.n.02', 'wn.stalk.n.02'], 'wikipedia_primary_page': ['Plant']} ``` ### Data Fields - `arg1`, `arg2`: These are the entities of the meronym, i.e., `arg1` _has\_part_ `arg2` - `score`: Meronymic score per the procedure described below - `synset`: Ontological classification from WordNet for the two entities - `wikipedia_primary_page`: Wikipedia page of the entities **Note**: some examples contain synset / wikipedia info for only one of the entities. ### Data Splits Single training file ## Dataset Creation Our approach to hasPart extraction has five steps: 1. Collect generic sentences from a large corpus 2. Train and apply a RoBERTa model to identify hasPart relations in those sentences 3. Normalize the entity names 4. Aggregate and filter the entries 5. Link the hasPart arguments to Wikipedia pages and WordNet senses Rather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use **GenericsKB**, a large repository of 3.4M standalone generics previously harvested from a Webcrawl of 1.7B sentences. ### Annotations #### Annotation process For each sentence _S_ in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's Doc.noun chunks). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. 
The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.: > `[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water.` where `[ARG1/2-B/E]` are special tokens denoting the argument boundaries. The `[CLS]` token is projected to two class labels (hasPart/notHasPart), and a softmax layer is then applied, resulting in output probabilities for the class labels. We train with cross-entropy loss. We use RoBERTa-large (24 layers), each with a hidden size of 1024, and 16 attention heads, and a total of 355M parameters. We use the pre-trained weights available with the model and further fine-tune the model parameters by training on our labeled data for 15 epochs. To train the model, we use a hand-annotated set of ∼2k examples. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @misc{bhakthavatsalam2020dogs, title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations}, author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark}, year={2020}, eprint={2006.07510}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
has_part
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-Generics-KB", "language:en", "license:unknown", "Meronym-Prediction", "arxiv:2006.07510", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-Generics-KB"], "task_categories": ["text-classification"], "task_ids": ["text-scoring"], "paperswithcode_id": "haspart-kb", "pretty_name": "hasPart KB", "tags": ["Meronym-Prediction"], "dataset_info": {"features": [{"name": "arg1", "dtype": "string"}, {"name": "arg2", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "wikipedia_primary_page", "sequence": "string"}, {"name": "synset", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 4363417, "num_examples": 49848}], "download_size": 7437382, "dataset_size": 4363417}}
2024-01-18T11:04:39+00:00
[ "2006.07510" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-Generics-KB #language-English #license-unknown #Meronym-Prediction #arxiv-2006.07510 #region-us
# Dataset Card for [HasPart] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: URL - Leaderboard: - Point of Contact: Peter Clark <peterc@URL> ### Dataset Summary This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet. ### Supported Tasks and Leaderboards Text Classification / Scoring - meronyms (e.g., 'plant' has part 'stem') ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - 'arg1', 'arg2': These are the entities of the meronym, i.e., 'arg1' _has\_part_ 'arg2' - 'score': Meronymic score per the procedure described below - 'synset': Ontological classification from WordNet for the two entities - 'wikipedia_primary_page': Wikipedia page of the entities Note: some examples contain synset / wikipedia info for only one of the entities. ### Data Splits Single training file ## Dataset Creation Our approach to hasPart extraction has five steps: 1. 
Collect generic sentences from a large corpus 2. Train and apply a RoBERTa model to identify hasPart relations in those sentences 3. Normalize the entity names 4. Aggregate and filter the entries 5. Link the hasPart arguments to Wikipedia pages and WordNet senses Rather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use GenericsKB, a large repository of 3.4M standalone generics previously harvested from a Webcrawl of 1.7B sentences. ### Annotations #### Annotation process For each sentence _S_ in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's URL chunks). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.: > '[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water.' where '[ARG1/2-B/E]' are special tokens denoting the argument boundaries. The '[CLS]' token is projected to two class labels (hasPart/notHasPart), and a softmax layer is then applied, resulting in output probabilities for the class labels. We train with cross-entropy loss. We use RoBERTa-large (24 layers), each with a hidden size of 1024, and 16 attention heads, and a total of 355M parameters. We use the pre-trained weights available with the model and further fine-tune the model parameters by training on our labeled data for 15 epochs. To train the model, we use a hand-annotated set of ∼2k examples. #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information @misc{bhakthavatsalam2020dogs, title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations}, author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark}, year={2020}, eprint={2006.07510}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to @jeromeku for adding this dataset.
[ "# Dataset Card for [HasPart]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Peter Clark <peterc@URL>", "### Dataset Summary\n\nThis dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. 
In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.", "### Supported Tasks and Leaderboards\n\nText Classification / Scoring - meronyms (e.g., 'plant' has part 'stem')", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'arg1', 'arg2': These are the entities of the meronym, i.e., 'arg1' _has\\_part_ 'arg2'\n- 'score': Meronymic score per the procedure described below\n- 'synset': Ontological classification from WordNet for the two entities\n- 'wikipedia_primary_page': Wikipedia page of the entities\n\nNote: some examples contain synset / wikipedia info for only one of the entities.", "### Data Splits\n\nSingle training file", "## Dataset Creation\n\nOur approach to hasPart extraction has five steps:\n\n1. Collect generic sentences from a large corpus\n2. Train and apply a RoBERTa model to identify hasPart relations in those sentences\n3. Normalize the entity names\n4. Aggregate and filter the entries\n5. Link the hasPart arguments to Wikipedia pages and WordNet senses\n\nRather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use GenericsKB, a large repository of 3.4M standalone generics previously harvested from a Webcrawl of 1.7B sentences.", "### Annotations", "#### Annotation process\n\nFor each sentence _S_ in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's URL chunks). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. 
The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.:\n\n> '[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to\nbreathe in water.'\n\nwhere '[ARG1/2-B/E]' are special tokens denoting the argument boundaries. The '[CLS]' token is projected to two class labels (hasPart/notHasPart), and a softmax layer is then applied, resulting in output probabilities for the class labels. We train with cross-entropy loss. We use RoBERTa-large (24 layers), each with a hidden size of 1024, and 16 attention heads, and a total of 355M parameters. We use the pre-trained weights available with the\nmodel and further fine-tune the model parameters by training on our labeled data for 15 epochs. To train the model, we use a hand-annotated set of ∼2k examples.", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@misc{bhakthavatsalam2020dogs,\n title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations}, \n author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},\n year={2020},\n eprint={2006.07510},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "### Contributions\n\nThanks to @jeromeku for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-Generics-KB #language-English #license-unknown #Meronym-Prediction #arxiv-2006.07510 #region-us \n", "# Dataset Card for [HasPart]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Peter Clark <peterc@URL>", "### Dataset Summary\n\nThis dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. 
In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.", "### Supported Tasks and Leaderboards\n\nText Classification / Scoring - meronyms (e.g., 'plant' has part 'stem')", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'arg1', 'arg2': These are the entities of the meronym, i.e., 'arg1' _has\\_part_ 'arg2'\n- 'score': Meronymic score per the procedure described below\n- 'synset': Ontological classification from WordNet for the two entities\n- 'wikipedia_primary_page': Wikipedia page of the entities\n\nNote: some examples contain synset / wikipedia info for only one of the entities.", "### Data Splits\n\nSingle training file", "## Dataset Creation\n\nOur approach to hasPart extraction has five steps:\n\n1. Collect generic sentences from a large corpus\n2. Train and apply a RoBERTa model to identify hasPart relations in those sentences\n3. Normalize the entity names\n4. Aggregate and filter the entries\n5. Link the hasPart arguments to Wikipedia pages and WordNet senses\n\nRather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use GenericsKB, a large repository of 3.4M standalone generics previously harvested from a Webcrawl of 1.7B sentences.", "### Annotations", "#### Annotation process\n\nFor each sentence _S_ in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's URL chunks). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. 
The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.:\n\n> '[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to\nbreathe in water.'\n\nwhere '[ARG1/2-B/E]' are special tokens denoting the argument boundaries. The '[CLS]' token is projected to two class labels (hasPart/notHasPart), and a softmax layer is then applied, resulting in output probabilities for the class labels. We train with cross-entropy loss. We use RoBERTa-large (24 layers), each with a hidden size of 1024, and 16 attention heads, and a total of 355M parameters. We use the pre-trained weights available with the\nmodel and further fine-tune the model parameters by training on our labeled data for 15 epochs. To train the model, we use a hand-annotated set of ∼2k examples.", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@misc{bhakthavatsalam2020dogs,\n title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations}, \n author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},\n year={2020},\n eprint={2006.07510},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "### Contributions\n\nThanks to @jeromeku for adding this dataset." ]
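The HasPart record above gives the RoBERTa input format with [ARG1-B]/[ARG1-E] and [ARG2-B]/[ARG2-E] boundary tokens. A minimal sketch of producing that marked-up input, assuming character-offset argument spans (the card does not specify how spans are represented):

```python
def mark_haspart_args(sentence, arg1_span, arg2_span):
    """Wrap the two candidate hasPart arguments in the boundary tokens
    described in the HasPart card. Assumes the spans do not overlap."""
    # Insert markers from right to left so earlier offsets stay valid.
    spans = sorted(
        [(arg1_span, "ARG1"), (arg2_span, "ARG2")],
        key=lambda item: item[0][0],
        reverse=True,
    )
    for (start, end), name in spans:
        sentence = (
            sentence[:start]
            + f"[{name}-B]" + sentence[start:end] + f"[{name}-E]"
            + sentence[end:]
        )
    return "[CLS] " + sentence

text = "Some pond snails have gills to breathe in water."
print(mark_haspart_args(text, (0, 16), (22, 27)))
# [CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water.
```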
6589df66d83353e29210d24eaacc8d726535ce60
# Dataset Card for HateOffensive ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage** : https://arxiv.org/abs/1905.12516 - **Repository** : https://github.com/t-davidson/hate-speech-and-offensive-language - **Paper** : https://arxiv.org/abs/1905.12516 - **Leaderboard** : - **Point of Contact** : trd54 at cornell dot edu ### Dataset Summary ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English (`en`) ## Dataset Structure ### Data Instances ``` { "count": 3, "hate_speech_annotation": 0, "offensive_language_annotation": 0, "neither_annotation": 3, "label": 2, # "neither" "tweet": "!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. 
&amp; as a man you should always take the trash out...") } ``` ### Data Fields count: (Integer) number of users who coded each tweet (min is 3; sometimes more users coded a tweet when judgments were determined to be unreliable), hate_speech_annotation: (Integer) number of users who judged the tweet to be hate speech, offensive_language_annotation: (Integer) number of users who judged the tweet to be offensive, neither_annotation: (Integer) number of users who judged the tweet to be neither offensive nor non-offensive, label: (Class Label) integer class label for majority of CF users (0: 'hate-speech', 1: 'offensive-language' or 2: 'neither'), tweet: (string) ### Data Splits This dataset is not split; only the train split is available. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information Usernames are not anonymized in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information MIT License ### Citation Information @inproceedings{hateoffensive, title = {Automated Hate Speech Detection and the Problem of Offensive Language}, author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media}, series = {ICWSM '17}, year = {2017}, location = {Montreal, Canada}, pages = {512-515} } ### Contributions Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789) for adding this dataset.
hate_offensive
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "hate-speech-detection", "arxiv:1905.12516", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "hate-speech-and-offensive-language", "pretty_name": "HateOffensive", "tags": ["hate-speech-detection"], "dataset_info": {"features": [{"name": "total_annotation_count", "dtype": "int32"}, {"name": "hate_speech_annotations", "dtype": "int32"}, {"name": "offensive_language_annotations", "dtype": "int32"}, {"name": "neither_annotations", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "hate-speech", "1": "offensive-language", "2": "neither"}}}}, {"name": "tweet", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2811298, "num_examples": 24783}], "download_size": 2546446, "dataset_size": 2811298}}
2024-01-18T11:04:40+00:00
[ "1905.12516" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #hate-speech-detection #arxiv-1905.12516 #region-us
# Dataset Card for HateOffensive ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage : URL - Repository : URL - Paper : URL - Leaderboard : - Point of Contact : trd54 at cornell dot edu ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages English ('en') ## Dataset Structure ### Data Instances ### Data Fields count: (Integer) number of users who coded each tweet (min is 3; sometimes more users coded a tweet when judgments were determined to be unreliable), hate_speech_annotation: (Integer) number of users who judged the tweet to be hate speech, offensive_language_annotation: (Integer) number of users who judged the tweet to be offensive, neither_annotation: (Integer) number of users who judged the tweet to be neither offensive nor non-offensive, label: (Class Label) integer class label for majority of CF users (0: 'hate-speech', 1: 'offensive-language' or 2: 'neither'), tweet: (string) ### Data Splits This dataset is not split; only the train split is available. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Usernames are not anonymized in the dataset.
## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information MIT License @inproceedings{hateoffensive, title = {Automated Hate Speech Detection and the Problem of Offensive Language}, author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media}, series = {ICWSM '17}, year = {2017}, location = {Montreal, Canada}, pages = {512-515} } ### Contributions Thanks to @MisbahKhan789 for adding this dataset.
[ "# Dataset Card for HateOffensive", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage : URL \n- Repository : URL\n- Paper : URL \n- Leaderboard : \n- Point of Contact : trd54 at cornell dot edu", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages\nEnglish ('en')", "## Dataset Structure", "### Data Instances", "### Data Fields\n\ncount: (Integer) number of users who coded each tweet (min is 3; sometimes more users coded a tweet when judgments were determined to be unreliable),\nhate_speech_annotation: (Integer) number of users who judged the tweet to be hate speech,\noffensive_language_annotation: (Integer) number of users who judged the tweet to be offensive,\nneither_annotation: (Integer) number of users who judged the tweet to be neither offensive nor non-offensive,\nlabel: (Class Label) integer class label for majority of CF users (0: 'hate-speech', 1: 'offensive-language' or 2: 'neither'),\ntweet: (string)", "### Data Splits\nThis dataset is not split; only the train split is available.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\nUsernames are not anonymized in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases",
"### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nMIT License\n\n\n@inproceedings{hateoffensive,\n title = {Automated Hate Speech Detection and the Problem of Offensive Language},\n author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, \n booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media},\n series = {ICWSM '17},\n year = {2017},\n location = {Montreal, Canada},\n pages = {512-515}\n }", "### Contributions\n\nThanks to @MisbahKhan789 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #hate-speech-detection #arxiv-1905.12516 #region-us \n", "# Dataset Card for HateOffensive", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage : URL \n- Repository : URL\n- Paper : URL \n- Leaderboard : \n- Point of Contact : trd54 at cornell dot edu", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages\nEnglish ('en')", "## Dataset Structure", "### Data Instances", "### Data Fields\n\ncount: (Integer) number of users who coded each tweet (min is 3, sometimes more users coded a tweet when judgments were determined to be unreliable,\nhate_speech_annotation: (Integer) number of users who judged the tweet to be hate speech,\noffensive_language_annotation: (Integer) number of users who judged the tweet to be offensive,\nneither_annotation: (Integer) number of users who judged the tweet to be neither offensive nor non-offensive,\nlabel: (Class Label) integer class label for majority of CF users (0: 'hate-speech', 1: 'offensive-language' or 2: 'neither'),\ntweet: (string)", "### Data Splits\nThis dataset is not splitted, only the train split is available.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and 
Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\nUsernames are not anonymized in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nMIT License\n\n\n@inproceedings{hateoffensive,\n title = {Automated Hate Speech Detection and the Problem of Offensive Language},\n author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, \n booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media},\n series = {ICWSM '17},\n year = {2017},\n location = {Montreal, Canada},\n pages = {512-515}\n }", "### Contributions\n\nThanks to @MisbahKhan789 for adding this dataset." ]
34e1d3d74774f7470a4105efb67c535c0778d892
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/Vicomtech/hate-speech-dataset - **Repository:** https://github.com/Vicomtech/hate-speech-dataset - **Paper:** https://www.aclweb.org/anthology/W18-51.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary These files contain text extracted from Stormfront, a white supremacist forum. A random set of forum posts has been sampled from several subforums and split into sentences. Those sentences have been manually labelled as containing hate speech or not, according to certain annotation guidelines. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - text: the provided sentence - user_id: information to make it possible to re-build the conversations these sentences belong to - subforum_id: information to make it possible to re-build the conversations these sentences belong to - num_contexts: number of previous posts the annotator had to read before making a decision over the category of the sentence - label: hate, noHate, relation (sentence in the post doesn't contain hate speech on its own, but a combination of several sentences does) or idk/skip (sentences that are not written in English or that don't contain information as to be classified into hate or noHate) ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{gibert2018hate, title = "{Hate Speech Dataset from a White Supremacy Forum}", author = "de Gibert, Ona and Perez, Naiara and Garc{\'\i}a-Pablos, Aitor and Cuadros, Montse", booktitle = "Proceedings of the 2nd Workshop on Abusive Language Online ({ALW}2)", month = oct, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W18-5102", doi = "10.18653/v1/W18-5102", pages = "11--20", } ``` ### Contributions Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
hate_speech18
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "paperswithcode_id": "hate-speech", "pretty_name": "Hate Speech", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "user_id", "dtype": "int64"}, {"name": "subforum_id", "dtype": "int64"}, {"name": "num_contexts", "dtype": "int64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "noHate", "1": "hate", "2": "idk/skip", "3": "relation"}}}}], "splits": [{"name": "train", "num_bytes": 1375340, "num_examples": 10944}], "download_size": 3664530, "dataset_size": 1375340}, "train-eval-index": [{"config": "default", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train"}, "col_mapping": {"text": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2024-01-18T11:04:44+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-intent-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary These files contain text extracted from Stormfront, a white supremacist forum. A random set of forum posts has been sampled from several subforums and split into sentences. Those sentences have been manually labelled as containing hate speech or not, according to certain annotation guidelines. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - text: the provided sentence - user_id: information to make it possible to re-build the conversations these sentences belong to - subforum_id: information to make it possible to re-build the conversations these sentences belong to - num_contexts: number of previous posts the annotator had to read before making a decision over the category of the sentence - label: hate, noHate, relation (sentence in the post doesn't contain hate speech on its own, but a combination of several sentences does) or idk/skip (sentences that are not written in English or that don't contain information as to be classified into hate or noHate) ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @czabo for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThese files contain text extracted from Stormfront, a white supremacist forum. A random set of forums posts have been sampled from \nseveral subforums and split into sentences. Those sentences have been manually labelled as containing hate speech or not, according \nto certain annotation guidelines.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- text: the provided sentence\n- user_id: information to make it possible to re-build the conversations these sentences belong to\n- subforum_id: information to make it possible to re-build the conversations these sentences belong to\n- num_contexts: number of previous posts the annotator had to read before making a decision over the category of the sentence\n- label: hate, noHate, relation (sentence in the post doesn't contain hate speech on their own, but combination of serveral sentences does) \n or idk/skip (sentences that are not written in English or that don't contain information as to be classified into hate or noHate)", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### 
Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @czabo for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-intent-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThese files contain text extracted from Stormfront, a white supremacist forum. A random set of forums posts have been sampled from \nseveral subforums and split into sentences. 
Those sentences have been manually labelled as containing hate speech or not, according \nto certain annotation guidelines.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- text: the provided sentence\n- user_id: information to make it possible to re-build the conversations these sentences belong to\n- subforum_id: information to make it possible to re-build the conversations these sentences belong to\n- num_contexts: number of previous posts the annotator had to read before making a decision over the category of the sentence\n- label: hate, noHate, relation (sentence in the post doesn't contain hate speech on their own, but combination of serveral sentences does) \n or idk/skip (sentences that are not written in English or that don't contain information as to be classified into hate or noHate)", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @czabo for adding this dataset." ]
1994e9bb7f3ec07518e3f0d9e870cb293e234686
# Dataset Card for Hate Speech in Filipino ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Hate Speech Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Repository:** [Hate Speech Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Paper:** [PCJ paper](https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019) - **Leaderboard:** - **Point of Contact:** [Jan Christian Cruz](mailto:[email protected]) ### Dataset Summary Contains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular ## Dataset Structure ### Data Instances Sample data: ``` { "text": "Taas ni Mar Roxas ah. KULTONG DILAW NGA NAMAN", "label": 1 } ``` ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale This study seeks to contribute to the filling of this gap through the development of a model that can automate hate speech detection and classification in Philippine election-related tweets. The role of the microblogging site Twitter as a platform for the expression of support and hate during the 2016 Philippine presidential election has been supported in news reports and systematic studies. Thus, the particular question addressed in this paper is: Can existing techniques in language processing and machine learning be applied to detect hate speech in the Philippine election context? ### Source Data #### Initial Data Collection and Normalization The dataset used in this study was a subset of the corpus 1,696,613 tweets crawled by Andrade et al. and posted from November 2015 to May 2016 during the campaign period for the Philippine presidential election. They were culled based on the presence of candidate names (e.g., Binay, Duterte, Poe, Roxas, and Santiago) and election-related hashtags (e.g., #Halalan2016, #Eleksyon2016, and #PiliPinas2016). Data preprocessing was performed to prepare the tweets for feature extraction and classification. It consisted of the following steps: data de-identification, uniform resource locator (URL) removal, special character processing, normalization, hashtag processing, and tokenization. [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [Jan Christian Cruz](mailto:[email protected]) ### Licensing Information [More Information Needed] ### Citation Information @article{Cabasag-2019-hate-speech, title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.}, author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng}, journal={Philippine Computing Journal}, volume={XIV}, number={1}, month={August}, year={2019} } ### Contributions Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
hate_speech_filipino
[ "task_categories:text-classification", "task_ids:sentiment-analysis", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-twitter-data-philippine-election", "language:tl", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["tl"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-twitter-data-philippine-election"], "task_categories": ["text-classification"], "task_ids": ["sentiment-analysis"], "pretty_name": "Hate Speech in Filipino", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 995919, "num_examples": 10000}, {"name": "test", "num_bytes": 995919, "num_examples": 10000}, {"name": "validation", "num_bytes": 424365, "num_examples": 4232}], "download_size": 822927, "dataset_size": 2416203}}
2024-01-18T11:04:45+00:00
[]
[ "tl" ]
TAGS #task_categories-text-classification #task_ids-sentiment-analysis #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-twitter-data-philippine-election #language-Tagalog #license-unknown #region-us
# Dataset Card for Hate Speech in Filipino ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Hate Speech Dataset in Filipino homepage - Repository: Hate Speech Dataset in Filipino homepage - Paper: PCJ paper - Leaderboard: - Point of Contact: Jan Christian Cruz ### Dataset Summary Contains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections. ### Supported Tasks and Leaderboards ### Languages The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular ## Dataset Structure ### Data Instances Sample data: ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale This study seeks to contribute to the filling of this gap through the development of a model that can automate hate speech detection and classification in Philippine election-related tweets. The role of the microblogging site Twitter as a platform for the expression of support and hate during the 2016 Philippine presidential election has been supported in news reports and systematic studies. Thus, the particular question addressed in this paper is: Can existing techniques in language processing and machine learning be applied to detect hate speech in the Philippine election context? 
### Source Data #### Initial Data Collection and Normalization The dataset used in this study was a subset of the corpus 1,696,613 tweets crawled by Andrade et al. and posted from November 2015 to May 2016 during the campaign period for the Philippine presidential election. They were culled based on the presence of candidate names (e.g., Binay, Duterte, Poe, Roxas, and Santiago) and election-related hashtags (e.g., #Halalan2016, #Eleksyon2016, and #PiliPinas2016). Data preprocessing was performed to prepare the tweets for feature extraction and classification. It consisted of the following steps: data de-identification, uniform resource locator (URL) removal, special character processing, normalization, hashtag processing, and tokenization. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Jan Christian Cruz ### Licensing Information @article{Cabasag-2019-hate-speech, title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.}, author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng}, journal={Philippine Computing Journal}, volume={XIV}, number={1}, month={August}, year={2019} } ### Contributions Thanks to @anaerobeth for adding this dataset.
[ "# Dataset Card for Hate Speech in Filipino", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Hate Speech Dataset in Filipino homepage\n- Repository: Hate Speech Dataset in Filipino homepage\n- Paper: PCJ paper\n- Leaderboard:\n- Point of Contact: Jan Christian Cruz", "### Dataset Summary\nContains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular", "## Dataset Structure", "### Data Instances\n\nSample data:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale\n\nThis study seeks to contribute to the filling of this gap through the development of a model that can automate hate speech detection and classification in Philippine election-related tweets. The role of the microblogging site Twitter as a platform for the expression of support and hate during the 2016 Philippine presidential election has been supported in news reports and systematic studies. 
Thus, the particular question addressed in this paper is: Can existing techniques in language processing and machine learning be applied to detect hate speech in the Philippine election context?", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset used in this study was a subset of the corpus 1,696,613 tweets crawled by Andrade et al. and posted from November 2015 to May 2016 during the campaign period for the Philippine presidential election. They were culled based on the presence of candidate names (e.g., Binay, Duterte, Poe, Roxas, and Santiago) and election-related hashtags (e.g., #Halalan2016, #Eleksyon2016, and #PiliPinas2016).\n\nData preprocessing was performed to prepare the tweets for feature extraction and classification. It consisted of the following steps: data de-identification, uniform resource locator (URL) removal, special character processing, normalization, hashtag processing, and tokenization.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nJan Christian Cruz", "### Licensing Information\n\n\n\n\n\n@article{Cabasag-2019-hate-speech,\n title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.},\n author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng},\n journal={Philippine Computing Journal},\n volume={XIV},\n number={1},\n month={August},\n year={2019}\n}", "### Contributions\n\nThanks to @anaerobeth for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-analysis #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-twitter-data-philippine-election #language-Tagalog #license-unknown #region-us \n", "# Dataset Card for Hate Speech in Filipino", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Hate Speech Dataset in Filipino homepage\n- Repository: Hate Speech Dataset in Filipino homepage\n- Paper: PCJ paper\n- Leaderboard:\n- Point of Contact: Jan Christian Cruz", "### Dataset Summary\nContains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular", "## Dataset Structure", "### Data Instances\n\nSample data:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale\n\nThis study seeks to contribute to the filling of this gap through the development of a model that can automate hate speech detection and classification in Philippine election-related tweets. 
The role of the microblogging site Twitter as a platform for the expression of support and hate during the 2016 Philippine presidential election has been supported in news reports and systematic studies. Thus, the particular question addressed in this paper is: Can existing techniques in language processing and machine learning be applied to detect hate speech in the Philippine election context?", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset used in this study was a subset of the corpus of 1,696,613 tweets crawled by Andrade et al. and posted from November 2015 to May 2016 during the campaign period for the Philippine presidential election. They were culled based on the presence of candidate names (e.g., Binay, Duterte, Poe, Roxas, and Santiago) and election-related hashtags (e.g., #Halalan2016, #Eleksyon2016, and #PiliPinas2016).\n\nData preprocessing was performed to prepare the tweets for feature extraction and classification. It consisted of the following steps: data de-identification, uniform resource locator (URL) removal, special character processing, normalization, hashtag processing, and tokenization.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nJan Christian Cruz", "### Licensing Information\n\n\n\n\n\n@article{Cabasag-2019-hate-speech,\n title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.},\n author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng},\n journal={Philippine Computing Journal},\n volume={XIV},\n number={1},\n month={August},\n year={2019}\n}", "### Contributions\n\nThanks to @anaerobeth for adding this dataset." ]
adc5fb774614827695774f2dbe0ea8122f6a92b4
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/t-davidson/hate-speech-and-offensive-language - **Repository:** https://github.com/t-davidson/hate-speech-and-offensive-language - **Paper:** https://arxiv.org/abs/1703.04009 - **Leaderboard:** - **Point of Contact:** https://docs.google.com/forms/d/e/1FAIpQLSdrPNlfVBlqxun2tivzAtsZaOoPC5YYMocn-xscCgeRakLXHg/viewform?usp=pp_url&entry.1506871634&entry.147453066&entry.1390333885&entry.516829772 ### Dataset Summary An annotated dataset for hate speech and offensive language detection on tweets. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English (`en`) ## Dataset Structure ### Data Instances ``` { "count": 3, "hate_speech_annotation": 0, "offensive_language_annotation": 0, "neither_annotation": 3, "label": 2, # "neither" "tweet": "!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. 
&amp; as a man you should always take the trash out..." } ``` ### Data Fields ``` count: (Integer) number of users who coded each tweet (min is 3; sometimes more users coded a tweet when judgments were determined to be unreliable), hate_speech_annotation: (Integer) number of users who judged the tweet to be hate speech, offensive_language_annotation: (Integer) number of users who judged the tweet to be offensive, neither_annotation: (Integer) number of users who judged the tweet to be neither offensive nor non-offensive, label: (Class Label) class label for majority of CF users (0: 'hate-speech', 1: 'offensive-language', or 2: 'neither'), tweet: (string) ``` ### Data Splits This dataset is not split; only the train split is available. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information Usernames are not anonymized in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information MIT License ### Citation Information @inproceedings{hateoffensive, title = {Automated Hate Speech Detection and the Problem of Offensive Language}, author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media}, series = {ICWSM '17}, year = {2017}, location = {Montreal, Canada}, pages = {512-515} } ### Contributions Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
tdavidson/hate_speech_offensive
[ "task_categories:text-classification", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "hate-speech-detection", "arxiv:1703.04009", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated", "crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "paperswithcode_id": "hate-speech-and-offensive-language", "pretty_name": "Hate Speech and Offensive Language", "tags": ["hate-speech-detection"], "dataset_info": {"features": [{"name": "count", "dtype": "int64"}, {"name": "hate_speech_count", "dtype": "int64"}, {"name": "offensive_language_count", "dtype": "int64"}, {"name": "neither_count", "dtype": "int64"}, {"name": "class", "dtype": {"class_label": {"names": {"0": "hate speech", "1": "offensive language", "2": "neither"}}}}, {"name": "tweet", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3207814, "num_examples": 24783}], "download_size": 1627672, "dataset_size": 3207814}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "train-eval-index": [{"config": "default", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train"}, "col_mapping": {"tweet": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2024-01-04T12:06:17+00:00
[ "1703.04009" ]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-expert-generated #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #hate-speech-detection #arxiv-1703.04009 #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: URL ### Dataset Summary An annotated dataset for hate speech and offensive language detection on tweets. ### Supported Tasks and Leaderboards ### Languages English ('en') ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits This dataset is not split; only the train split is available. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Usernames are not anonymized in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information MIT License @inproceedings{hateoffensive, title = {Automated Hate Speech Detection and the Problem of Offensive Language}, author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media}, series = {ICWSM '17}, year = {2017}, location = {Montreal, Canada}, pages = {512-515} } ### Contributions Thanks to @hugoabonizio for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: URL", "### Dataset Summary\n\nAn annotated dataset for hate speech and offensive language detection on tweets.", "### Supported Tasks and Leaderboards", "### Languages\nEnglish ('en')", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\nThis dataset is not split; only the train split is available.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\nUsernames are not anonymized in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nMIT License\n\n\n@inproceedings{hateoffensive,\n title = {Automated Hate Speech Detection and the Problem of Offensive Language},\n author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, \n booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media},\n series = {ICWSM '17},\n year = {2017},\n location = {Montreal, Canada},\n pages = {512-515}\n }", "### Contributions\n\nThanks to @hugoabonizio for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #hate-speech-detection #arxiv-1703.04009 #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: URL", "### Dataset Summary\n\nAn annotated dataset for hate speech and offensive language detection on tweets.", "### Supported Tasks and Leaderboards", "### Languages\nEnglish ('en')", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\nThis dataset is not split; only the train split is available.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\nUsernames are not anonymized in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nMIT License\n\n\n@inproceedings{hateoffensive,\n title = {Automated Hate Speech Detection and the Problem of Offensive Language},\n author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar}, \n booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media},\n series = {ICWSM '17},\n year = {2017},\n location = {Montreal, Canada},\n pages = {512-515}\n }", "### Contributions\n\nThanks to @hugoabonizio for adding this dataset." ]
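The Data Fields section of the record above describes the class label as the category chosen by the majority of the CrowdFlower coders, given the three per-category counts. A minimal, hypothetical sketch of that relationship (the function name and the tie-breaking toward the first category are assumptions, not part of the dataset's tooling):

```python
# Majority-vote labelling sketch for tdavidson/hate_speech_offensive.
# The label names mirror the ClassLabel in the dataset metadata:
# 0: "hate speech", 1: "offensive language", 2: "neither".
LABELS = ["hate speech", "offensive language", "neither"]

def majority_label(hate_speech_count: int,
                   offensive_language_count: int,
                   neither_count: int) -> str:
    """Return the label backed by the most annotators.

    Ties resolve to the first category with the maximal count; the card
    does not document how real ties were handled, so this is an assumption.
    """
    counts = [hate_speech_count, offensive_language_count, neither_count]
    return LABELS[counts.index(max(counts))]

# The sample instance above: 3 coders, all of whom judged "neither".
print(majority_label(0, 0, 3))  # -> neither
```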
c2ea15ae8f531f96cf734c91db08b7bd60ab1201
# Dataset Card for HateSpeechPl ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://zil.ipipan.waw.pl/HateSpeech - **Repository:** [N/A] - **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf - **Leaderboard:** [N/A] - **Point of Contact:** [Marek Troszyński]([email protected]), [Aleksander Wawer]([email protected]) ### Dataset Summary The dataset was created to analyze the possibility of automating the recognition of hate speech in Polish. It was collected from Polish forums and represents various types and degrees of offensive language expressed towards minorities. The original dataset is provided as an export of MySQL tables, which makes it hard to load. For that reason, it was converted to CSV and put into a GitHub repository.
### Supported Tasks and Leaderboards - `text-classification`: The dataset might be used to perform text classification on different target fields, such as the presence of irony/sarcasm, the minority it describes, or the topic. - `text-scoring`: Sentiment analysis is another task that might be solved on this dataset. ### Languages Polish, collected from public forums, including the HTML formatting of the text. ## Dataset Structure ### Data Instances The dataset consists of three collections, originally provided as separate MySQL tables. They are represented here as three CSV files. ``` { 'id': 1, 'text_id': 121713, 'annotator_id': 1, 'minority_id': 72, 'negative_emotions': false, 'call_to_action': false, 'source_of_knowledge': 2, 'irony_sarcasm': false, 'topic': 18, 'text': ' <font color=\"blue\"> Niemiec</font> mówi co innego', 'rating': 0 } ``` ### Data Fields
- `id`: unique identifier of the entry - `text_id`: text identifier, useful when a single text is rated several times by different annotators - `annotator_id`: identifier of the person who annotated the text - `minority_id`: the internal identifier of the minority described in the text - `negative_emotions`: boolean indicator of the presence of negative emotions in the text - `call_to_action`: boolean indicator set to true if the text calls on the audience to perform an action, typically with negative emotions - `source_of_knowledge`: categorical variable, describing the source of knowledge for the post rating - 0, 1 or 2 (direct, lexical or contextual, but the description of the meaning for different values couldn't be found) - `irony_sarcasm`: boolean indicator of the presence of irony or sarcasm - `topic`: internal identifier of the topic the text is about - `text`: post text content - `rating`: integer value, from 0 to 4 - the higher the value, the more negative the text content is ### Data Splits The dataset was not originally split at all. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data The dataset was collected from public forums. [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset doesn't contain any personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset Automated hate speech recognition is the main beneficial outcome of using the dataset. ### Discussion of Biases The dataset contains negative posts only and may therefore underrepresent the language as a whole. ### Other Known Limitations Dataset provided for research purposes only.
Please check dataset license for additional information. ## Additional Information ### Dataset Curators The dataset was created by Marek Troszyński and Aleksander Wawer, during work done at [IPI PAN](https://www.ipipan.waw.pl/). ### Licensing Information According to [Metashare](http://metashare.nlp.ipipan.waw.pl/metashare/repository/browse/polish-hatespeech-corpus/21b7e2366b0011e284b6000423bfd61cbc7616f601724f09bafc8a62c42d56de/), the dataset is licensed under CC-BY-NC-SA, but the version is not mentioned. ### Citation Information ``` @article{troszynski2017czy, title={Czy komputer rozpozna hejtera? Wykorzystanie uczenia maszynowego (ML) w jako{\'s}ciowej analizie danych}, author={Troszy{\'n}ski, Marek and Wawer, Aleksandra}, journal={Przegl{\k{a}}d Socjologii Jako{\'s}ciowej}, volume={13}, number={2}, pages={62--80}, year={2017}, publisher={Uniwersytet {\L}{\'o}dzki, Wydzia{\l} Ekonomiczno-Socjologiczny, Katedra Socjologii~…} } ``` ### Contributions Thanks to [@kacperlukawski](https://github.com/kacperlukawski) for adding this dataset.
hate_speech_pl
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:cc-by-nc-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pl"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "multi-class-classification", "multi-label-classification", "sentiment-classification", "sentiment-scoring", "topic-classification"], "pretty_name": "HateSpeechPl", "dataset_info": {"features": [{"name": "id", "dtype": "uint16"}, {"name": "text_id", "dtype": "uint32"}, {"name": "annotator_id", "dtype": "uint8"}, {"name": "minority_id", "dtype": "uint8"}, {"name": "negative_emotions", "dtype": "bool"}, {"name": "call_to_action", "dtype": "bool"}, {"name": "source_of_knowledge", "dtype": "uint8"}, {"name": "irony_sarcasm", "dtype": "bool"}, {"name": "topic", "dtype": "uint8"}, {"name": "text", "dtype": "string"}, {"name": "rating", "dtype": "uint8"}], "splits": [{"name": "train", "num_bytes": 3436190, "num_examples": 13887}], "download_size": 3877954, "dataset_size": 3436190}}
2024-01-18T11:04:47+00:00
[]
[ "pl" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-sentiment-classification #task_ids-sentiment-scoring #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-cc-by-nc-sa-3.0 #region-us
# Dataset Card for HateSpeechPl ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: [N/A] - Paper: URL - Leaderboard: [N/A] - Point of Contact: Marek Troszyński, Aleksander Wawer ### Dataset Summary The dataset was created to analyze the possibility of automating the recognition of hate speech in Polish. It was collected from Polish forums and represents various types and degrees of offensive language expressed towards minorities. The original dataset is provided as an export of MySQL tables, which makes it hard to load. For that reason, it was converted to CSV and put into a GitHub repository. ### Supported Tasks and Leaderboards - 'text-classification': The dataset might be used to perform text classification on different target fields, such as the presence of irony/sarcasm, the minority it describes, or the topic. - 'text-scoring': Sentiment analysis is another task that might be solved on this dataset. ### Languages Polish, collected from public forums, including the HTML formatting of the text. ## Dataset Structure ### Data Instances The dataset consists of three collections, originally provided as separate MySQL tables. They are represented here as three CSV files. ### Data Fields
- 'id': unique identifier of the entry - 'text_id': text identifier, useful when a single text is rated several times by different annotators - 'annotator_id': identifier of the person who annotated the text - 'minority_id': the internal identifier of the minority described in the text - 'negative_emotions': boolean indicator of the presence of negative emotions in the text - 'call_to_action': boolean indicator set to true if the text calls on the audience to perform an action, typically with negative emotions - 'source_of_knowledge': categorical variable, describing the source of knowledge for the post rating - 0, 1 or 2 (direct, lexical or contextual, but the description of the meaning for different values couldn't be found) - 'irony_sarcasm': boolean indicator of the presence of irony or sarcasm - 'topic': internal identifier of the topic the text is about - 'text': post text content - 'rating': integer value, from 0 to 4 - the higher the value, the more negative the text content is ### Data Splits The dataset was not originally split at all. ## Dataset Creation ### Curation Rationale ### Source Data The dataset was collected from public forums. #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset doesn't contain any personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset Automated hate speech recognition is the main beneficial outcome of using the dataset.
### Discussion of Biases The dataset contains negative posts only and may therefore underrepresent the language as a whole. ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators The dataset was created by Marek Troszyński and Aleksander Wawer, during work done at IPI PAN. ### Licensing Information According to Metashare, the dataset is licensed under CC-BY-NC-SA, but the version is not mentioned. ### Contributions Thanks to @kacperlukawski for adding this dataset.
[ "# Dataset Card for HateSpeechPl", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: [N/A]\n- Paper: URL\n- Leaderboard: [N/A]\n- Point of Contact: Marek Troszyński, Aleksander Wawer", "### Dataset Summary\n\nThe dataset was created to analyze the possibility of automating the recognition of hate speech in Polish. It was collected from Polish forums and represents various types and degrees of offensive language expressed towards minorities.\n\nThe original dataset is provided as an export of MySQL tables, which makes it hard to load. For that reason, it was converted to CSV and put into a GitHub repository.", "### Supported Tasks and Leaderboards\n\n- 'text-classification': The dataset might be used to perform text classification on different target fields, such as the presence of irony/sarcasm, the minority it describes, or the topic.\n- 'text-scoring': Sentiment analysis is another task that might be solved on this dataset.", "### Languages\n\nPolish, collected from public forums, including the HTML formatting of the text.", "## Dataset Structure", "### Data Instances\n\nThe dataset consists of three collections, originally provided as separate MySQL tables. They are represented here as three CSV files.", "### Data Fields\n\n- 'id': unique identifier of the entry\n- 'text_id': text identifier, useful when a single text is rated several times by different annotators\n- 'annotator_id': identifier of the person who annotated the text\n- 'minority_id': the internal identifier of the minority described in the text\n- 'negative_emotions': boolean indicator of the presence of negative emotions in the text\n- 'call_to_action': boolean indicator set to true if the text calls on the audience to perform an action, typically with negative emotions\n- 'source_of_knowledge': categorical variable, describing the source of knowledge for the post rating - 0, 1 or 2 (direct, lexical or contextual, but the description of the meaning for different values couldn't be found)\n- 'irony_sarcasm': boolean indicator of the presence of irony or sarcasm\n- 'topic': internal identifier of the topic the text is about\n- 'text': post text content\n- 'rating': integer value, from 0 to 4 - the higher the value, the more negative the text content is", "### Data Splits\n\nThe dataset was not originally split at all.", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nThe dataset was collected from public forums.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset doesn't contain any personal or sensitive information.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nAutomated hate speech recognition is the main beneficial outcome of using the dataset.", "### Discussion of Biases\n\nThe dataset contains negative posts only and may therefore underrepresent the language as a whole.", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators\n\nThe dataset was created by Marek Troszyński and Aleksander Wawer, during work done at IPI PAN.", "### Licensing Information\n\nAccording to Metashare, the dataset is licensed under CC-BY-NC-SA, but the version is not mentioned.", "### Contributions\n\nThanks to @kacperlukawski for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-sentiment-classification #task_ids-sentiment-scoring #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-cc-by-nc-sa-3.0 #region-us \n", "# Dataset Card for HateSpeechPl", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: [N/A]\n- Paper: URL\n- Leaderboard: [N/A]\n- Point of Contact: Marek Troszyński, Aleksander Wawer", "### Dataset Summary\n\nThe dataset was created to analyze the possibility of automating the recognition of hate speech in Polish. It was collected from the Polish forums and represents various types and degrees of offensive language, expressed towards minorities.\n\nThe original dataset is provided as an export of MySQL tables, what makes it hard to load. Due to that, it was converted to CSV and put to a Github repository.", "### Supported Tasks and Leaderboards\n\n- 'text-classification': The dataset might be used to perform the text classification on different target fields, like the presence of irony/sarcasm, minority it describes or a topic. 
\n- 'text-scoring': The sentiment analysis is another task which might be solved on the dataset.", "### Languages\n\nPolish, collected from public forums, including the HTML formatting of the text.", "## Dataset Structure", "### Data Instances\n\nThe dataset consists of three collections, originally provided as separate MySQL tables. Here they are represented as three CSV files.", "### Data Fields\n\nList and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.\n\n- 'id': unique identifier of the entry\n- 'text_id': text identifier, useful when a single text is rated several times by different annotators\n- 'annotator_id': identifier of the person who annotated the text\n- 'minority_id': the internal identifier of the minority described in the text\n- 'negative_emotions': boolean indicator of the presence of negative emotions in the text\n- 'call_to_action': boolean indicator set to true, if the text calls the audience to perform any action, typically with negative emotions\n- 'source_of_knowledge': categorical variable, describing the source of knowledge for the post rating - 0, 1 or 2 (direct, lexical or contextual, but the description of the meaning for different values couldn't be found)\n- 'irony_sarcasm': boolean indicator of the presence of irony or sarcasm\n- 'topic': internal identifier of the topic the text is about\n- 'text': post text content\n- 'rating': integer value, from 0 to 4 - the higher the value, the more negative the text content is", "### Data Splits\n\nThe dataset was not originally split at all.", "## Dataset 
Creation", "### Curation Rationale", "### Source Data\n\nThe dataset was collected from the public forums.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset doesn't contain any personal or sensitive information.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe automated hate speech recognition is the main beneficial outcome of using the dataset.", "### Discussion of Biases\n\nThe dataset contains negative posts only and due to that might underrepresent the whole language.", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators\n\nThe dataset was created by Marek Troszyński and Aleksander Wawer, during work done at IPI PAN.", "### Licensing Information\n\nAccording to Metashare, the dataset is licensed under CC-BY-NC-SA, but the version is not mentioned.", "### Contributions\n\nThanks to @kacperlukawski for adding this dataset." ]
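The entry layout listed in the HateSpeechPl Data Fields section above can be sketched as a plain-Python record, together with the two value-range constraints the card states (rating 0-4, source_of_knowledge 0-2). This is an illustrative sketch only; the concrete values below are made up, not taken from the dataset:

```python
# Illustrative HateSpeechPl-style entry; field names follow the card's
# "Data Fields" list, values are invented for the example.
entry = {
    "id": 1,
    "text_id": 42,
    "annotator_id": 1,
    "minority_id": 3,
    "negative_emotions": True,
    "call_to_action": False,
    "source_of_knowledge": 1,   # 0, 1 or 2 (direct, lexical or contextual)
    "irony_sarcasm": False,
    "topic": 7,
    "text": "<p>...</p>",       # post content, HTML formatting included
    "rating": 3,                # 0-4; the higher, the more negative
}

def is_valid(e):
    """Check the value ranges stated in the Data Fields section."""
    return 0 <= e["rating"] <= 4 and e["source_of_knowledge"] in (0, 1, 2)

print(is_valid(entry))  # True
```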
b0f431acbf8d3865cb7c7b3effb2a9771a618ebc
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/paulafortuna/Portuguese-Hate-Speech-Dataset - **Repository:** https://github.com/paulafortuna/Portuguese-Hate-Speech-Dataset - **Paper:** https://www.aclweb.org/anthology/W19-3510/ - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Portuguese dataset for hate speech detection composed of 5,668 tweets with binary annotations (i.e. 'hate' vs. 'no-hate'). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{fortuna-etal-2019-hierarchically, title = "A Hierarchically-Labeled {P}ortuguese Hate Speech Dataset", author = "Fortuna, Paula and Rocha da Silva, Jo{\~a}o and Soler-Company, Juan and Wanner, Leo and Nunes, S{\'e}rgio", editor = "Roberts, Sarah T. and Tetreault, Joel and Prabhakaran, Vinodkumar and Waseem, Zeerak", booktitle = "Proceedings of the Third Workshop on Abusive Language Online", month = aug, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W19-3510", doi = "10.18653/v1/W19-3510", pages = "94--104", } ``` ### Contributions Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
hate_speech_portuguese
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:unknown", "hate-speech-detection", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "HateSpeechPortuguese", "tags": ["hate-speech-detection"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no-hate", "1": "hate"}}}}, {"name": "hatespeech_G1", "dtype": "string"}, {"name": "annotator_G1", "dtype": "string"}, {"name": "hatespeech_G2", "dtype": "string"}, {"name": "annotator_G2", "dtype": "string"}, {"name": "hatespeech_G3", "dtype": "string"}, {"name": "annotator_G3", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 826130, "num_examples": 5670}], "download_size": 763846, "dataset_size": 826130}}
2024-01-18T11:04:58+00:00
[]
[ "pt" ]
TAGS #task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-unknown #hate-speech-detection #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary Portuguese dataset for hate speech detection composed of 5,668 tweets with binary annotations (i.e. 'hate' vs. 'no-hate'). ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @hugoabonizio for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nPortuguese dataset for hate speech detection composed of 5,668 tweets with binary annotations (i.e. 'hate' vs. 'no-hate').", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @hugoabonizio for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-unknown #hate-speech-detection #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nPortuguese dataset for hate speech detection composed of 5,668 tweets with binary annotations (i.e. 'hate' vs. 'no-hate').", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @hugoabonizio for adding this dataset." ]
f6a8b7de6ec31b30919d2f48cf685400ce61185f
# Dataset Card for hatexplain ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/punyajoy/HateXplain/ - **Paper:** https://arxiv.org/abs/2012.10289 - **Leaderboard:** [Needs More Information] - **Point of Contact:** Punyajoy Saha ([email protected]) ### Dataset Summary Hatexplain is the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in the dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which their labeling decision (as hate, offensive or normal) is based. WARNING: This dataset contains content that is offensive and/or hateful in nature. 
### Supported Tasks and Leaderboards [Needs More Information] ### Languages The language supported is English. ## Dataset Structure ### Data Instances Sample Entry: ``` { "id": "24198545_gab", "annotators": [ { "label": 0, # hatespeech "annotator_id": 4, "target": ["African"] }, { "label": 0, # hatespeech "annotator_id": 3, "target": ["African"] }, { "label": 2, # offensive "annotator_id": 5, "target": ["African"] } ], "rationales":[ [0,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] ], "post_tokens": ["and","this","is","why","i","end","up","with","nigger","trainee","doctors","who","can","not","speak","properly","lack","basic","knowledge","of","biology","it","truly","scary","if","the","public","only","knew"] } ``` ### Data Fields :small_blue_diamond:post_id : Unique id for each post<br/> :small_blue_diamond:annotators : The list of annotations from each annotator<br/> :small_blue_diamond:annotators[label] : The label assigned by the annotator to this post. Possible values: `hatespeech` (0), `normal` (1) or `offensive` (2)<br/> :small_blue_diamond:annotators[annotator_id] : The unique Id assigned to each annotator<br/> :small_blue_diamond:annotators[target] : A list of target communities present in the post<br/> :small_blue_diamond:rationales : A list of rationales selected by annotators. Each rationale represents a list with values 0 or 1. A value of 1 means that the token is part of the rationale selected by the annotator. 
To get the particular token, we can use the same index position in "post_tokens"<br/> :small_blue_diamond:post_tokens : The list of tokens representing the post which was annotated<br/> ### Data Splits [Post_id_divisions](https://github.com/hate-alert/HateXplain/blob/master/Data/post_id_divisions.json) has a dictionary having train, valid and test post ids that are used to divide the dataset into train, val and test set in the ratio of 8:1:1. ## Dataset Creation ### Curation Rationale The existing hate speech datasets do not provide human rationale which could justify the human reasoning behind their annotation process. This dataset allows researchers to move a step in this direction. The dataset provides token-level annotations for the annotation decision. ### Source Data We collected the data from Twitter and Gab. #### Initial Data Collection and Normalization We combined the lexicon set provided by [Davidson 2017](https://arxiv.org/abs/1703.04009), [Ousidhoum 2019](https://arxiv.org/abs/1908.11049), and [Mathew 2019](https://arxiv.org/abs/1812.01693) to generate a single lexicon. We do not consider reposts and remove duplicates. We also ensure that the posts do not contain links, pictures, or videos as they indicate additional information that might not be available to the annotators. However, we do not exclude the emojis from the text as they might carry important information for the hate and offensive speech labeling task. #### Who are the source language producers? The dataset is human generated using Amazon Mechanical Turk (AMT). ### Annotations #### Annotation process Each post in our dataset contains three types of annotations. First, whether the text is hate speech, offensive speech, or normal. Second, the target communities in the text. 
Third, if the text is considered hate speech or offensive by the majority of the annotators, we further ask the annotators to annotate parts of the text, which are words or phrases that could be a potential reason for the given annotation. Before starting the annotation task, workers are explicitly warned that the annotation task displays some hateful or offensive content. We prepare instructions for workers that clearly explain the goal of the annotation task, how to annotate spans and also include a definition for each category. We provide multiple examples with classification, target community and span annotations to help the annotators understand the task. #### Who are the annotators? To ensure a high-quality dataset, we use built-in MTurk qualification requirements, namely the HIT Approval Rate (95%) for all Requesters’ HITs and the Number of HITs Approved (5,000) requirements. Pilot annotation: In the pilot task, each annotator was provided with 20 posts and they were required to do the hate/offensive speech classification as well as identify the target community (if any). In order to have a clear understanding of the task, they were provided with multiple examples along with explanations for the labelling process. The main purpose of the pilot task was to shortlist those annotators who were able to do the classification accurately. We also collected feedback from annotators to improve the main annotation task. A total of 621 annotators took part in the pilot task. Out of these, 253 were selected for the main task. Main annotation: After the pilot annotation, once we had ascertained the quality of the annotators, we started with the main annotation task. In each round, we would select a batch of around 200 posts. Each post was annotated by three annotators, then majority voting was applied to decide the final label. The final dataset is composed of 9,055 posts from Twitter and 11,093 posts from Gab. 
Krippendorff's alpha for the inter-annotator agreement is 0.46, which is higher than for other hate speech datasets. ### Personal and Sensitive Information The posts were anonymized by replacing the usernames with the <user> token. ## Considerations for Using the Data ### Social Impact of Dataset The dataset could prove beneficial to develop models which are more explainable and less biased. ### Discussion of Biases [Needs More Information] ### Other Known Limitations The dataset has some limitations. First is the lack of external context. The dataset lacks any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Another issue is the focus on the English language and the lack of multilingual hate speech. ## Additional Information ### Dataset Curators Binny Mathew - IIT Kharagpur, India Punyajoy Saha - IIT Kharagpur, India Seid Muhie Yimam - Universität Hamburg, Germany Chris Biemann - Universität Hamburg, Germany Pawan Goyal - IIT Kharagpur, India Animesh Mukherjee - IIT Kharagpur, India ### Licensing Information MIT License ### Citation Information ```bibtex @article{mathew2020hatexplain, title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection}, author={Binny Mathew and Punyajoy Saha and Seid Muhie Yimam and Chris Biemann and Pawan Goyal and Animesh Mukherjee}, year={2021}, conference={AAAI conference on artificial intelligence} } ``` ### Contributions Thanks to [@kushal2000](https://github.com/kushal2000) for adding this dataset.
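Two mechanics this card describes — majority voting over the three annotator labels, and rationale masks that align with `post_tokens` by index — can be sketched in plain Python. This is an illustrative sketch, not code from the HateXplain repository; the `sample` entry below is abridged from the one shown under Data Instances:

```python
from collections import Counter

# Abridged HateXplain-style entry (field names follow the card's Data Fields
# section; tokens and masks shortened for readability).
sample = {
    "annotators": [
        {"label": 0, "annotator_id": 4, "target": ["African"]},
        {"label": 0, "annotator_id": 3, "target": ["African"]},
        {"label": 2, "annotator_id": 5, "target": ["African"]},
    ],
    "post_tokens": ["and", "this", "is", "why", "i", "end", "up", "with"],
    "rationales": [
        [0, 0, 0, 0, 1, 1, 1, 1],
        [0, 0, 0, 0, 1, 1, 0, 0],
    ],
}

def majority_label(entry):
    """Majority vote over annotator labels (0=hatespeech, 1=normal, 2=offensive)."""
    votes = Counter(a["label"] for a in entry["annotators"])
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else None  # None when all annotators disagree

def rationale_tokens(entry):
    """Per annotator: the tokens whose mask value is 1 (same index as post_tokens)."""
    return [
        [tok for tok, flag in zip(entry["post_tokens"], mask) if flag]
        for mask in entry["rationales"]
    ]

print(majority_label(sample))       # 0, i.e. hatespeech
print(rationale_tokens(sample)[0])  # ['i', 'end', 'up', 'with']
```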
hatexplain
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "hate-speech-detection", "arxiv:2012.10289", "arxiv:1703.04009", "arxiv:1908.11049", "arxiv:1812.01693", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "paperswithcode_id": "hatexplain", "pretty_name": "hatexplain", "tags": ["hate-speech-detection"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "annotators", "sequence": [{"name": "label", "dtype": {"class_label": {"names": {"0": "hatespeech", "1": "normal", "2": "offensive"}}}}, {"name": "annotator_id", "dtype": "int32"}, {"name": "target", "sequence": "string"}]}, {"name": "rationales", "sequence": {"sequence": "int32"}}, {"name": "post_tokens", "sequence": "string"}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 7114730, "num_examples": 15383}, {"name": "validation", "num_bytes": 884940, "num_examples": 1922}, {"name": "test", "num_bytes": 884784, "num_examples": 1924}], "download_size": 12848091, "dataset_size": 8884454}}
2024-01-18T11:05:02+00:00
[ "2012.10289", "1703.04009", "1908.11049", "1812.01693" ]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #hate-speech-detection #arxiv-2012.10289 #arxiv-1703.04009 #arxiv-1908.11049 #arxiv-1812.01693 #region-us
# Dataset Card for hatexplain ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: Punyajoy Saha (punyajoys@URL) ### Dataset Summary Hatexplain is the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in the dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which their labeling decision (as hate, offensive or normal) is based. WARNING: This dataset contains content that are offensive and/or hateful in nature. ### Supported Tasks and Leaderboards ### Languages The language supported is English. ## Dataset Structure ### Data Instances Sample Entry: ### Data Fields :small_blue_diamond:post_id : Unique id for each post<br/> :small_blue_diamond:annotators : The list of annotations from each annotator<br/> :small_blue_diamond:annotators[label] : The label assigned by the annotator to this post. Possible values: 'hatespeech' (0), 'normal' (1) or 'offensive' (2)<br/> :small_blue_diamond:annotators[annotator_id] : The unique Id assigned to each annotator<br/> :small_blue_diamond:annotators[target] : A list of target community present in the post<br/> :small_blue_diamond:rationales : A list of rationales selected by annotators. 
Each rationale represents a list with values 0 or 1. A value of 1 means that the token is part of the rationale selected by the annotator. To get the particular token, we can use the same index position in "post_tokens"<br/> :small_blue_diamond:post_tokens : The list of tokens representing the post which was annotated<br/> ### Data Splits Post_id_divisions has a dictionary having train, valid and test post ids that are used to divide the dataset into train, val and test set in the ratio of 8:1:1. ## Dataset Creation ### Curation Rationale The existing hate speech datasets do not provide human rationale which could justify the human reasoning behind their annotation process. This dataset allows researchers to move a step in this direction. The dataset provides token-level annotations for the annotation decision. ### Source Data We collected the data from Twitter and Gab. #### Initial Data Collection and Normalization We combined the lexicon set provided by Davidson 2017, Ousidhoum 2019, and Mathew 2019 to generate a single lexicon. We do not consider reposts and remove duplicates. We also ensure that the posts do not contain links, pictures, or videos as they indicate additional information that might not be available to the annotators. However, we do not exclude the emojis from the text as they might carry important information for the hate and offensive speech labeling task. #### Who are the source language producers? The dataset is human generated using Amazon Mechanical Turk (AMT). ### Annotations #### Annotation process Each post in our dataset contains three types of annotations. First, whether the text is hate speech, offensive speech, or normal. Second, the target communities in the text. Third, if the text is considered hate speech or offensive by the majority of the annotators, we further ask the annotators to annotate parts of the text, which are words or phrases that could be a potential reason for the given annotation. 
Before starting the annotation task, workers are explicitly warned that the annotation task displays some hateful or offensive content. We prepare instructions for workers that clearly explain the goal of the annotation task, how to annotate spans and also include a definition for each category. We provide multiple examples with classification, target community and span annotations to help the annotators understand the task. #### Who are the annotators? To ensure a high-quality dataset, we use built-in MTurk qualification requirements, namely the HIT Approval Rate (95%) for all Requesters’ HITs and the Number of HITs Approved (5,000) requirements. Pilot annotation: In the pilot task, each annotator was provided with 20 posts and they were required to do the hate/offensive speech classification as well as identify the target community (if any). In order to have a clear understanding of the task, they were provided with multiple examples along with explanations for the labelling process. The main purpose of the pilot task was to shortlist those annotators who were able to do the classification accurately. We also collected feedback from annotators to improve the main annotation task. A total of 621 annotators took part in the pilot task. Out of these, 253 were selected for the main task. Main annotation: After the pilot annotation, once we had ascertained the quality of the annotators, we started with the main annotation task. In each round, we would select a batch of around 200 posts. Each post was annotated by three annotators, then majority voting was applied to decide the final label. The final dataset is composed of 9,055 posts from Twitter and 11,093 posts from Gab. Krippendorff's alpha for the inter-annotator agreement is 0.46, which is higher than for other hate speech datasets. ### Personal and Sensitive Information The posts were anonymized by replacing the usernames with the <user> token. 
## Considerations for Using the Data ### Social Impact of Dataset The dataset could prove beneficial to develop models which are more explainable and less biased. ### Discussion of Biases ### Other Known Limitations The dataset has some limitations. First is the lack of external context. The dataset lacks any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Another issue is the focus on the English language and the lack of multilingual hate speech. ## Additional Information ### Dataset Curators Binny Mathew - IIT Kharagpur, India Punyajoy Saha - IIT Kharagpur, India Seid Muhie Yimam - Universität Hamburg, Germany Chris Biemann - Universität Hamburg, Germany Pawan Goyal - IIT Kharagpur, India Animesh Mukherjee - IIT Kharagpur, India ### Licensing Information MIT License '''bibtex @article{mathew2020hatexplain, title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection}, author={Binny Mathew and Punyajoy Saha and Seid Muhie Yimam and Chris Biemann and Pawan Goyal and Animesh Mukherjee}, year={2021}, conference={AAAI conference on artificial intelligence} } ''' ### Contributions Thanks to @kushal2000 for adding this dataset.
[ "# Dataset Card for hatexplain", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Punyajoy Saha (punyajoys@URL)", "### Dataset Summary\n\nHatexplain is the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in the dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which their labeling decision (as hate, offensive or normal) is based.\n\nWARNING: This dataset contains content that are offensive and/or hateful in nature.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is English.", "## Dataset Structure", "### Data Instances\n\nSample Entry:", "### Data Fields\n\n:small_blue_diamond:post_id : Unique id for each post<br/>\n:small_blue_diamond:annotators : The list of annotations from each annotator<br/>\n:small_blue_diamond:annotators[label] : The label assigned by the annotator to this post. 
Possible values: 'hatespeech' (0), 'normal' (1) or 'offensive' (2)<br/>\n:small_blue_diamond:annotators[annotator_id] : The unique Id assigned to each annotator<br/>\n:small_blue_diamond:annotators[target] : A list of target community present in the post<br/>\n:small_blue_diamond:rationales : A list of rationales selected by annotators. Each rationale represents a list with values 0 or 1. A value of 1 means that the token is part of the rationale selected by the annotator. To get the particular token, we can use the same index position in \"post_tokens\"<br/>\n:small_blue_diamond:post_tokens : The list of tokens representing the post which was annotated<br/>", "### Data Splits\n\nPost_id_divisions has a dictionary having train, valid and test post ids that are used to divide the dataset into train, val and test set in the ratio of 8:1:1.", "## Dataset Creation", "### Curation Rationale\n\nThe existing hate speech datasets do not provide human rationale which could justify the human reasoning behind their annotation process. This dataset allows researchers to move a step in this direction. The dataset provides token-level annotations for the annotation decision.", "### Source Data\n\nWe collected the data from Twitter and Gab.", "#### Initial Data Collection and Normalization\n\nWe combined the lexicon set provided by Davidson 2017, Ousidhoum 2019, and Mathew 2019 to generate a single lexicon. We do not consider reposts and remove duplicates. We also ensure that the posts do not contain links, pictures, or videos as they indicate additional information that might not be available to the annotators. However, we do not exclude the emojis from the text as they might carry important information for the hate and offensive speech labeling task.", "#### Who are the source language producers?\n\nThe dataset is human generated using Amazon Mechanical Turk (AMT).", "### Annotations", "#### Annotation process\n\nEach post in our dataset contains three types of annotations. 
First, whether the text is a hate speech, offensive speech, or normal. Second, the target communities in the text. Third, if the text is considered as hate speech, or offensive by majority of the annotators, we further ask the annotators to annotate parts of the text, which are words or phrases that could be a potential reason for the given annotation. \n\nBefore starting the annotation task, workers are explicitly warned that the annotation task displays some hateful or offensive content. We prepare instructions for workers that clearly explain the goal of the annotation task, how to annotate spans and also include a definition for each category. We provide multiple examples with classification, target community and span annotations to help the annotators understand the task.", "#### Who are the annotators?\n\nTo ensure high quality dataset, we use built-in MTurk qualification requirements, namely the HIT Approval Rate (95%) for all Requesters’ HITs and the Number of HITs Approved (5,000) requirements.\n\nPilot annotation: In the pilot task, each annotator was provided with 20 posts and they were required to do the hate/offensive speech classification as well as identify the target community (if any). In order to have a clear understanding of the task, they were provided with multiple examples along with explanations for the labelling process. The main purpose of the pilot task was to shortlist those annotators who were able to do the classification accurately. We also collected feedback from annotators to improve the main annotation task. A total of 621 annotators took part in the pilot task. Out of these, 253 were selected for the main task.\n\n\nMain annotation: After the pilot annotation, once we had ascertained the quality of the annotators, we started with the main annotation task. In each round, we would select a batch of around 200 posts. Each post was annotated by three annotators, then majority voting was applied to decide the final label. 
The final dataset is composed of 9,055 posts from Twitter and 11,093 posts from Gab. The Krippendorff's alpha for the inter-annotator agreement is 0.46 which is higher than other hate speech datasets.", "### Personal and Sensitive Information\n\nThe posts were anonymized by replacing the usernames with <user> token.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset could prove beneficial to develop models which are more explainable and less biased.", "### Discussion of Biases", "### Other Known Limitations\n\nThe dataset has some limitations. First is the lack of external context. The dataset lacks any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Another issue is the focus on English language and lack of multilingual hate speech.", "## Additional Information", "### Dataset Curators\n\nBinny Mathew - IIT Kharagpur, India\nPunyajoy Saha - IIT Kharagpur, India\nSeid Muhie Yimam - Universität Hamburg, Germany\nChris Biemann - Universität Hamburg, Germany\nPawan Goyal - IIT Kharagpur, India\nAnimesh Mukherjee - IIT Kharagpur, India", "### Licensing Information\n\nMIT License\n\n\n\n'''bibtex\n@article{mathew2020hatexplain,\n title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection}, \n author={Binny Mathew and Punyajoy Saha and Seid Muhie Yimam and Chris Biemann and Pawan Goyal and Animesh Mukherjee},\n year={2021},\n conference={AAAI conference on artificial intelligence}\n}", "### Contributions\n\nThanks to @kushal2000 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #hate-speech-detection #arxiv-2012.10289 #arxiv-1703.04009 #arxiv-1908.11049 #arxiv-1812.01693 #region-us \n", "# Dataset Card for hatexplain", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Punyajoy Saha (punyajoys@URL)", "### Dataset Summary\n\nHatexplain is the first benchmark hate speech dataset covering multiple aspects of the issue. 
Each post in the dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which their labeling decision (as hate, offensive or normal) is based.\n\nWARNING: This dataset contains content that is offensive and/or hateful in nature.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is English.", "## Dataset Structure", "### Data Instances\n\nSample Entry:", "### Data Fields\n\n:small_blue_diamond:post_id : Unique id for each post<br/>\n:small_blue_diamond:annotators : The list of annotations from each annotator<br/>\n:small_blue_diamond:annotators[label] : The label assigned by the annotator to this post. Possible values: 'hatespeech' (0), 'normal' (1) or 'offensive' (2)<br/>\n:small_blue_diamond:annotators[annotator_id] : The unique Id assigned to each annotator<br/>\n:small_blue_diamond:annotators[target] : A list of target community present in the post<br/>\n:small_blue_diamond:rationales : A list of rationales selected by annotators. Each rationale represents a list with values 0 or 1. A value of 1 means that the token is part of the rationale selected by the annotator. To get the particular token, we can use the same index position in \"post_tokens\"<br/>\n:small_blue_diamond:post_tokens : The list of tokens representing the post which was annotated<br/>", "### Data Splits\n\nPost_id_divisions has a dictionary having train, valid and test post ids that are used to divide the dataset into train, val and test set in the ratio of 8:1:1.", "## Dataset Creation", "### Curation Rationale\n\nThe existing hate speech datasets do not provide human rationale which could justify the human reasoning behind their annotation process. 
This dataset allows researchers to move a step in this direction. The dataset provides token-level annotations for the annotation decision.", "### Source Data\n\nWe collected the data from Twitter and Gab.", "#### Initial Data Collection and Normalization\n\nWe combined the lexicon set provided by Davidson 2017, Ousidhoum 2019, and Mathew 2019 to generate a single lexicon. We do not consider reposts and remove duplicates. We also ensure that the posts do not contain links, pictures, or videos as they indicate additional information that might not be available to the annotators. However, we do not exclude the emojis from the text as they might carry important information for the hate and offensive speech labeling task.", "#### Who are the source language producers?\n\nThe dataset is human generated using Amazon Mechanical Turk (AMT).", "### Annotations", "#### Annotation process\n\nEach post in our dataset contains three types of annotations. First, whether the text is a hate speech, offensive speech, or normal. Second, the target communities in the text. Third, if the text is considered as hate speech, or offensive by majority of the annotators, we further ask the annotators to annotate parts of the text, which are words or phrases that could be a potential reason for the given annotation. \n\nBefore starting the annotation task, workers are explicitly warned that the annotation task displays some hateful or offensive content. We prepare instructions for workers that clearly explain the goal of the annotation task, how to annotate spans and also include a definition for each category. 
We provide multiple examples with classification, target community and span annotations to help the annotators understand the task.", "#### Who are the annotators?\n\nTo ensure high quality dataset, we use built-in MTurk qualification requirements, namely the HIT Approval Rate (95%) for all Requesters’ HITs and the Number of HITs Approved (5,000) requirements.\n\nPilot annotation: In the pilot task, each annotator was provided with 20 posts and they were required to do the hate/offensive speech classification as well as identify the target community (if any). In order to have a clear understanding of the task, they were provided with multiple examples along with explanations for the labelling process. The main purpose of the pilot task was to shortlist those annotators who were able to do the classification accurately. We also collected feedback from annotators to improve the main annotation task. A total of 621 annotators took part in the pilot task. Out of these, 253 were selected for the main task.\n\n\nMain annotation: After the pilot annotation, once we had ascertained the quality of the annotators, we started with the main annotation task. In each round, we would select a batch of around 200 posts. Each post was annotated by three annotators, then majority voting was applied to decide the final label. The final dataset is composed of 9,055 posts from Twitter and 11,093 posts from Gab. The Krippendorff's alpha for the inter-annotator agreement is 0.46 which is higher than other hate speech datasets.", "### Personal and Sensitive Information\n\nThe posts were anonymized by replacing the usernames with <user> token.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset could prove beneficial to develop models which are more explainable and less biased.", "### Discussion of Biases", "### Other Known Limitations\n\nThe dataset has some limitations. First is the lack of external context. 
The dataset lacks any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Another issue is the focus on English language and lack of multilingual hate speech.", "## Additional Information", "### Dataset Curators\n\nBinny Mathew - IIT Kharagpur, India\nPunyajoy Saha - IIT Kharagpur, India\nSeid Muhie Yimam - Universität Hamburg, Germany\nChris Biemann - Universität Hamburg, Germany\nPawan Goyal - IIT Kharagpur, India\nAnimesh Mukherjee - IIT Kharagpur, India", "### Licensing Information\n\nMIT License\n\n\n\n'''bibtex\n@article{mathew2020hatexplain,\n title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection}, \n author={Binny Mathew and Punyajoy Saha and Seid Muhie Yimam and Chris Biemann and Pawan Goyal and Animesh Mukherjee},\n year={2021},\n conference={AAAI conference on artificial intelligence}\n}", "### Contributions\n\nThanks to @kushal2000 for adding this dataset." ]
a6cbe619a7a18309bbd7a3813033b180a043c1a8
# Dataset Card for Hausa VOA NER Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.aclweb.org/anthology/2020.emnlp-main.204/ - **Repository:** [Hausa VOA NER](https://github.com/uds-lsv/transfer-distant-transformer-african/tree/master/data/hausa_ner) - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.204/ - **Leaderboard:** - **Point of Contact:** [David Adelani](mailto:[email protected]) ### Dataset Summary The Hausa VOA NER is a named entity recognition (NER) dataset for Hausa language based on the [VOA Hausa news](https://www.voahausa.com/) corpus. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Hausa. ## Dataset Structure ### Data Instances A data point consists of sentences separated by empty line and tab-separated tokens and tags. 
{'id': '0', 'ner_tags': [B-PER, 0, 0, B-LOC, 0], 'tokens': ['Trump', 'ya', 'ce', 'Rasha', 'ma'] } ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). (O) is used for tokens not considered part of any named entity. ### Data Splits Training (1,014 sentences), validation (145 sentences) and test split (291 sentences) ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language - Hausa. [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The dataset is based on the news domain and was crawled from [VOA Hausa news](https://www.voahausa.com/). [More Information Needed] #### Who are the source language producers? The dataset was collected from VOA Hausa news. Most of the texts used in creating the Hausa VOA NER are news stories from Nigeria, Niger Republic, United States, and other parts of the world. [More Information Needed] ### Annotations Named entity recognition annotation #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated by Jesujoba Alabi and David Adelani for the paper: [Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages](https://www.aclweb.org/anthology/2020.emnlp-main.204/). 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by students of Saarland University, Saarbrücken, Germany . ### Licensing Information The data is under the [Creative Commons Attribution 4.0 ](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @inproceedings{hedderich-etal-2020-transfer, title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages", author = "Hedderich, Michael A. and Adelani, David and Zhu, Dawei and Alabi, Jesujoba and Markus, Udia and Klakow, Dietrich", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.204", doi = "10.18653/v1/2020.emnlp-main.204", pages = "2580--2591", } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
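As a quick illustration of the instance format above, the integer `ner_tags` index into the IOB tag list the card gives; a minimal decoding sketch (the helper name here is ours, not part of the dataset):

```python
# IOB tag list as given in the card; indices follow the dataset's ClassLabel order.
NER_TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
            "B-LOC", "I-LOC", "B-DATE", "I-DATE"]

def decode_tags(tag_ids):
    """Map integer ner_tags back to their IOB string labels."""
    return [NER_TAGS[i] for i in tag_ids]

# The card's sample instance, with tags stored as class indices:
example = {"id": "0",
           "tokens": ["Trump", "ya", "ce", "Rasha", "ma"],
           "ner_tags": [1, 0, 0, 5, 0]}
print(decode_tags(example["ner_tags"]))  # ['B-PER', 'O', 'O', 'B-LOC', 'O']
```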
hausa_voa_ner
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ha", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ha"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Hausa VOA NER Corpus", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-DATE", "8": "I-DATE"}}}}], "config_name": "hausa_voa_ner", "splits": [{"name": "train", "num_bytes": 483634, "num_examples": 1015}, {"name": "validation", "num_bytes": 69673, "num_examples": 146}, {"name": "test", "num_bytes": 139227, "num_examples": 292}], "download_size": 324962, "dataset_size": 692534}}
2024-01-18T11:05:04+00:00
[]
[ "ha" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hausa #license-cc-by-4.0 #region-us
# Dataset Card for Hausa VOA NER Corpus ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: Hausa VOA NER - Paper: URL - Leaderboard: - Point of Contact: David Adelani ### Dataset Summary The Hausa VOA NER is a named entity recognition (NER) dataset for Hausa language based on the VOA Hausa news corpus. ### Supported Tasks and Leaderboards ### Languages The language supported is Hausa. ## Dataset Structure ### Data Instances A data point consists of sentences separated by empty line and tab-separated tokens and tags. {'id': '0', 'ner_tags': [B-PER, 0, 0, B-LOC, 0], 'tokens': ['Trump', 'ya', 'ce', 'Rasha', 'ma'] } ### Data Fields - 'id': id of the sample - 'tokens': the tokens of the example text - 'ner_tags': the NER tags of each token The NER tags correspond to this list: The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). (O) is used for tokens not considered part of any named entity. ### Data Splits Training (1,014 sentences), validation (145 sentences) and test split (291 sentences) ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language - Hausa. ### Source Data #### Initial Data Collection and Normalization The dataset is based on the news domain and was crawled from VOA Hausa news. #### Who are the source language producers? 
The dataset was collected from VOA Hausa news. Most of the texts used in creating the Hausa VOA NER are news stories from Nigeria, Niger Republic, United States, and other parts of the world. ### Annotations Named entity recognition annotation #### Annotation process #### Who are the annotators? The data was annotated by Jesujoba Alabi and David Adelani for the paper: Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The annotated data sets were developed by students of Saarland University, Saarbrücken, Germany. ### Licensing Information The data is under the Creative Commons Attribution 4.0 ### Contributions Thanks to @dadelani for adding this dataset.
[ "# Dataset Card for Hausa VOA NER Corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: Hausa VOA NER\n- Paper: URL\n- Leaderboard:\n- Point of Contact: David Adelani", "### Dataset Summary\nThe Hausa VOA NER is a named entity recognition (NER) dataset for Hausa language based on the VOA Hausa news corpus.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Hausa.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences separated by empty line and tab-separated tokens and tags. \n{'id': '0',\n 'ner_tags': [B-PER, 0, 0, B-LOC, 0],\n 'tokens': ['Trump', 'ya', 'ce', 'Rasha', 'ma']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). 
(O) is used for tokens not considered part of any named entity.", "### Data Splits\n\nTraining (1,014 sentences), validation (145 sentences) and test split (291 sentences)", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to a new language - Hausa.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset is based on the news domain and was crawled from VOA Hausa news.", "#### Who are the source language producers?\n\nThe dataset was collected from VOA Hausa news. Most of the texts used in creating the Hausa VOA NER are news stories from Nigeria, Niger Republic, United States, and other parts of the world.", "### Annotations\nNamed entity recognition annotation", "#### Annotation process", "#### Who are the annotators?\n\nThe data was annotated by Jesujoba Alabi and David Adelani for the paper: \nTransfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by students of Saarland University, Saarbrücken, Germany.", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 4.0", "### Contributions\n\nThanks to @dadelani for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hausa #license-cc-by-4.0 #region-us \n", "# Dataset Card for Hausa VOA NER Corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: Hausa VOA NER\n- Paper: URL\n- Leaderboard:\n- Point of Contact: David Adelani", "### Dataset Summary\nThe Hausa VOA NER is a named entity recognition (NER) dataset for Hausa language based on the VOA Hausa news corpus.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Hausa.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences separated by empty line and tab-separated tokens and tags. \n{'id': '0',\n 'ner_tags': [B-PER, 0, 0, B-LOC, 0],\n 'tokens': ['Trump', 'ya', 'ce', 'Rasha', 'ma']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). 
(O) is used for tokens not considered part of any named entity.", "### Data Splits\n\nTraining (1,014 sentences), validation (145 sentences) and test split (291 sentences)", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to a new language - Hausa.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset is based on the news domain and was crawled from VOA Hausa news.", "#### Who are the source language producers?\n\nThe dataset was collected from VOA Hausa news. Most of the texts used in creating the Hausa VOA NER are news stories from Nigeria, Niger Republic, United States, and other parts of the world.", "### Annotations\nNamed entity recognition annotation", "#### Annotation process", "#### Who are the annotators?\n\nThe data was annotated by Jesujoba Alabi and David Adelani for the paper: \nTransfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by students of Saarland University, Saarbrücken, Germany.", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 4.0", "### Contributions\n\nThanks to @dadelani for adding this dataset." ]
7474a2e18924beddcdfdb1e9173deaeefc11eba4
# Dataset Card for Hausa VOA News Topic Classification dataset (hausa_voa_topics) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - - **Repository:** https://github.com/uds-lsv/transfer-distant-transformer-african - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.204/ - **Leaderboard:** - - **Point of Contact:** Michael A. Hedderich and David Adelani {mhedderich, didelani} (at) lsv.uni-saarland.de ### Dataset Summary A news headline topic classification dataset, similar to AG-news, for Hausa. The news headlines were collected from [VOA Hausa](https://www.voahausa.com/). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Hausa (ISO 639-1: ha) ## Dataset Structure ### Data Instances An instance consists of a news title sentence and the corresponding topic label. ### Data Fields - `news_title`: A news title - `label`: The label describing the topic of the news title. 
Can be one of the following classes: Nigeria, Africa, World, Health or Politics. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset.
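For illustration, the integer `label` can be mapped back to a topic name using the class-label order recorded in this entry's metadata (0: Africa, 1: Health, 2: Nigeria, 3: Politics, 4: World); a minimal sketch, with a helper name of our own choosing:

```python
# Topic names in the ClassLabel order from this entry's metadata.
TOPICS = ["Africa", "Health", "Nigeria", "Politics", "World"]

def decode_topic(label_id):
    """Map an integer label to its topic name."""
    return TOPICS[label_id]

print(decode_topic(2))  # Nigeria
```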
hausa_voa_topics
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ha", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ha"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "Hausa Voa News Topic Classification Dataset (HausaVoaTopics)", "dataset_info": {"features": [{"name": "news_title", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Africa", "1": "Health", "2": "Nigeria", "3": "Politics", "4": "World"}}}}], "splits": [{"name": "train", "num_bytes": 144932, "num_examples": 2045}, {"name": "validation", "num_bytes": 20565, "num_examples": 290}, {"name": "test", "num_bytes": 41195, "num_examples": 582}], "download_size": 195824, "dataset_size": 206692}}
2024-01-18T11:05:06+00:00
[]
[ "ha" ]
TAGS #task_categories-text-classification #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hausa #license-unknown #region-us
# Dataset Card for Hausa VOA News Topic Classification dataset (hausa_voa_topics) ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - - Repository: URL - Paper: URL - Leaderboard: - - Point of Contact: Michael A. Hedderich and David Adelani {mhedderich, didelani} (at) URL ### Dataset Summary A news headline topic classification dataset, similar to AG-news, for Hausa. The news headlines were collected from VOA Hausa. ### Supported Tasks and Leaderboards ### Languages Hausa (ISO 639-1: ha) ## Dataset Structure ### Data Instances An instance consists of a news title sentence and the corresponding topic label. ### Data Fields - 'news_title': A news title - 'label': The label describing the topic of the news title. Can be one of the following classes: Nigeria, Africa, World, Health or Politics. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @michael-aloys for adding this dataset.
[ "# Dataset Card for Hausa VOA News Topic Classification dataset (hausa_voa_topics)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: -\n- Repository: URL\n- Paper: URL\n- Leaderboard: -\n- Point of Contact: Michael A. Hedderich and David Adelani \n{mhedderich, didelani} (at) URL", "### Dataset Summary\n\nA news headline topic classification dataset, similar to AG-news, for Hausa. The news headlines were collected from VOA Hausa.", "### Supported Tasks and Leaderboards", "### Languages\n\nHausa (ISO 639-1: ha)", "## Dataset Structure", "### Data Instances\n\nAn instance consists of a news title sentence and the corresponding topic label.", "### Data Fields\n\n- 'news_title': A news title \n- 'label': The label describing the topic of the news title. Can be one of the following classes: Nigeria, Africa, World, Health or Politics.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @michael-aloys for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hausa #license-unknown #region-us \n", "# Dataset Card for Hausa VOA News Topic Classification dataset (hausa_voa_topics)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: -\n- Repository: URL\n- Paper: URL\n- Leaderboard: -\n- Point of Contact: Michael A. Hedderich and David Adelani \n{mhedderich, didelani} (at) URL", "### Dataset Summary\n\nA news headline topic classification dataset, similar to AG-news, for Hausa. The news headlines were collected from VOA Hausa.", "### Supported Tasks and Leaderboards", "### Languages\n\nHausa (ISO 639-1: ha)", "## Dataset Structure", "### Data Instances\n\nAn instance consists of a news title sentence and the corresponding topic label.", "### Data Fields\n\n- 'news_title': A news title \n- 'label': The label describing the topic of the news title. 
Can be one of the following classes: Nigeria, Africa, World, Health or Politics.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @michael-aloys for adding this dataset." ]
bf02be70697d04b233d822c8b6d12ea72526e666
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **HomePage:** [GitHub](https://github.com/midas-research/hindi-nli-data) - **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.aacl-main.71) - **Point of Contact:** [GitHub](https://github.com/midas-research/hindi-nli-data) ### Dataset Summary - Dataset for Natural Language Inference in the Hindi language. The Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs. - Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. - Premise and Hypothesis are written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - Entailed means that the hypothesis can be inferred from the premise; not-entailed means it cannot. - The dataset can be used to train models for Natural Language Inference tasks in the Hindi language. 
### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages - Dataset is in Hindi ## Dataset Structure - Data is structured in TSV format. - train, test and dev files are in separate files ### Data Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1} ``` ### Data Fields Each row contains 4 columns: - premise: string - hypothesis: string - label: class label with values that correspond to "not-entailment" (0) or "entailment" (1) - topic: class label with values that correspond to "Argumentative" (0), "Descriptive" (1), "Dialogic" (2), "Informative" (3) or "Narrative" (4). ### Data Splits - Train : 31892 - Valid : 9460 - Test : 9970 ## Dataset Creation - We employ a recasting technique from Poliak et al. 
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ - The Discourse is further classified into "Argumentative", "Descriptive", "Dialogic", "Informative" and "Narrative" - 5 classes. #### Who are the source language producers? Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ ### Annotations #### Annotation process The annotation process is described in the Dataset Creation section. #### Who are the annotators? Annotation is done automatically by machine via the recasting process. ### Personal and Sensitive Information No personal or sensitive information is mentioned in the dataset. ## Considerations for Using the Data Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases No known biases exist in the dataset. Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations. The size of the data may not be enough to train large models. ## Additional Information Please refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is stated in the repo https://github.com/midas-research/hindi-nli-data that: - This corpus can be used freely for research purposes. - The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to [email protected]. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. 
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Please contact the authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. 
Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ``` ### Contributions Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset.
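The label and topic class mappings and the split sizes described in this card can be sketched directly from the `dataset_info` metadata for `hda_nli_hindi`. A minimal example; the `decode` helper is illustrative, not part of any published API:

```python
# Class-label mappings for hda_nli_hindi, copied from the dataset_info
# metadata (label: 0 -> not-entailment, 1 -> entailment; topic ids as
# listed in the Data Fields section).
ENTAILMENT_LABELS = ["not-entailment", "entailment"]
TOPIC_LABELS = ["Argumentative", "Descriptive", "Dialogic", "Informative", "Narrative"]

# Split sizes as reported in the Data Splits section.
SPLIT_SIZES = {"train": 31892, "validation": 9460, "test": 9970}

def decode(example: dict) -> dict:
    """Replace the integer label/topic ids of a row with their string names."""
    return {
        **example,
        "label": ENTAILMENT_LABELS[example["label"]],
        "topic": TOPIC_LABELS[example["topic"]],
    }

row = {"premise": "...", "hypothesis": "...", "label": 1, "topic": 1}
print(decode(row)["label"], decode(row)["topic"])  # -> entailment Descriptive
print(sum(SPLIT_SIZES.values()))  # -> 51322 rows in total
```

Decoding this way mirrors what `datasets.ClassLabel.int2str` would produce when the dataset is loaded through the `datasets` library.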
hda_nli_hindi
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|hindi_discourse", "language:hi", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["hi"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|hindi_discourse"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "Hindi Discourse Analysis Dataset", "dataset_info": [{"config_name": "HDA hindi nli", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not-entailment", "1": "entailment"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Argumentative", "1": "Descriptive", "2": "Dialogic", "3": "Informative", "4": "Narrative"}}}}], "splits": [{"name": "train", "num_bytes": 8721972, "num_examples": 31892}, {"name": "validation", "num_bytes": 2556118, "num_examples": 9460}, {"name": "test", "num_bytes": 2646453, "num_examples": 9970}], "download_size": 13519261, "dataset_size": 13924543}, {"config_name": "hda nli hindi", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not-entailment", "1": "entailment"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Argumentative", "1": "Descriptive", "2": "Dialogic", "3": "Informative", "4": "Narrative"}}}}], "splits": [{"name": "train", "num_bytes": 8721972, "num_examples": 31892}, {"name": "validation", "num_bytes": 2556118, "num_examples": 9460}, {"name": "test", "num_bytes": 2646453, "num_examples": 9970}], "download_size": 13519261, "dataset_size": 13924543}]}
2024-01-18T11:05:10+00:00
[]
[ "hi" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|hindi_discourse #language-Hindi #license-mit #region-us
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - HomePage: GitHub - Paper: Aclweb - Point of Contact: GitHub ### Dataset Summary - Dataset for Natural Language Inference in Hindi Language. Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs. - Each row of the Datasets if made up of 4 columns - Premise, Hypothesis, Label and Topic. - Premise and Hypothesis is written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - Entailed means that hypotheis can be inferred from premise and not-entailed means vice versa - Dataset can be used to train models for Natural Language Inference tasks in Hindi Language. ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages - Dataset is in Hindi ## Dataset Structure - Data is structured in TSV format. - train, test and dev files are in seperate files ### Dataset Instances An example of 'train' looks as follows. ### Data Fields Each row contatins 4 columns: - premise: string - hypothesis: string - label: class label with values that correspond to "not-entailment" (0) or "entailment" (1) - topic: class label with values that correspond to "Argumentative" (0), "Descriptive" (1), "Dialogic" (2), "Informative" (3) or "Narrative" (4). ### Data Splits - Train : 31892 - Valid : 9460 - Test : 9970 ## Dataset Creation - We employ a recasting technique from Poliak et al. 
(2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems - In this recasting process, we build template hypotheses for each class in the label taxonomy - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to paper URL ### Source Data Source Dataset for the recasting process is the BBC Hindi Headlines Dataset(URL #### Initial Data Collection and Normalization - Initial Data was collected by members of MIDAS Lab from Hindi Websites. They crowd sourced the data annotation process and selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode. - Please refer to this paper for detailed information: URL - The Discourse is further classified into "Argumentative" , "Descriptive" , "Dialogic" , "Informative" and "Narrative" - 5 Clases. #### Who are the source language producers? Please refer to this paper for detailed information: URL ### Annotations #### Annotation process Annotation process has been described in Dataset Creation Section. #### Who are the annotators? Annotation is done automatically by machine and corresponding recasting process. ### Personal and Sensitive Information No Personal and Sensitive Information is mentioned in the Datasets. ## Considerations for Using the Data Pls refer to this paper: URL ### Discussion of Biases No known bias exist in the dataset. Pls refer to this paper: URL ### Other Known Limitations No other known limitations . Size of data may not be enough to train large models ## Additional Information Pls refer to this link: URL ### Dataset Curators It is written in the repo : URL that - This corpus can be used freely for research purposes. - The paper listed below provide details of the creation and use of the corpus. 
If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@URL. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Pls contact authors for any information on the dataset. ### Contributions Thanks to @avinsit123 for adding this dataset.
[ "# Dataset Card for Hindi Discourse Analysis Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- HomePage: GitHub\n- Paper: Aclweb\n- Point of Contact: GitHub", "### Dataset Summary\n\n- Dataset for Natural Language Inference in Hindi Language. Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs.\n- Each row of the Datasets if made up of 4 columns - Premise, Hypothesis, Label and Topic.\n- Premise and Hypothesis is written in Hindi while Entailment_Label is in English.\n- Entailment_label is of 2 types - entailed and not-entailed.\n- Entailed means that hypotheis can be inferred from premise and not-entailed means vice versa\n- Dataset can be used to train models for Natural Language Inference tasks in Hindi Language.", "### Supported Tasks and Leaderboards\n\n- Natural Language Inference for Hindi", "### Languages\n\n- Dataset is in Hindi", "## Dataset Structure\n\n- Data is structured in TSV format. 
\n- train, test and dev files are in seperate files", "### Dataset Instances\n\nAn example of 'train' looks as follows.", "### Data Fields\n\nEach row contatins 4 columns:\n- premise: string\n- hypothesis: string\n- label: class label with values that correspond to \"not-entailment\" (0) or \"entailment\" (1)\n- topic: class label with values that correspond to \"Argumentative\" (0), \"Descriptive\" (1), \"Dialogic\" (2), \"Informative\" (3) or \"Narrative\" (4).", "### Data Splits\n\n- Train : 31892\n- Valid : 9460\n- Test : 9970", "## Dataset Creation\n\n- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems\n- In this recasting process, we build template hypotheses for each class in the label taxonomy\n- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.\n- For more information on the recasting process, refer to paper URL", "### Source Data\n\nSource Dataset for the recasting process is the BBC Hindi Headlines Dataset(URL", "#### Initial Data Collection and Normalization\n\n- Initial Data was collected by members of MIDAS Lab from Hindi Websites. 
They crowd sourced the data annotation process and selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode.\n- Please refer to this paper for detailed information: URL\n- The Discourse is further classified into \"Argumentative\" , \"Descriptive\" , \"Dialogic\" , \"Informative\" and \"Narrative\" - 5 Clases.", "#### Who are the source language producers?\n\nPlease refer to this paper for detailed information: URL", "### Annotations", "#### Annotation process\n\nAnnotation process has been described in Dataset Creation Section.", "#### Who are the annotators?\n\nAnnotation is done automatically by machine and corresponding recasting process.", "### Personal and Sensitive Information\n\nNo Personal and Sensitive Information is mentioned in the Datasets.", "## Considerations for Using the Data\n\nPls refer to this paper: URL", "### Discussion of Biases\n\nNo known bias exist in the dataset.\nPls refer to this paper: URL", "### Other Known Limitations\n\nNo other known limitations . Size of data may not be enough to train large models", "## Additional Information\n\nPls refer to this link: URL", "### Dataset Curators\n\nIt is written in the repo : URL that \n- This corpus can be used freely for research purposes.\n- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.\n- If interested in commercial use of the corpus, send email to midas@URL.\n- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. 
Also, if you send us an email, we will be thrilled to know about how you have used the corpus.\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.\n- Rather than redistributing the corpus, please direct interested parties to this page\n- Please feel free to send us an email:\n - with feedback regarding the corpus.\n - with information on how you have used the corpus.\n - if interested in having us analyze your data for natural language inference.\n - if interested in a collaborative research project.", "### Licensing Information\n\nCopyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).\nPls contact authors for any information on the dataset.", "### Contributions\n\nThanks to @avinsit123 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|hindi_discourse #language-Hindi #license-mit #region-us \n", "# Dataset Card for Hindi Discourse Analysis Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- HomePage: GitHub\n- Paper: Aclweb\n- Point of Contact: GitHub", "### Dataset Summary\n\n- Dataset for Natural Language Inference in Hindi Language. Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs.\n- Each row of the Datasets if made up of 4 columns - Premise, Hypothesis, Label and Topic.\n- Premise and Hypothesis is written in Hindi while Entailment_Label is in English.\n- Entailment_label is of 2 types - entailed and not-entailed.\n- Entailed means that hypotheis can be inferred from premise and not-entailed means vice versa\n- Dataset can be used to train models for Natural Language Inference tasks in Hindi Language.", "### Supported Tasks and Leaderboards\n\n- Natural Language Inference for Hindi", "### Languages\n\n- Dataset is in Hindi", "## Dataset Structure\n\n- Data is structured in TSV format. 
\n- train, test and dev files are in seperate files", "### Dataset Instances\n\nAn example of 'train' looks as follows.", "### Data Fields\n\nEach row contatins 4 columns:\n- premise: string\n- hypothesis: string\n- label: class label with values that correspond to \"not-entailment\" (0) or \"entailment\" (1)\n- topic: class label with values that correspond to \"Argumentative\" (0), \"Descriptive\" (1), \"Dialogic\" (2), \"Informative\" (3) or \"Narrative\" (4).", "### Data Splits\n\n- Train : 31892\n- Valid : 9460\n- Test : 9970", "## Dataset Creation\n\n- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems\n- In this recasting process, we build template hypotheses for each class in the label taxonomy\n- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.\n- For more information on the recasting process, refer to paper URL", "### Source Data\n\nSource Dataset for the recasting process is the BBC Hindi Headlines Dataset(URL", "#### Initial Data Collection and Normalization\n\n- Initial Data was collected by members of MIDAS Lab from Hindi Websites. 
They crowd sourced the data annotation process and selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode.\n- Please refer to this paper for detailed information: URL\n- The Discourse is further classified into \"Argumentative\" , \"Descriptive\" , \"Dialogic\" , \"Informative\" and \"Narrative\" - 5 Clases.", "#### Who are the source language producers?\n\nPlease refer to this paper for detailed information: URL", "### Annotations", "#### Annotation process\n\nAnnotation process has been described in Dataset Creation Section.", "#### Who are the annotators?\n\nAnnotation is done automatically by machine and corresponding recasting process.", "### Personal and Sensitive Information\n\nNo Personal and Sensitive Information is mentioned in the Datasets.", "## Considerations for Using the Data\n\nPls refer to this paper: URL", "### Discussion of Biases\n\nNo known bias exist in the dataset.\nPls refer to this paper: URL", "### Other Known Limitations\n\nNo other known limitations . Size of data may not be enough to train large models", "## Additional Information\n\nPls refer to this link: URL", "### Dataset Curators\n\nIt is written in the repo : URL that \n- This corpus can be used freely for research purposes.\n- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.\n- If interested in commercial use of the corpus, send email to midas@URL.\n- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. 
Also, if you send us an email, we will be thrilled to know about how you have used the corpus.\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.\n- Rather than redistributing the corpus, please direct interested parties to this page.\n- Please feel free to send us an email:\n - with feedback regarding the corpus.\n - with information on how you have used the corpus.\n - if interested in having us analyze your data for natural language inference.\n - if interested in a collaborative research project.", "### Licensing Information\n\nCopyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).\nPlease contact the authors for any information on the dataset.", "### Contributions\n\nThanks to @avinsit123 for adding this dataset." ]
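The recasting procedure summarized in the card above (one template hypothesis per discourse class, paired with each annotated sentence) can be sketched in plain Python. The English templates below are hypothetical placeholders standing in for the authors' Hindi templates:

```python
# Sketch of recasting a discourse-classification example into textual-entailment
# (TE) pairs. The templates here are hypothetical placeholders, not the ones
# used by the dataset authors.
TEMPLATES = {
    "Argumentative": "The discourse mode of this sentence is argumentative.",
    "Descriptive": "The discourse mode of this sentence is descriptive.",
    "Dialogic": "The discourse mode of this sentence is dialogic.",
    "Informative": "The discourse mode of this sentence is informative.",
    "Narrative": "The discourse mode of this sentence is narrative.",
}

def recast(sentence, gold_class):
    """Pair `sentence` with every template hypothesis; only the gold class
    yields label 1 ("entailment"), every other class yields 0."""
    return [
        {"premise": sentence, "hypothesis": hyp,
         "label": int(cls == gold_class), "topic": cls}
        for cls, hyp in TEMPLATES.items()
    ]

pairs = recast("A sample annotated sentence.", "Narrative")
assert len(pairs) == 5                      # one TE sample per class template
assert sum(p["label"] for p in pairs) == 1  # exactly one entailed pair
```

This mirrors how one classification example expands into five TE samples, which is consistent with the train/valid/test sizes being multiples of the class count.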
ae92197e17fcfe34debe704d38dfa64925e5a540
# Dataset Card for HEAD-QA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [HEAD-QA homepage](https://aghie.github.io/head-qa/) - **Repository:** [HEAD-QA repository](https://github.com/aghie/head-qa) - **Paper:** [HEAD-QA: A Healthcare Dataset for Complex Reasoning](https://www.aclweb.org/anthology/P19-1092/) - **Leaderboard:** [HEAD-QA leaderboard](https://aghie.github.io/head-qa/#leaderboard-general) - **Point of Contact:** [María Grandury](mailto:[email protected]) (Dataset Submitter) ### Dataset Summary HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. 
They are designed by the [Ministerio de Sanidad, Consumo y Bienestar Social](https://www.mscbs.gob.es/), who also provides direct [access](https://fse.mscbs.gob.es/fseweb/view/public/datosanteriores/cuadernosExamen/busquedaConvocatoria.xhtml) to the exams of the last 5 years (in Spanish). ``` Date of the last update of the documents object of the reuse: January, 14th, 2019. ``` HEAD-QA tries to make these questions accessible for the Natural Language Processing community. We hope it is a useful resource towards achieving better QA systems. The dataset contains questions about the following topics: - Medicine - Nursing - Psychology - Chemistry - Pharmacology - Biology ### Supported Tasks and Leaderboards - `multiple-choice-qa`: HEAD-QA is a multi-choice question answering testbed to encourage research on complex reasoning. ### Languages The questions and answers are available in both Spanish (BCP-47 code: 'es-ES') and English (BCP-47 code: 'en'). The language by default is Spanish: ``` from datasets import load_dataset data_es = load_dataset('head_qa') data_en = load_dataset('head_qa', 'en') ``` ## Dataset Structure ### Data Instances A typical data point comprises a question `qtext`, multiple possible answers `atext` and the right answer `ra`. An example from the HEAD-QA dataset looks as follows: ``` { 'qid': '1', 'category': 'biology', 'qtext': 'Los potenciales postsinápticos excitadores:', 'answers': [ { 'aid': 1, 'atext': 'Son de tipo todo o nada.' }, { 'aid': 2, 'atext': 'Son hiperpolarizantes.' }, { 'aid': 3, 'atext': 'Se pueden sumar.' }, { 'aid': 4, 'atext': 'Se propagan a largas distancias.' }, { 'aid': 5, 'atext': 'Presentan un periodo refractario.' 
}], 'ra': '3', 'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=675x538 at 0x1B42B6A1668>, 'name': 'Cuaderno_2013_1_B', 'year': '2013' } ``` ### Data Fields - `qid`: question identifier (int) - `category`: category of the question: "medicine", "nursing", "psychology", "chemistry", "pharmacology", "biology" - `qtext`: question text - `answers`: list of possible answers. Each element of the list is a dictionary with 2 keys: - `aid`: answer identifier (int) - `atext`: answer text - `ra`: `aid` of the right answer (int) - `image`: (optional) a `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `name`: name of the exam from which the question was extracted - `year`: year in which the exam took place ### Data Splits The data is split into train, validation and test set for each of the two languages. The split sizes are as follows: | | Train | Val | Test | | ----- | ------ | ----- | ---- | | Spanish | 2657 | 1366 | 2742 | | English | 2657 | 1366 | 2742 | ## Dataset Creation ### Curation Rationale As motivation for the creation of this dataset, here is the abstract of the paper: "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. 
We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work." ### Source Data #### Initial Data Collection and Normalization The questions come from exams to access a specialized position in the Spanish healthcare system, and are designed by the [Ministerio de Sanidad, Consumo y Bienestar Social](https://www.mscbs.gob.es/), who also provides direct [access](https://fse.mscbs.gob.es/fseweb/view/public/datosanteriores/cuadernosExamen/busquedaConvocatoria.xhtml) to the exams of the last 5 years (in Spanish). #### Who are the source language producers? The dataset was created by David Vilares and Carlos Gómez-Rodríguez. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by David Vilares and Carlos Gómez-Rodríguez. ### Licensing Information According to the [HEAD-QA homepage](https://aghie.github.io/head-qa/#legal-requirements): The Ministerio de Sanidad, Consumo y Bienestar Social allows the redistribution of the exams and their content under [certain conditions](https://www.mscbs.gob.es/avisoLegal/home.htm): - The denaturalization of the content of the information is prohibited in any circumstance. - The user is obliged to cite the source of the documents subject to reuse. - The user is obliged to indicate the date of the last update of the documents object of the reuse. According to the [HEAD-QA repository](https://github.com/aghie/head-qa/blob/master/LICENSE): The dataset is licensed under the [MIT License](https://mit-license.org/). 
### Citation Information ``` @inproceedings{vilares-gomez-rodriguez-2019-head, title = "{HEAD}-{QA}: A Healthcare Dataset for Complex Reasoning", author = "Vilares, David and G{\'o}mez-Rodr{\'i}guez, Carlos", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1092", doi = "10.18653/v1/P19-1092", pages = "960--966", abstract = "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.", } ``` ### Contributions Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
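As a small illustration of the `answers` and `ra` fields documented above, the correct answer text for the card's example instance can be recovered by matching each answer's `aid` against `ra` (a pure-Python sketch over the example shown above; no dataset download needed):

```python
# The example HEAD-QA instance from this card (image field omitted).
instance = {
    "qid": "1",
    "category": "biology",
    "qtext": "Los potenciales postsinápticos excitadores:",
    "answers": [
        {"aid": 1, "atext": "Son de tipo todo o nada."},
        {"aid": 2, "atext": "Son hiperpolarizantes."},
        {"aid": 3, "atext": "Se pueden sumar."},
        {"aid": 4, "atext": "Se propagan a largas distancias."},
        {"aid": 5, "atext": "Presentan un periodo refractario."},
    ],
    "ra": "3",
}

def right_answer(example):
    """Return the text of the answer whose `aid` equals `ra`."""
    ra = int(example["ra"])  # `ra` is declared as an int in the dataset features
    return next(a["atext"] for a in example["answers"] if a["aid"] == ra)

print(right_answer(instance))  # Se pueden sumar.
```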
head_qa
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:es", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en", "es"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "headqa", "pretty_name": "HEAD-QA", "config_names": ["en", "es"], "dataset_info": [{"config_name": "es", "features": [{"name": "name", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "qid", "dtype": "int32"}, {"name": "qtext", "dtype": "string"}, {"name": "ra", "dtype": "int32"}, {"name": "image", "dtype": "image"}, {"name": "answers", "list": [{"name": "aid", "dtype": "int32"}, {"name": "atext", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1229678, "num_examples": 2657}, {"name": "test", "num_bytes": 1204006, "num_examples": 2742}, {"name": "validation", "num_bytes": 573354, "num_examples": 1366}], "download_size": 79365502, "dataset_size": 3007038}, {"config_name": "en", "features": [{"name": "name", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "qid", "dtype": "int32"}, {"name": "qtext", "dtype": "string"}, {"name": "ra", "dtype": "int32"}, {"name": "image", "dtype": "image"}, {"name": "answers", "list": [{"name": "aid", "dtype": "int32"}, {"name": "atext", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1156808, "num_examples": 2657}, {"name": "test", "num_bytes": 1131536, "num_examples": 2742}, {"name": "validation", "num_bytes": 539892, "num_examples": 1366}], "download_size": 79365502, "dataset_size": 2828236}]}
2024-01-18T11:05:14+00:00
[]
[ "en", "es" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-Spanish #license-mit #region-us
Dataset Card for HEAD-QA ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: HEAD-QA homepage * Repository: HEAD-QA repository * Paper: HEAD-QA: A Healthcare Dataset for Complex Reasoning * Leaderboard: HEAD-QA leaderboard * Point of Contact: María Grandury (Dataset Submitter) ### Dataset Summary HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. They are designed by the Ministerio de Sanidad, Consumo y Bienestar Social, who also provides direct access to the exams of the last 5 years (in Spanish). HEAD-QA tries to make these questions accessible for the Natural Language Processing community. We hope it is a useful resource towards achieving better QA systems. The dataset contains questions about the following topics: * Medicine * Nursing * Psychology * Chemistry * Pharmacology * Biology ### Supported Tasks and Leaderboards * 'multiple-choice-qa': HEAD-QA is a multi-choice question answering testbed to encourage research on complex reasoning. ### Languages The questions and answers are available in both Spanish (BCP-47 code: 'es-ES') and English (BCP-47 code: 'en'). The language by default is Spanish: Dataset Structure ----------------- ### Data Instances A typical data point comprises a question 'qtext', multiple possible answers 'atext' and the right answer 'ra'. 
An example from the HEAD-QA dataset looks as follows: ### Data Fields * 'qid': question identifier (int) * 'category': category of the question: "medicine", "nursing", "psychology", "chemistry", "pharmacology", "biology" * 'qtext': question text * 'answers': list of possible answers. Each element of the list is a dictionary with 2 keys: + 'aid': answer identifier (int) + 'atext': answer text * 'ra': 'aid' of the right answer (int) * 'image': (optional) a 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' * 'name': name of the exam from which the question was extracted * 'year': year in which the exam took place ### Data Splits The data is split into train, validation and test set for each of the two languages. The split sizes are as follows: Dataset Creation ---------------- ### Curation Rationale As motivation for the creation of this dataset, here is the abstract of the paper: "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work." 
### Source Data #### Initial Data Collection and Normalization The questions come from exams to access a specialized position in the Spanish healthcare system, and are designed by the Ministerio de Sanidad, Consumo y Bienestar Social, who also provides direct access to the exams of the last 5 years (in Spanish). #### Who are the source language producers? The dataset was created by David Vilares and Carlos Gómez-Rodríguez. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators The dataset was created by David Vilares and Carlos Gómez-Rodríguez. ### Licensing Information According to the HEAD-QA homepage: The Ministerio de Sanidad, Consumo y Bienestar Social allows the redistribution of the exams and their content under certain conditions: * The denaturalization of the content of the information is prohibited in any circumstance. * The user is obliged to cite the source of the documents subject to reuse. * The user is obliged to indicate the date of the last update of the documents object of the reuse. According to the HEAD-QA repository: The dataset is licensed under the MIT License. ### Contributions Thanks to @mariagrandury for adding this dataset.
[ "### Dataset Summary\n\n\nHEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the\nSpanish healthcare system, and are challenging even for highly specialized humans. They are designed by the\nMinisterio de Sanidad, Consumo y Bienestar Social, who also provides direct\naccess\nto the exams of the last 5 years (in Spanish).\n\n\nHEAD-QA tries to make these questions accesible for the Natural Language Processing community. We hope it is an useful resource towards achieving better QA systems. The dataset contains questions about the following topics:\n\n\n* Medicine\n* Nursing\n* Psychology\n* Chemistry\n* Pharmacology\n* Biology", "### Supported Tasks and Leaderboards\n\n\n* 'multiple-choice-qa': HEAD-QA is a multi-choice question answering testbed to encourage research on complex reasoning.", "### Languages\n\n\nThe questions and answers are available in both Spanish (BCP-47 code: 'es-ES') and English (BCP-47 code: 'en').\n\n\nThe language by default is Spanish:\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises a question 'qtext', multiple possible answers 'atext' and the right answer 'ra'.\n\n\nAn example from the HEAD-QA dataset looks as follows:", "### Data Fields\n\n\n* 'qid': question identifier (int)\n* 'category': category of the question: \"medicine\", \"nursing\", \"psychology\", \"chemistry\", \"pharmacology\", \"biology\"\n* 'qtext': question text\n* 'answers': list of possible answers. Each element of the list is a dictionary with 2 keys:\n\t+ 'aid': answer identifier (int)\n\t+ 'atext': answer text\n* 'ra': 'aid' of the right answer (int)\n* 'image': (optional) a 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'name': name of the exam from which the question was extracted\n* 'year': year in which the exam took place", "### Data Splits\n\n\nThe data is split into train, validation and test set for each of the two languages. The split sizes are as follows:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nAs motivation for the creation of this dataset, here is the abstract of the paper:\n\n\n\"We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions\ncome from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly\nspecialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information\nretrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well\nbehind human performance, demonstrating its usefulness as a benchmark for future work.\"", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe questions come from exams to access a specialized position in the Spanish healthcare system, and are designed by the\nMinisterio de Sanidad, Consumo y Bienestar Social, who also provides direct\naccess\nto the exams of the last 5 years (in Spanish).", "#### Who are the source language producers?\n\n\nThe dataset was created by David Vilares and Carlos Gómez-Rodríguez.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional 
Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was created by David Vilares and Carlos Gómez-Rodríguez.", "### Licensing Information\n\n\nAccording to the HEAD-QA homepage:\n\n\nThe Ministerio de Sanidad, Consumo y Bienestar Social allows the redistribution of the exams and their content under certain conditions:\n\n\n* The denaturalization of the content of the information is prohibited in any circumstance.\n* The user is obliged to cite the source of the documents subject to reuse.\n* The user is obliged to indicate the date of the last update of the documents object of the reuse.\n\n\nAccording to the HEAD-QA repository:\n\n\nThe dataset is licensed under the MIT License.", "### Contributions\n\n\nThanks to @mariagrandury for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-Spanish #license-mit #region-us \n", "### Dataset Summary\n\n\nHEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the\nSpanish healthcare system, and are challenging even for highly specialized humans. They are designed by the\nMinisterio de Sanidad, Consumo y Bienestar Social, who also provides direct\naccess\nto the exams of the last 5 years (in Spanish).\n\n\nHEAD-QA tries to make these questions accesible for the Natural Language Processing community. We hope it is an useful resource towards achieving better QA systems. The dataset contains questions about the following topics:\n\n\n* Medicine\n* Nursing\n* Psychology\n* Chemistry\n* Pharmacology\n* Biology", "### Supported Tasks and Leaderboards\n\n\n* 'multiple-choice-qa': HEAD-QA is a multi-choice question answering testbed to encourage research on complex reasoning.", "### Languages\n\n\nThe questions and answers are available in both Spanish (BCP-47 code: 'es-ES') and English (BCP-47 code: 'en').\n\n\nThe language by default is Spanish:\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises a question 'qtext', multiple possible answers 'atext' and the right answer 'ra'.\n\n\nAn example from the HEAD-QA dataset looks as follows:", "### Data Fields\n\n\n* 'qid': question identifier (int)\n* 'category': category of the question: \"medicine\", \"nursing\", \"psychology\", \"chemistry\", \"pharmacology\", \"biology\"\n* 'qtext': question text\n* 'answers': list of possible answers. 
Each element of the list is a dictionary with 2 keys:\n\t+ 'aid': answer identifier (int)\n\t+ 'atext': answer text\n* 'ra': 'aid' of the right answer (int)\n* 'image': (optional) a 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'name': name of the exam from which the question was extracted\n* 'year': year in which the exam took place", "### Data Splits\n\n\nThe data is split into train, validation and test set for each of the two languages. The split sizes are as follows:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nAs motivation for the creation of this dataset, here is the abstract of the paper:\n\n\n\"We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions\ncome from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly\nspecialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information\nretrieval and neural techniques. 
We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well\nbehind human performance, demonstrating its usefulness as a benchmark for future work.\"", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe questions come from exams to access a specialized position in the Spanish healthcare system, and are designed by the\nMinisterio de Sanidad, Consumo y Bienestar Social, who also provides direct\naccess\nto the exams of the last 5 years (in Spanish).", "#### Who are the source language producers?\n\n\nThe dataset was created by David Vilares and Carlos Gómez-Rodríguez.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was created by David Vilares and Carlos Gómez-Rodríguez.", "### Licensing Information\n\n\nAccording to the HEAD-QA homepage:\n\n\nThe Ministerio de Sanidad, Consumo y Bienestar Social allows the redistribution of the exams and their content under certain conditions:\n\n\n* The denaturalization of the content of the information is prohibited in any circumstance.\n* The user is obliged to cite the source of the documents subject to reuse.\n* The user is obliged to indicate the date of the last update of the documents object of the reuse.\n\n\nAccording to the HEAD-QA repository:\n\n\nThe dataset is licensed under the MIT License.", "### Contributions\n\n\nThanks to @mariagrandury for adding this dataset." ]
57995242674fa19c6e547ed15fba1a68bf5025ee
# Dataset Card for PUBHEALTH ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [PUBHEALTH homepage](https://github.com/neemakot/Health-Fact-Checking) - **Repository:** [PUBHEALTH repository](https://github.com/neemakot/Health-Fact-Checking/blob/master/data/DATASHEET.md) - **Paper:** [Explainable Automated Fact-Checking for Public Health Claims](https://arxiv.org/abs/2010.09926) - **Point of Contact:** [Neema Kotonya](mailto:[email protected]) ### Dataset Summary PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore, each instance in the dataset has an explanation text field. The explanation is a justification for why the claim has been assigned a particular veracity label. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English. 
## Dataset Structure ### Data Instances The following is an example instance of the PUBHEALTH dataset: | Field | Example | | ----------------- | -------------------------------------------------------------| | __claim__ | Expired boxes of cake and pancake mix are dangerously toxic. | | __explanation__ | What's True: Pancake and cake mixes that contain mold can cause life-threatening allergic reactions. What's False: Pancake and cake mixes that have passed their expiration dates are not inherently dangerous to ordinarily healthy people, and the yeast in packaged baking products does not "over time develops spores." | | __label__ | mixture | | __author(s)__ | David Mikkelson | | __date published__ | April 19, 2006 | | __tags__ | food, allergies, baking, cake | | __main_text__ | In April 2006, the experience of a 14-year-old who had eaten pancakes made from a mix that had gone moldy was described in the popular newspaper column Dear Abby. The account has since been circulated widely on the Internet as scores of concerned homemakers ponder the safety of the pancake and other baking mixes lurking in their larders [...] | | __evidence sources__ | [1] Bennett, Allan and Kim Collins. "An Unusual Case of Anaphylaxis: Mold in Pancake Mix." American Journal of Forensic Medicine & Pathology. September 2001 (pp. 292-295). [2] Phillips, Jeanne. "Dear Abby." 14 April 2006 [syndicated column]. | ### Data Fields The fields are described in the example instance above. ### Data Splits | | # Instances | |-----------|-------------| | train.tsv | 9832 | | dev.tsv | 1221 | | test.tsv | 1235 | | total | 12288 | ## Dataset Creation ### Curation Rationale The dataset was created to explore fact-checking of difficult-to-verify claims, i.e., those which require expertise from outside of the journalistic domain, in this case biomedical and public health expertise. It was also created in response to the lack of fact-checking datasets which provide gold-standard natural language explanations for verdicts/labels. 
### Source Data #### Initial Data Collection and Normalization The dataset was retrieved from the following fact-checking, news review and news websites: | URL | Type | |-----------------------------------|--------------------| | http://snopes.com/ | fact-checking | | http://politifact.com/ | fact-checking | | http://truthorfiction.com/ | fact-checking | | https://www.factcheck.org/ | fact-checking | | https://fullfact.org/ | fact-checking | | https://apnews.com/ | news | | https://uk.reuters.com/ | news | | https://www.healthnewsreview.org/ | health news review | #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information Not to our knowledge, but if it is brought to our attention that we are mistaken, we will make the appropriate corrections to the dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by Neema Kotonya and Francesca Toni for their research paper "Explainable Automated Fact-Checking for Public Health Claims" presented at EMNLP 2020. 
### Licensing Information

MIT License

### Citation Information

```
@inproceedings{kotonya-toni-2020-explainable,
    title = "Explainable Automated Fact-Checking for Public Health Claims",
    author = "Kotonya, Neema and Toni, Francesca",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.623",
    pages = "7740--7754",
}
```

### Contributions

Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
health_fact
[ "task_categories:text-classification", "task_ids:fact-checking", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "arxiv:2010.09926", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking", "multi-class-classification"], "paperswithcode_id": "pubhealth", "pretty_name": "PUBHEALTH", "dataset_info": {"features": [{"name": "claim_id", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "date_published", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "fact_checkers", "dtype": "string"}, {"name": "main_text", "dtype": "string"}, {"name": "sources", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "false", "1": "mixture", "2": "true", "3": "unproven"}}}}, {"name": "subjects", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 53985377, "num_examples": 9832}, {"name": "test", "num_bytes": 6825221, "num_examples": 1235}, {"name": "validation", "num_bytes": 6653044, "num_examples": 1225}], "download_size": 24892660, "dataset_size": 67463642}, "train-eval-index": [{"config": "default", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"claim": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": 
{"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2024-01-18T11:05:17+00:00
[ "2010.09926" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-2010.09926 #region-us
Dataset Card for PUBHEALTH ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: PUBHEALTH homepage * Repository: PUBHEALTH repository * Paper: Explainable Automated Fact-Checking for Public Health Claims" * Point of Contact:Neema Kotonya ### Dataset Summary PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore each instance in the dataset has an explanation text field. The explanation is a justification for which the claim has been assigned a particular veracity label. ### Supported Tasks and Leaderboards ### Languages The text in the dataset is in English. Dataset Structure ----------------- ### Data Instances The following is an example instance of the PUBHEALTH dataset: ### Data Fields Mentioned above in data instances. ### Data Splits Dataset Creation ---------------- ### Curation Rationale The dataset was created to explore fact-checking of difficult-to-verify claims, i.e., those which require expertise from outside of the journalistic domain, in this case biomedical and public health expertise. It was also created in response to the lack of fact-checking datasets which provide gold standard natural language explanations for verdicts/labels.
### Source Data #### Initial Data Collection and Normalization The dataset was retrieved from the following fact-checking, news reviews and news websites: #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Not to our knowledge, but if it is brought to our attention that we are mistaken we will make the appropriate corrections to the dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators The dataset was created by Neema Kotonya, and Francesca Toni, for their research paper "Explainable Automated Fact-Checking for Public Health Claims" presented at EMNLP 2020. ### Licensing Information MIT License ### Contributions Thanks to @bhavitvyamalik for adding this dataset.
[ "### Dataset Summary\n\n\nPUBHEALTH is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore each instance in the dataset has an explanation text field. The explanation is a justification for which the claim has been assigned a particular veracity label.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe text in the dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe following is an example instance of the PUBHEALTH dataset:", "### Data Fields\n\n\nMentioned above in data instances.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was created to explore fact-checking of difficult to verify claims i.e., those which require expertise from outside of the journalistics domain, in this case biomedical and public health expertise.\n\n\nIt was also created in response to the lack of fact-checking datasets which provide gold standard natural language explanations for verdicts/labels.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset was retrieved from the following fact-checking, news reviews and news websites:", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nNot to our knowledge, but if it is brought to our attention that we are mistaken we will make the appropriate corrections to the dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was created by Neema Kotonya, and Francesca Toni, for their research paper \"Explainable Automated 
Fact-Checking for Public Health Claims\" presented at EMNLP 2020.", "### Licensing Information\n\n\nMIT License", "### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-2010.09926 #region-us \n", "### Dataset Summary\n\n\nPUBHEALTH is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore each instance in the dataset has an explanation text field. The explanation is a justification for which the claim has been assigned a particular veracity label.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe text in the dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe following is an example instance of the PUBHEALTH dataset:", "### Data Fields\n\n\nMentioned above in data instances.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was created to explore fact-checking of difficult to verify claims i.e., those which require expertise from outside of the journalistics domain, in this case biomedical and public health expertise.\n\n\nIt was also created in response to the lack of fact-checking datasets which provide gold standard natural language explanations for verdicts/labels.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset was retrieved from the following fact-checking, news reviews and news websites:", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nNot to our knowledge, but if it is brought to our attention that we are mistaken we will make the appropriate corrections to the dataset.\n\n\nConsiderations for Using the 
Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was created by Neema Kotonya, and Francesca Toni, for their research paper \"Explainable Automated Fact-Checking for Public Health Claims\" presented at EMNLP 2020.", "### Licensing Information\n\n\nMIT License", "### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset." ]
2ecd5e49f71424153082040dc6662f1821715c25
# Dataset Card for Hebrew Projectbenyehuda

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/projectbenyehuda/public_domain_dump
- **Repository:** https://github.com/projectbenyehuda/public_domain_dump
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This repository contains a dump of thousands of public domain works in Hebrew, from Project Ben-Yehuda, in plaintext UTF-8 files, with and without diacritics (nikkud), and in HTML files. The pseudocatalogue.csv file is a list of titles, authors, genres, and file paths, to help you process the dump.

The Releases tab contains a downloadable ZIP archive of the full release. The git repo can be used to track individual file changes, or for incremental updates. In the ZIPs, each format (plaintext, plaintext stripped of diacritics, and HTML) has a ZIP file containing one directory per author, with all the author's works under that directory.
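Since the dump ships the plaintext both with and without nikkud, you may occasionally want to strip the diacritics yourself (for example, to compare the two variants). Hebrew points are Unicode combining marks, so a minimal sketch using only Python's standard library suffices — note this helper is illustrative and not part of the dump's own tooling:

```python
import unicodedata

def strip_nikkud(text: str) -> str:
    """Remove nikkud (Hebrew diacritics, i.e. Unicode combining marks) from text."""
    # Normalize first so any precomposed characters expose their marks,
    # then drop every combining character.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))
```

For example, `strip_nikkud("שָׁלוֹם")` yields the unvocalized form `"שלום"`, and plain (unvocalized) text passes through unchanged.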
To request changes or improvements to this dump, file an issue against this repository. All these works are in the public domain, so you are free to make any use of them, and do not need to ask for permission. If you would like to give credit, please credit "Project Ben-Yehuda volunteers", and include a link to the site. We'd also love to hear about the uses you've made of this dump, as it encourages us to keep producing the dump. E-mail us with a brief description (and links, if/as appropriate) of your re-use, at [email protected]. There are 10078 files, 3181136 lines Data Annotation: ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Hebrew ## Dataset Structure ### Data Instances Sample: ``` { 'id': 10, 'url': 'https://raw.githubusercontent.com/projectbenyehuda/public_domain_dump/master/txt/p23/m10.txt', 'title': 'חצי-נחמה', 'authors': 'אחד העם', 'translators': '', 'original_language': '', 'genre': 'מאמרים ומסות', 'source_edition': '', 'text': '\n\n\n\t\n\tחצי-נחמה\n\t\n\n\n\n1\n\nבין כל הצרות שנתחדשו עלינו בעת האחרונה תעשׂה ביחוד רושם מעציב בלב כל איש ישׂראל התחדשות ‘עלילת־הדם’. העלילה הנתעבה הזאת, בכל יָשנה, היתה ותהיה תמיד בעינינו כחדשה, ומימי הבינים ועד עתה תצטין בפעולתה החזקה על רוח עמנו, לא רק במקום המעשׂה, כי אם גם בארצות רחוקות שהגיעה אליהן השמועה.\n\nאמרתי: ‘על רוח עמנו’, כי אמנם רואה אני מקור החזיון הזה לא בסבּות חיצוניות, כי אם עמוק ברוח העם. בימי הבינים, שהיה כלל ישׂראל במקרים כאלה רגיל לחשוב עצמו כעומד במשפט ביחד עם אותם האומללים שעלה עליהם הגורל להיות כפּרותו, – יש מקום אמנם לראות בזה רק תוצאת הסכנה הגשמית הגדולה להכלל כולו, שהיתה כרוכה אז באמת בעקב כל עלילה כזו. גם לפני חמשים שנה, בימי מנוחה ושלוה, שעוררה עלילת דמשׂק רעש גדול כל־כך בארצות המערב, עדיין יש מקום לאמר, כי היתה בזה, להפך, יד הקנאה הגדולה לכבודם וזכויותיהם ששׂררה אז בלבות אחינו המערביים, אשר זה מעט יצאו מעבדות לחרות. 
אך בימינו אלה הרי מצד אחד אין הסכנה הגשמית גדולה עוד הרבה, ביחוד לקהלות רחוקות, ומצד אחר כבר הורגלנו לשמוע חרפתנו בקור רוח וקנאת כבודנו לא תאכלנו עוד, ואם בכל זאת גם עתה עודנו מתעוררים ומתנודדים בחזקה לשמע ‘עלילת־דם’, ורגש הכלל יתפרץ החוצה מכל עברים להשליך מעליו את החלאה הזאת, – אות הוא, כי לא הפחד ולא הכבוד החיצוני הם המניעים לזה, כי אם רוח העם הוא המרגיש פה את קלונו והוא זה המתעורר והמעורר; כי אעפ"י שבכל יתר הדברים כבר הביאונו צרותינו לאותו המצב שעליו אמר הנשׂיא החכם בימי קדם: ‘אין בשׂר המת מרגיש באיזמל’, – הנה פה אין ‘האיזמל’ חותך את ‘הבשׂר’ בלבד, כי אם עד הנפש יגע…\n\nאבל – ‘אין רע בלא טוב’, כלומר, בלא לקח טוב. גם הרע הגדול הזה שאנו עסוקים בו אינו ריק מלקח טוב, ואנחנו, אשר לא אדונים אנחנו לגורלנו וגם את הטוב גם את הרע נקבל מן החוץ שלא בטובתנו, ראוי לנו לבקש ברעותינו תמיד את התועלת הלמודית הצפונה בהן, והיתה לנו זאת, לפחות, חצי נחמה.\n\n\n\nאחד הכוחות היותר גדולים בחיי החברה הוא – ‘ההסכמה הכללית’. היו ימים שגם הפלוסופים ראו בהסכמה זו מופת נאמן על הדבר המוסכם ונתנו לה מקום בתוך שאר מופתיהם על מציאות האלהות. עתה אמנם יודעים הפלוסופים , שאין שקר ואין אולת אשר לא תוכל לבוא עליו ‘ההסכמה הכללית’, אם אך תנאי החיים נאותים לזה. אבל רק הפלוסופים יודעים זאת, ובעיני ההמון עוד גם עתה אין אַבטוֹריטט גדול מן ‘ההסכמה’: אם ‘כל העולם’ מאמינים שהדבר כן, בודאי כן הוא; ואם אני איני מבינו, אחרים מבינים; ואם אני רואה כעין סתירה לו, הרי ‘הכל’ רואים גם כן ואעפ"כ מאמינים, וכי חכם אני מכל העולם? – זה הוא בקירוב מהלך הרעיונות של האיש הפשוט, בדעת או בלי דעת ברורה, ומתוך כך הוא מסכים גם מצדו ונעשׂה בעצמו חלק מן ‘ההסכמה’.\n\nוכל־כך גדול כוח ‘ההסכמה’, עד שעל הרוב לא יוכל האדם למַלט נפשו מפעולתה גם כשהוא עצמו הוא ‘הדבר המוסכם’. אם ‘כל העולם’ אומרים על פלוני שגדול הוא בחכמה או ביראה, שיש בו מדה פלונית, טובה או רעה, – סופו להסכים לזה גם בעצמו, אע"פ שמתחלה לא מצא בנפשו אותו היתרון או החסרון שאחרים מיחסים לו. ולא זו בלבד אלא שההסכמה הזאת מצד ‘המוסכם’ עצמו פועלת מעט מעט על תכונת רוחו עד שמקרבתו באמת (או, לפחות, מולידה בו נטיה להתקרב) אל המצב ההוא שרואה בו ‘כל העולם’. 
על כן יזהירו הפדגוגים בצדק, לבלתי עורר את הילדים על מגרעותיהם המוסריות בראשית התפתחותן, וכל שכּן לבלתי יחס להם מגרעות שאין בהם, כי על ידי זה אפשר שנחזק בלבם את הראשונות ונוליד בם נטיה להאחרונות.\n\nואולם, הדבר מובן, כי ‘כל העולם’ אינו אחד לכל אחד. האדם רואה ‘עולמו’ רק באותה החברה שהוא חושב עצמו לחלק ממנה ורואה באישיה אנשים הקרובים לו מאיזה צד; אבל אין אדם חושב למאומה הסכמת אנשים שרוחם זרה לו לגמרי, שאינו מרגיש בנפשו שום יחס פנימי בינו ובינם. ככה אין האוֹרתוֹדוֹכּסים והמשׂכילים שלנו שׂמים לב כלל אלו להסכמתם של אלו, אף בדברים שאינם נוגעים לאמונה ודת, ושׂחקם ולעגם של אלו על אלו אינו עושׂה שום רושם בלבם של שניהם, לפי שכּל אחת משתי הכּתּות רואה את חברתה כאלו אינה. ואולם כשתנאי החיים מכריחים את בני הכתות השונות להמצא במשׂא ומתן תמידי זה עם זה והם מתרגלים לראות זה בזה קודם כל את האדם, – אז יתרחב ‘עולמם’ והשקפותיהם סובלות שנויים רבים על פי הסכמת ‘העולם’ במובנו החדש.\n\n\n\nלפיכך, בדורות שעברו, כשהיו אבותינו מאמינים בפשטו של ‘אתה בחרתנו’, לא היתה החרפּה שחרפום האומות פועלת כלל על טוהר נפשם פנימה. הם ידעו את ערכם ולא התפעלו עד מה מן ‘ההסכמה הכללית’ אשר מחוץ להם, בהיות כל חברת ‘המסכימים’ נחשבת בעיניהם למין מיוחד של בריות זרות להם ושונות מהם שנוי עצמי, בלי כל יחס וכל דמיון בינם ובינן. אז היה היהודי יכול לשמוע במנוחת לב כל המגרעות המוסריות והחטאים המעשׂיים שטפלה עליו הסכמת העמים, מבלי להרגיש בנפשו שום בושה או שפלוּת פנימית. כי מה לו ולמחשבות ‘הנכרים’ עליו ועל ערכּוֹ? לוּ רק יתנו לו לישב בשלוה! – אבל בדור הזה אין הדבר כן, עתה ‘עולמנו’ נתרחב הרבה, וההסכמה האירופּית פועלת עלינו בחזקה בכל ענפי החיים. ולפי שאין אנו מוציאים עוד את ‘הכל’ מן הכלל, לכן נתפעל בעל כרחנו ממה ש’הכל\' מוציאים אותנו מן הכלל, סופר אחד רוסי שאל באלו הימים בתמימוּת: אחר שכל העולם שׂונאים את היהודים, וכי אפשר לאמור, שכל העולם חייבים והיהודים זכאים? – ושאלה כזו מתגנבת עתה גם אל לב רבים מאחינו: וכי אפשר לאמור, שכל אותן התכונות הנשחתות והמעשׂים הרעים שכל העולם מיחס ליהודים אינם אלא ‘בדותא’?\n\nוהספק הזה, מכיון שנתעורר, מוצא לו מחיה בנקל באותם ההיקשים המוטעים ‘מן הפרט אל הכלל’ הרגילים מאד אצל המון בני האדם. 
הספור הידוע על דבר נוסע אחד, שבא לאחת הערים ונזדמן לאכסניא שהיה בה משרת כבד־פה, וכתב בפנקסו: בעיר פלונית משרתי האכסניות הם כבדי־פה, – הספור הזה מצייר בצורה של התוּל דרכי־ההגיון של ההמון ברוב משפטיו הכלליים. כל החזיונות הנראים באיזה דבר פרטי רגיל ההמון ליחס אל הכלל שהדבר ההוא מתחשב עליו לפי שמו התמידי, מבלי להתבונן, כי ‘פרט’ אחד יוכל להתחשב על ‘כללים’ רבים ביחד, כלומר, להיות שוּתף בתכוּנה אחת עם פרטיו של כלל אחד ובתכונה אחרת עם פרטיו של כלל אחר, בעוד שהשם הנקרא עליו מציין רק את התיחסותו לאחד הכללים באחד מצדדיו, לא בכולם. – על משפטים ממין זה תוכל להשען, וגם תשען באמת, ההסכמה הכללית ביחוסה אלינו: פלוני ופלוני הם יהודים לפי שמם ורמאים לפי תכוּנתם; שמע מינה, שהיהודים הם לפי תכונתם רמאים. ההגיון האמתי ישיב אמנם על זה, כי אף אם היו באמת כל היהודים בדורנו רמאים, אין מזה עוד ראיה, שהיהודים הם רמאים, כלומר, שתכוּנת הרמאוּת הנמצאת בכל יהודי נמצאת בו מצד התיחסותו אל הכלל ‘יהודים’ ולא מצד איזה כלל אחר (למשל, כלל ‘סוחרים’), שגם אליו מתיחס היהודי בתור פרט, ביחד עם אחרים אשר דבר אין להם עם הכלל ‘יהודים’. וכדי לברר הדבר, צריך לבדוֹק תחלה אותם ‘האחרים’ המשתתפים יחד עם היהודים בכללים אחרים. ורק אחר שנמצא על ידי בדיקה זו, שאין תכוּנת הרמאוּת מצויה בשום ‘כלל’ אחר המשותף ליהודים ולאחרים, – רק אז תהיה לנו צדקה לחרוץ משפט, כי היהדות היא אֵם הרמאוּת. – אבל, כאמור, אין דרכם של בני אדם להעמיק בהגיון, ואין אנו יכולים לדרוש כזאת גם מהמון בני עמנו. הם שומעים את המשפט החרוץ של ההסכמה הכללית ורואים עם זה, שרבים בקרבּנוּ כך הם באמת כמו שאומרת ההסכמה, ובזה די להם, והרי הם מתחילים להסכים גם בעצמם. וככה עוברות ‘תכוּנות היהודים’ כמטבע כשרה מיד ליד, מן ההסכמה החיצונית של העמים אל ההסכמה הפנימית בקרב עמנו, רק עם ההבדל הזה, שהעמים מונים את תכוּנותינו הרעות אחת לאחת בקול ענוֹת גבוּרה ולעג השאננים, ואנחנו עונים אחריהם מלה במלה בקול דממה דקה והצטדקות חלושה; הם ממשילים אותנו לכלי חרס, שאין לו תקנה אלא שבירה, ואנחנו ממשילים עצמנו לכלי מתכת, שאפשר לו בהגעלה ולבּוּן…\n\nהמצב הזה, אם יאריך ימים, יוכל לגרום לנו נזק מוסרי גדול. אין דבר מסוכּן לגוי ולאדם כהודאה על חטאים שאין בו. 
מי שחטא באמת, הרי שערי תשובה לא ננעלו, וברצונו הטוב יכול להסיר חלאתו מעליו. אבל מי שאחרים הביאוהו לחשוֹד עצמו במה שאין בו, איך יוכל להטהר בעיני עצמו? מצד אחד מאמין הוא לדברי האומרים לו: טול קורה מבין עיניך, ומצד אחר מרגיש הוא, שאינו יכול לטול את הקורה מבין עיניו, אחר שאינה באמת אלא בדמיון, והרי הוא במצב אותם המונומַנים הידועים, שמאיזו סבּה באו לידי אמונה, כי משׂא כבד תלוי להם בחוטמם מבלי שיוכלו להסירו. ולא עוד אלא שלפעמים תביא אמונה זו את האיש הפרטי להשתתף באותה המדה המגוּנה שלפי אמונתו היא קנין הכלל כולו, אעפ“י שהוא עצמו מצד פרטיותו אינו נוטה כלל לזה. אין ספק, למשל, כי בקרב העם שיצאו מתוכו אנשים כהרמב”ם נמצאים גם עתה בעלי דעה מיושבת ואוהבי סדר ושיטה בכל דבר, והם, בקחתם חלק בעבודת הצבּוּר, היו יכולים לתת בה את רוחם ולפעול גם על יתר העובדים. אבל מה נעשׂה, וכל גזרה ‘ההסכמה’, ששׂנאת הסדרים היא תכוּנה יהודית, וכבר הסכמנו גם אנחנו להסכמה זו (אעפ"י שעוד לא נתברר, אם התכוּנה הזאת, המצויה באמת בחלק גדול מעמנו, מתיחסת אל הכלל ‘יהודים’, או אולי – מה שיותר מתקבל על הלב – אל הכלל ‘חניכי־החדר’). ועל כן תרפינה ידי אוהבי הסדר, בהאמינם, כי אין עצה ואין תבונה נגד תכוּנת העם. ואם פטריוטים הם, יעקרו גם מלבם את האהבה לסדרים, המתנגדת לרוח עמם, ויעשׂו גם הם את מעשׂיהם כראוי ליהודים אמתיים…\n\n\n\nצריך איפוא לבקש איזה אמצעי, איך להוציא את עצמנו מתחת השפעת ‘ההסכמה הכללית’ בנוגע לתכוּנות ישׂראל וערכו המוסרי, כדי שלא נהיה בזויים בעיני עצמנו ולא נחשוב, שבאמת גרועים אנחנו מכל בני האדם תחת השמש, וכדי שלא נבוא עי"ז להיות ברבות הימים בפועל מה שאין אנו עתה אלא בדמיון.\n\nואת האמצעי הזה נותנת לנו ‘ההסכמה הכללית’ עצמה על ידי עלילת־הדם. העלילה הזאת היא היחידה בין כל רעותיה אשר בה לא תוכל ההסכמה להביא גם אותנו לידי ספק, אם באמת ‘כל העולם חייבים ואנחנו זכאים’, בהיותה מיוסדת כולה על שקר מוחלט ואין לה משען באיזה היקש מוטעה ‘מן הפרט על הכלל’. כל איש ישׂראל שנתחנך בתוך עמו יודע בבירור גמור, שאין בתוך כלל ישׂראל אף פרט אחד האוכל דם אדם לשם שמים. 
ואת הידיעה הברורה הזאת משגיאת ‘ההסכמה הכללית’, המתחדשת בלבנו מזמן לזמן על ידי התחדשות עלילת־הדם, צריכים אנו לשמור תמיד בזכרוננו, והיא תעזור לנו לעקור מלבנו את הנטיה להכּנע מפני האַבטוֹריטט של ‘כל העולם’ גם ביתר הדברים. יאמר כל העולם מה שיאמר על דבר פחיתוּת ערכּנוּ המוסרי, – אנחנו יודעים, כי ‘ההסכמה’ הזאת נשענת רק על הגיון המוני, בלי כל יסוד מדעי אמתּי. כי מי בא בסוד עמקי רוחנו וראה את ‘היהודי’ כמו שהוא מצד עצמו? מי שקל זה לעומת זה יהודים ושאינם יהודים הדומים אלו לאלו בכל יתר ‘הכללים’: סוחרים לעומת סוחרים, נרדפים לעומת נרדפים, רעבים לעומת רעבים וכו\'. – מי שקל כל אלה במאזני החכמה האמתּית ומצא את הכף מַכרעת לאחד הצדדים?\n\n‘וכי אפשר שכּל העולם חייבים והיהודים זכאים?’\n\nאפשר ואפשר, ועלילת־הדם תוכיח. פה הרי היהודים זכאים וטהורים כמלאכי השרת: יהודי ודם! היש שני הפכים גדולים מאלו? – ואף על פי כן…\n\n\n\nה\' תשרי תרנ"ג\n\n\n\n\n\n\nנדפס ב‘המליץ’ י“ד תשרי תרנ”ג. \xa0↩\n\n\n\n\n\n\n\n\n\n\nאת הטקסט לעיל הפיקו מתנדבי פרויקט בן־יהודה באינטרנט. הוא זמין תמיד בכתובת הבאה:https://benyehuda.org/read/10' } ``` ### Data Fields - `authors` - `genre` - `id` - `original_language` - `source_edition` - `text` - `title` - `translators` - `url` ### Data Splits | | train | |--------|------:| | corpus | 10078 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Researchers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

### Citation Information

```
@article{,
  author = {},
  title = {Public domain texts from Project Ben-Yehuda},
  journal = {},
  url = {https://github.com/projectbenyehuda/public_domain_dump},
  year = {2020},
}
```

### Contributions

Thanks to [@imvladikon](https://github.com/imvladikon) for adding this dataset.
hebrew_projectbenyehuda
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:he", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["he"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Hebrew Projectbenyehuda", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "translators", "dtype": "string"}, {"name": "original_language", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "source_edition", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 318732537, "num_examples": 10078}], "download_size": 317749152, "dataset_size": 318732537}}
2024-01-18T11:05:18+00:00
[]
[ "he" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Hebrew #license-mit #region-us
Dataset Card for Hebrew Projectbenyehuda ======================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This repository contains a dump of thousands of public domain works in Hebrew, from Project Ben-Yehuda, in plaintext UTF-8 files, with and without diacritics (nikkud), and in HTML files. The URL file is a list of titles, authors, genres, and file paths, to help you process the dump. The Releases tab contains a downloadable ZIP archive of the full release. The git repo can be used to track individual file changes, or for incremental updates. In the ZIPs, each format (plaintext, plaintext stripped of diacritics, and HTML) has a ZIP file containing one directory per author, with all the author's works under that directory. To request changes or improvements to this dump, file an issue against this repository. All these works are in the public domain, so you are free to make any use of them, and do not need to ask for permission. If you would like to give credit, please credit "Project Ben-Yehuda volunteers", and include a link to the site. We'd also love to hear about the uses you've made of this dump, as it encourages us to keep producing the dump. E-mail us with a brief description (and links, if/as appropriate) of your re-use, at editor@URL.
There are 10078 files, 3181136 lines Data Annotation: ### Supported Tasks and Leaderboards ### Languages Hebrew Dataset Structure ----------------- ### Data Instances Sample: ### Data Fields * 'authors' * 'genre' * 'id' * 'original\_language' * 'source\_edition' * 'text' * 'title' * 'translators' * 'url' ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Researchers. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ### Contributions Thanks to @imvladikon for adding this dataset.
[ "### Dataset Summary\n\n\nThis repository contains a dump of thousands of public domain works in Hebrew, from Project Ben-Yehuda, in plaintext UTF-8 files, with and without diacritics (nikkud), and in HTML files. The URL file is a list of titles, authors, genres, and file paths, to help you process the dump.\n\n\nThe Releases tab contains a downloadable ZIP archive of the full release. The git repo can be used to track individual file changes, or for incremenetal updates. In the ZIPs, each format (plaintext, plaintext stripped of diacritics, and HTML) has a ZIP file containing one directory per author, with all the author's works under that directory.\n\n\nTo request changes or improvements to this dump, file an issue against this repository.\n\n\nAll these works are in the public domain, so you are free to make any use of them, and do not need to ask for permission.\n\n\nIf you would like to give credit, please credit \"Project Ben-Yehuda volunteers\", and include a link to the site. We'd also love to hear about the uses you've made of this dump, as it encourages us to keep producing the dump. 
E-mail us with a brief description (and links, if/as appropriate) of your re-use, at editor@URL.\n\n\nThere are 10078 files, 3181136 lines\n\n\nData Annotation:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nSample:", "### Data Fields\n\n\n* 'authors'\n* 'genre'\n* 'id'\n* 'original\\_language'\n* 'source\\_edition'\n* 'text'\n* 'title'\n* 'translators'\n* 'url'", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nResearchers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT License\n\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.", "### Contributions\n\n\nThanks to @imvladikon for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Hebrew #license-mit #region-us \n", "### Dataset Summary\n\n\nThis repository contains a dump of thousands of public domain works in Hebrew, from Project Ben-Yehuda, in plaintext UTF-8 files, with and without diacritics (nikkud), and in HTML files. The URL file is a list of titles, authors, genres, and file paths, to help you process the dump.\n\n\nThe Releases tab contains a downloadable ZIP archive of the full release. The git repo can be used to track individual file changes, or for incremenetal updates. In the ZIPs, each format (plaintext, plaintext stripped of diacritics, and HTML) has a ZIP file containing one directory per author, with all the author's works under that directory.\n\n\nTo request changes or improvements to this dump, file an issue against this repository.\n\n\nAll these works are in the public domain, so you are free to make any use of them, and do not need to ask for permission.\n\n\nIf you would like to give credit, please credit \"Project Ben-Yehuda volunteers\", and include a link to the site. We'd also love to hear about the uses you've made of this dump, as it encourages us to keep producing the dump. 
E-mail us with a brief description (and links, if/as appropriate) of your re-use, at editor@URL.\n\n\nThere are 10078 files, 3181136 lines\n\n\nData Annotation:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nSample:", "### Data Fields\n\n\n* 'authors'\n* 'genre'\n* 'id'\n* 'original\\_language'\n* 'source\\_edition'\n* 'text'\n* 'title'\n* 'translators'\n* 'url'", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nResearchers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT License\n\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.", "### Contributions\n\n\nThanks to @imvladikon for adding this dataset." ]
952c9525954c1dac50d5f95945eb5585bb6464e7
# Dataset Card for HebrewSentiment

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew
- **Repository:** https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew
- **Paper:** http://aclweb.org/anthology/C18-1190
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

HebrewSentiment is a dataset consisting of 12,804 user comments to posts on the official Facebook page of Israel’s president, Mr. Reuven Rivlin.
In October 2015, we used the open software application Netvizz (Rieder, 2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014, the first three months of Rivlin’s presidency. While the president’s posts aimed at reconciling tensions and called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts was polarized between citizens who warmly thanked the president, and citizens that fiercely critiqued his policy. Of the 12,804 comments, 370 are neutral; 8,512 are positive, 3,922 negative.

Data Annotation:

### Supported Tasks and Leaderboards

Sentiment Analysis

### Languages

Hebrew

## Dataset Structure

tsv format: {hebrew_sentence}\t{sentiment_label}

### Data Instances

רובי הייתי רוצה לראות ערביה נישאת ליהודי 1
תמונה יפיפיה-שפו 0
חייבים לעשות סוג של חרם כשכתבים שונאי ישראל עולים לשידור צריכים להעביר לערוץ אחר ואז תראו מה יעשה כוחו של הרייטינג ( בהקשר לדבריה של רינה מצליח ) 2

### Data Fields

- `text`: The Modern Hebrew input text.
- `label`: The sentiment label. 0=positive, 1=negative, 2=off-topic.

### Data Splits

|                         | train | test |
|-------------------------|-------|------|
| HebrewSentiment (token) | 10243 | 2559 |
| HebrewSentiment (morph) | 10243 | 2559 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

User comments to posts on the official Facebook page of Israel’s president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder, 2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014, the first three months of Rivlin’s presidency.

#### Who are the source language producers?
[More Information Needed]

### Annotations

#### Annotation process

A trained researcher examined each comment and determined its sentiment value, where comments with an overall positive sentiment were assigned the value 0, comments with an overall negative sentiment were assigned the value 1, and comments that are off-topic to the post’s content were assigned the value 2. We validated the coding scheme by asking a second trained researcher to code the same data. There was substantial agreement between raters (N of agreements: 10623, N of disagreements: 2105, Cohen’s Kappa = 0.697, p = 0).

#### Who are the annotators?

Researchers

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

OMIlab, The Open University of Israel

### Licensing Information

MIT License

Copyright (c) 2018 OMIlab, The Open University of Israel

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

### Citation Information

@inproceedings{amram-etal-2018-representations,
    title = "Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew",
    author = "Amram, Adam and Ben David, Anat and Tsarfaty, Reut",
    booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
    month = aug,
    year = "2018",
    address = "Santa Fe, New Mexico, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/C18-1190",
    pages = "2242--2252",
    abstract = "This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task performance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings.
Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89{\%} accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.",
}

### Contributions

Thanks to [@elronbandel](https://github.com/elronbandel) for adding this dataset.
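The card describes a simple `{hebrew_sentence}\t{sentiment_label}` TSV layout with integer labels 0/1/2. A minimal parsing sketch, assuming that layout (the function name and label map below are illustrative, not part of the dataset's tooling; the 0/1/2 names follow the card's `class_label` metadata):

```python
# Illustrative parser for the {hebrew_sentence}\t{sentiment_label} rows
# described above. Names here are hypothetical, not from the dataset repo.
LABELS = {0: "pos", 1: "neg", 2: "off-topic"}

def parse_row(line: str) -> dict:
    # rpartition splits on the LAST tab, tolerating tabs inside the comment text
    text, _, label = line.rstrip("\n").rpartition("\t")
    return {"text": text, "label": int(label)}

row = parse_row("תמונה יפיפיה-שפו\t0")
print(LABELS[row["label"]])  # pos
```

Using `rpartition` rather than `split("\t")` is a deliberate choice: only the final field is guaranteed to be the label.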
hebrew_sentiment
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:he", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["he"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "paperswithcode_id": "modern-hebrew-sentiment-dataset", "pretty_name": "HebrewSentiment", "dataset_info": [{"config_name": "token", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "pos", "1": "neg", "2": "off-topic"}}}}], "splits": [{"name": "train", "num_bytes": 2159738, "num_examples": 10244}, {"name": "test", "num_bytes": 540883, "num_examples": 2560}], "download_size": 2593643, "dataset_size": 2700621}, {"config_name": "morph", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "pos", "1": "neg", "2": "off-topic"}}}}], "splits": [{"name": "train", "num_bytes": 2258128, "num_examples": 10221}, {"name": "test", "num_bytes": 571401, "num_examples": 2555}], "download_size": 2722672, "dataset_size": 2829529}]}
2024-01-18T11:05:19+00:00
[]
[ "he" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Hebrew #license-mit #region-us
Dataset Card for HebrewSentiment ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: * Point of Contact: ### Dataset Summary HebrewSentiment is a data set consists of 12,804 user comments to posts on the official Facebook page of Israel’s president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder, 2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014, the first three months of Rivlin’s presidency.2 While the president’s posts aimed at reconciling tensions and called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts was polarized between citizens who warmly thanked the president, and citizens that fiercely critiqued his policy. Of the 12,804 comments, 370 are neutral; 8,512 are positive, 3,922 negative. Data Annotation: ### Supported Tasks and Leaderboards Sentiment Analysis ### Languages Hebrew Dataset Structure ----------------- tsv format: {hebrew\_sentence}\t{sentiment\_label} ### Data Instances רובי הייתי רוצה לראות ערביה נישאת ליהודי 1 תמונה יפיפיה-שפו 0 חייבים לעשות סוג של חרם כשכתבים שונאי ישראל עולים לשידור צריכים להעביר לערוץ אחר ואז תראו מה יעשה כוחו של הרייטינג ( בהקשר לדבריה של רינה מצליח ) 2 ### Data Fields * 'text': The modern hebrew inpput text. * 'label': The sentiment label. 0=positive , 1=negative, 2=off-topic. 
### Data Splits train: HebrewSentiment (token), test: 10243 train: HebrewSentiment (morph), test: 10243 Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization User comments to posts on the official Facebook page of Israel’s president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder, 2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014, the first three months of Rivlin’s presidency. #### Who are the source language producers? ### Annotations #### Annotation process A trained researcher examined each comment and determined its sentiment value, where comments with an overall positive sentiment were assigned the value 0, comments with an overall negative sentiment were assigned the value 1, and comments that are off-topic to the post’s content were assigned the value 2. We validated the coding scheme by asking a second trained researcher to code the same data. There was substantial agreement between raters (N of agreements: 10623, N of disagreements: 2105, Coehn’s Kappa = 0.697, p = 0). #### Who are the annotators? 
Researchers ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators OMIlab, The Open University of Israel ### Licensing Information MIT License Copyright (c) 2018 OMIlab, The Open University of Israel Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
@inproceedings{amram-etal-2018-representations, title = "Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew", author = "Amram, Adam and Ben David, Anat and Tsarfaty, Reut", booktitle = "Proceedings of the 27th International Conference on Computational Linguistics", month = aug, year = "2018", address = "Santa Fe, New Mexico, USA", publisher = "Association for Computational Linguistics", url = "URL pages = "2242--2252", abstract = "This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task perfromance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. 
Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89{%} accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.", } ### Contributions Thanks to @elronbandel for adding this dataset.
[ "### Dataset Summary\n\n\nHebrewSentiment is a data set consists of 12,804 user comments to posts on the official Facebook page of Israel’s\npresident, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder,\n2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014,\nthe first three months of Rivlin’s presidency.2 While the president’s posts aimed at reconciling tensions\nand called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts\nwas polarized between citizens who warmly thanked the president, and citizens that fiercely critiqued his\npolicy. Of the 12,804 comments, 370 are neutral; 8,512 are positive, 3,922 negative.\n\n\nData Annotation:", "### Supported Tasks and Leaderboards\n\n\nSentiment Analysis", "### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------\n\n\ntsv format:\n{hebrew\\_sentence}\\t{sentiment\\_label}", "### Data Instances\n\n\nרובי הייתי רוצה לראות ערביה נישאת ליהודי 1\nתמונה יפיפיה-שפו 0\nחייבים לעשות סוג של חרם כשכתבים שונאי ישראל עולים לשידור צריכים להעביר לערוץ אחר ואז תראו מה יעשה כוחו של הרייטינג ( בהקשר לדבריה של רינה מצליח ) 2", "### Data Fields\n\n\n* 'text': The modern hebrew inpput text.\n* 'label': The sentiment label. 0=positive , 1=negative, 2=off-topic.", "### Data Splits\n\n\ntrain: HebrewSentiment (token), test: 10243\ntrain: HebrewSentiment (morph), test: 10243\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nUser comments to posts on the official Facebook page of Israel’s\npresident, Mr. Reuven Rivlin. 
In October 2015, we used the open software application Netvizz (Rieder,\n2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014,\nthe first three months of Rivlin’s presidency.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nA trained researcher examined each comment and determined its sentiment value,\nwhere comments with an overall positive sentiment were assigned the value 0, comments with an overall\nnegative sentiment were assigned the value 1, and comments that are off-topic to the post’s content\nwere assigned the value 2. We validated the coding scheme by asking a second trained researcher to\ncode the same data. There was substantial agreement between raters (N of agreements: 10623, N of\ndisagreements: 2105, Coehn’s Kappa = 0.697, p = 0).", "#### Who are the annotators?\n\n\nResearchers", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nOMIlab, The Open University of Israel", "### Licensing Information\n\n\nMIT License\n\n\nCopyright (c) 2018 OMIlab, The Open University of Israel\n\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS 
OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\n@inproceedings{amram-etal-2018-representations,\ntitle = \"Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew\",\nauthor = \"Amram, Adam and\nBen David, Anat and\nTsarfaty, Reut\",\nbooktitle = \"Proceedings of the 27th International Conference on Computational Linguistics\",\nmonth = aug,\nyear = \"2018\",\naddress = \"Santa Fe, New Mexico, USA\",\npublisher = \"Association for Computational Linguistics\",\nurl = \"URL\npages = \"2242--2252\",\nabstract = \"This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task perfromance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. 
To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89{%} accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.\",\n}", "### Contributions\n\n\nThanks to @elronbandel for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Hebrew #license-mit #region-us \n", "### Dataset Summary\n\n\nHebrewSentiment is a data set consists of 12,804 user comments to posts on the official Facebook page of Israel’s\npresident, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder,\n2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014,\nthe first three months of Rivlin’s presidency.2 While the president’s posts aimed at reconciling tensions\nand called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts\nwas polarized between citizens who warmly thanked the president, and citizens that fiercely critiqued his\npolicy. Of the 12,804 comments, 370 are neutral; 8,512 are positive, 3,922 negative.\n\n\nData Annotation:", "### Supported Tasks and Leaderboards\n\n\nSentiment Analysis", "### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------\n\n\ntsv format:\n{hebrew\\_sentence}\\t{sentiment\\_label}", "### Data Instances\n\n\nרובי הייתי רוצה לראות ערביה נישאת ליהודי 1\nתמונה יפיפיה-שפו 0\nחייבים לעשות סוג של חרם כשכתבים שונאי ישראל עולים לשידור צריכים להעביר לערוץ אחר ואז תראו מה יעשה כוחו של הרייטינג ( בהקשר לדבריה של רינה מצליח ) 2", "### Data Fields\n\n\n* 'text': The modern hebrew inpput text.\n* 'label': The sentiment label. 0=positive , 1=negative, 2=off-topic.", "### Data Splits\n\n\ntrain: HebrewSentiment (token), test: 10243\ntrain: HebrewSentiment (morph), test: 10243\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nUser comments to posts on the official Facebook page of Israel’s\npresident, Mr. Reuven Rivlin. 
In October 2015, we used the open software application Netvizz (Rieder,\n2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014,\nthe first three months of Rivlin’s presidency.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nA trained researcher examined each comment and determined its sentiment value,\nwhere comments with an overall positive sentiment were assigned the value 0, comments with an overall\nnegative sentiment were assigned the value 1, and comments that are off-topic to the post’s content\nwere assigned the value 2. We validated the coding scheme by asking a second trained researcher to\ncode the same data. There was substantial agreement between raters (N of agreements: 10623, N of\ndisagreements: 2105, Coehn’s Kappa = 0.697, p = 0).", "#### Who are the annotators?\n\n\nResearchers", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nOMIlab, The Open University of Israel", "### Licensing Information\n\n\nMIT License\n\n\nCopyright (c) 2018 OMIlab, The Open University of Israel\n\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS 
OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\n@inproceedings{amram-etal-2018-representations,\ntitle = \"Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew\",\nauthor = \"Amram, Adam and\nBen David, Anat and\nTsarfaty, Reut\",\nbooktitle = \"Proceedings of the 27th International Conference on Computational Linguistics\",\nmonth = aug,\nyear = \"2018\",\naddress = \"Santa Fe, New Mexico, USA\",\npublisher = \"Association for Computational Linguistics\",\nurl = \"URL\npages = \"2242--2252\",\nabstract = \"This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task perfromance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. 
To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89{%} accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.\",\n}", "### Contributions\n\n\nThanks to @elronbandel for adding this dataset." ]
6b177ceac535d8df1ad03971babe7816d61c6186
# Dataset Card for HebrewSentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://thisworld.online/ - **Repository:** https://github.com/thisworld1/thisworld.online - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary HebrewThisWorld is a data set consisting of 2028 issues of the newspaper 'This World', edited by Uri Avnery and published between 1950 and 1989. Released under the AGPLv3 license. 
Data Annotation: ### Supported Tasks and Leaderboards Language modeling ### Languages Hebrew ## Dataset Structure csv file with "," delimiter ### Data Instances Sample: ```json { "issue_num": 637, "page_count": 16, "date": "1950-01-01", "date_he": "1 בינואר 1950", "year": "1950", "href": "https://thisworld.online/1950/637", "pdf": "https://olam.eu-central-1.linodeobjects.com/pdfs/B-I0637-D010150.pdf", "coverpage": "https://olam.eu-central-1.linodeobjects.com/pages/637/t-1.png", "backpage": "https://olam.eu-central-1.linodeobjects.com/pages/637/t-16.png", "content": "\nלפיד\nהנוער ־ בירושלים צילומים :\n\nב. רותנברג\n\nוזהו הלפיד\n...", "url": "https://thisworld.online/api/1950/637" } ``` ### Data Fields - `issue_num`: ID/Number of the issue - `page_count`: Page count of the current issue - `date`: Published date - `date_he`: Published date in Hebrew - `year`: Year of the issue - `href`: URL to the issue to scan/print etc. - `pdf`: URL to the issue to scan in pdf - `coverpage`: URL to coverpage - `backpage`: URL to backpage - `content`: text content of the issue - `url`: URL ### Data Splits | | train | |--------|------:| | corpus | 2028 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [thisworld.online](https://thisworld.online/) #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? Researchers ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information GNU AGPLv3+ This is free software, and you are welcome to redistribute it under certain conditions. 
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ### Citation Information https://thisworld.online/ ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq), [@imvladikon](https://github.com/imvladikon) for adding this dataset.
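Since the corpus ships as a comma-delimited csv with the columns listed under Data Fields, one row can be read with Python's standard-library `csv` module. A minimal sketch (the in-memory sample below is hypothetical, standing in for the real file; values follow the card's example record):

```python
import csv
import io

# Hypothetical in-memory sample standing in for the real csv file;
# column names follow the Data Fields section of the card.
sample = io.StringIO(
    "issue_num,page_count,date,date_he,year,href,pdf,coverpage,backpage,content,url\n"
    "637,16,1950-01-01,1 בינואר 1950,1950,"
    "https://thisworld.online/1950/637,"
    "https://olam.eu-central-1.linodeobjects.com/pdfs/B-I0637-D010150.pdf,"
    "https://olam.eu-central-1.linodeobjects.com/pages/637/t-1.png,"
    "https://olam.eu-central-1.linodeobjects.com/pages/637/t-16.png,"
    "...,https://thisworld.online/api/1950/637\n"
)

# DictReader keys each cell by the header row, matching the field names above.
row = next(csv.DictReader(sample))
```

Note that the `csv` module yields every cell as a string, so numeric fields such as `issue_num` and `page_count` need an explicit `int(...)` cast if used as numbers.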
hebrew_this_world
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:he", "license:agpl-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["he"], "license": ["agpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "HebrewSentiment", "dataset_info": {"features": [{"name": "issue_num", "dtype": "int64"}, {"name": "page_count", "dtype": "int64"}, {"name": "date", "dtype": "string"}, {"name": "date_he", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "href", "dtype": "string"}, {"name": "pdf", "dtype": "string"}, {"name": "coverpage", "dtype": "string"}, {"name": "backpage", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 678389435, "num_examples": 2028}], "download_size": 678322912, "dataset_size": 678389435}}
2024-01-18T11:05:21+00:00
[]
[ "he" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hebrew #license-agpl-3.0 #region-us
Dataset Card for HebrewSentiment ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary HebrewThisWorld is a data set consists of 2028 issues of the newspaper 'This World' edited by Uri Avnery and were published between 1950 and 1989. Released under the AGPLv3 license. Data Annotation: ### Supported Tasks and Leaderboards Language modeling ### Languages Hebrew Dataset Structure ----------------- csv file with "," delimeter ### Data Instances Sample: ### Data Fields * 'issue\_num': ID/Number of the issue * 'page\_count': Page count of the current issue * 'date': Published date * 'date\_he': Published date in Hebrew * 'year': Year of the issue * 'href': URL to the issue to scan/print etc. * 'pdf': URL to the issue to scan in pdf * 'coverpage': URL to coverpage * 'backpage': URL to backpage * 'content': text content of the issue * 'url': URL ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data URL #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
Researchers ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information GNU AGPLv3+ This is free software, and you are welcome to redistribute it under certain conditions. This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see <URL URL ### Contributions Thanks to @lhoestq, @imvladikon for adding this dataset.
[ "### Dataset Summary\n\n\nHebrewThisWorld is a data set consists of 2028 issues of the newspaper 'This World' edited by Uri Avnery and were published between 1950 and 1989. Released under the AGPLv3 license.\n\n\nData Annotation:", "### Supported Tasks and Leaderboards\n\n\nLanguage modeling", "### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------\n\n\ncsv file with \",\" delimeter", "### Data Instances\n\n\nSample:", "### Data Fields\n\n\n* 'issue\\_num': ID/Number of the issue\n* 'page\\_count': Page count of the current issue\n* 'date': Published date\n* 'date\\_he': Published date in Hebrew\n* 'year': Year of the issue\n* 'href': URL to the issue to scan/print etc.\n* 'pdf': URL to the issue to scan in pdf\n* 'coverpage': URL to coverpage\n* 'backpage': URL to backpage\n* 'content': text content of the issue\n* 'url': URL", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nURL", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\nResearchers", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nGNU AGPLv3+\n\n\nThis is free software, and you are welcome to redistribute it under certain conditions.\n\n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS 
FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\n\nYou should have received a copy of the GNU Affero General Public License\nalong with this program. If not, see <URL\n\n\nURL", "### Contributions\n\n\nThanks to @lhoestq, @imvladikon for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hebrew #license-agpl-3.0 #region-us \n", "### Dataset Summary\n\n\nHebrewThisWorld is a data set consists of 2028 issues of the newspaper 'This World' edited by Uri Avnery and were published between 1950 and 1989. Released under the AGPLv3 license.\n\n\nData Annotation:", "### Supported Tasks and Leaderboards\n\n\nLanguage modeling", "### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------\n\n\ncsv file with \",\" delimeter", "### Data Instances\n\n\nSample:", "### Data Fields\n\n\n* 'issue\\_num': ID/Number of the issue\n* 'page\\_count': Page count of the current issue\n* 'date': Published date\n* 'date\\_he': Published date in Hebrew\n* 'year': Year of the issue\n* 'href': URL to the issue to scan/print etc.\n* 'pdf': URL to the issue to scan in pdf\n* 'coverpage': URL to coverpage\n* 'backpage': URL to backpage\n* 'content': text content of the issue\n* 'url': URL", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nURL", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\nResearchers", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nGNU AGPLv3+\n\n\nThis is free software, and you are welcome to redistribute it under certain conditions.\n\n\nThis program is free software: you can redistribute it and/or modify\nit under the 
terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\n\nYou should have received a copy of the GNU Affero General Public License\nalong with this program. If not, see <URL\n\n\nURL", "### Contributions\n\n\nThanks to @lhoestq, @imvladikon for adding this dataset." ]
6002345709e0801764318f06bf06ce1e7d1a1fe3
# Dataset Card for "hellaswag" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://rowanzellers.com/hellaswag/](https://rowanzellers.com/hellaswag/) - **Repository:** [https://github.com/rowanz/hellaswag/](https://github.com/rowanz/hellaswag/) - **Paper:** [HellaSwag: Can a Machine Really Finish Your Sentence?](https://arxiv.org/abs/1905.07830) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 71.49 MB - **Size of the generated dataset:** 65.32 MB - **Total amount of disk used:** 136.81 MB ### Dataset Summary HellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019. 
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 71.49 MB - **Size of the generated dataset:** 65.32 MB - **Total amount of disk used:** 136.81 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "activity_label": "Removing ice from car", "ctx": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles. then", "ctx_a": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles.", "ctx_b": "then", "endings": "[\", the man adds wax to the windshield and cuts it.\", \", a person board a ski lift, while two men supporting the head of the per...", "ind": 4, "label": "3", "source_id": "activitynet~v_-1IBHYS3L-Y", "split": "train", "split_type": "indomain" } ``` ### Data Fields The data fields are the same among all splits. #### default - `ind`: a `int32` feature. - `activity_label`: a `string` feature. - `ctx_a`: a `string` feature. - `ctx_b`: a `string` feature. - `ctx`: a `string` feature. - `endings`: a `list` of `string` features. - `source_id`: a `string` feature. - `split`: a `string` feature. - `split_type`: a `string` feature. - `label`: a `string` feature. 
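One detail worth noting when consuming these fields: `label` is a string, not an integer, so scoring code casts it before indexing into `endings`. A minimal sketch (the `endings` values below are placeholders, since the example above is cropped):

```python
# Hypothetical record: the `endings` entries are placeholders, not real
# dataset rows, because the card's sample above is cropped.
example = {
    "ctx": "Then, the man writes over the snow covering the window of a car, "
           "and a woman wearing winter clothes smiles. then",
    "endings": ["ending 0", "ending 1", "ending 2", "ending 3"],
    "label": "3",  # stored as a string in the dataset
}

# Cast the string label to int to pick out the gold continuation.
gold = example["endings"][int(example["label"])]
completed = example["ctx"] + " " + gold
```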
### Data Splits | name |train|validation|test | |-------|----:|---------:|----:| |default|39905| 10042|10003| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information MIT https://github.com/rowanz/hellaswag/blob/master/LICENSE ### Citation Information ``` 
@inproceedings{zellers2019hellaswag, title={HellaSwag: Can a Machine Really Finish Your Sentence?}, author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin}, booktitle ={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, year={2019} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
Rowan/hellaswag
[ "language:en", "arxiv:1905.07830", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "paperswithcode_id": "hellaswag", "pretty_name": "HellaSwag", "dataset_info": {"features": [{"name": "ind", "dtype": "int32"}, {"name": "activity_label", "dtype": "string"}, {"name": "ctx_a", "dtype": "string"}, {"name": "ctx_b", "dtype": "string"}, {"name": "ctx", "dtype": "string"}, {"name": "endings", "sequence": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "split_type", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43232624, "num_examples": 39905}, {"name": "test", "num_bytes": 10791853, "num_examples": 10003}, {"name": "validation", "num_bytes": 11175717, "num_examples": 10042}], "download_size": 71494896, "dataset_size": 65200194}}
2023-09-28T13:49:00+00:00
[ "1905.07830" ]
[ "en" ]
TAGS #language-English #arxiv-1905.07830 #region-us
Dataset Card for "hellaswag" ============================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: HellaSwag: Can a Machine Really Finish Your Sentence? * Point of Contact: * Size of downloaded dataset files: 71.49 MB * Size of the generated dataset: 65.32 MB * Total amount of disk used: 136.81 MB ### Dataset Summary HellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 71.49 MB * Size of the generated dataset: 65.32 MB * Total amount of disk used: 136.81 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'ind': a 'int32' feature. * 'activity\_label': a 'string' feature. * 'ctx\_a': a 'string' feature. * 'ctx\_b': a 'string' feature. * 'ctx': a 'string' feature. * 'endings': a 'list' of 'string' features. * 'source\_id': a 'string' feature. * 'split': a 'string' feature. * 'split\_type': a 'string' feature. * 'label': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information MIT URL ### Contributions Thanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nHellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 71.49 MB\n* Size of the generated dataset: 65.32 MB\n* Total amount of disk used: 136.81 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'ind': a 'int32' feature.\n* 'activity\\_label': a 'string' feature.\n* 'ctx\\_a': a 'string' feature.\n* 'ctx\\_b': a 'string' feature.\n* 'ctx': a 'string' feature.\n* 'endings': a 'list' of 'string' features.\n* 'source\\_id': a 'string' feature.\n* 'split': a 'string' feature.\n* 'split\\_type': a 'string' feature.\n* 'label': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT URL", "### Contributions\n\n\nThanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
[ "TAGS\n#language-English #arxiv-1905.07830 #region-us \n", "### Dataset Summary\n\n\nHellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 71.49 MB\n* Size of the generated dataset: 65.32 MB\n* Total amount of disk used: 136.81 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'ind': a 'int32' feature.\n* 'activity\\_label': a 'string' feature.\n* 'ctx\\_a': a 'string' feature.\n* 'ctx\\_b': a 'string' feature.\n* 'ctx': a 'string' feature.\n* 'endings': a 'list' of 'string' features.\n* 'source\\_id': a 'string' feature.\n* 'split': a 'string' feature.\n* 'split\\_type': a 'string' feature.\n* 'label': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT URL", "### Contributions\n\n\nThanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
7a00892cd331d78a88c8c869d0224a5cdd149848
# Dataset Card for MMLU ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository**: https://github.com/hendrycks/test - **Paper**: https://arxiv.org/abs/2009.03300 ### Dataset Summary [Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300) by [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Collin Burns](http://collinpburns.com), [Steven Basart](https://stevenbas.art), Andy Zou, Mantas Mazeika, [Dawn Song](https://people.eecs.berkeley.edu/~dawnsong/), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/) (ICLR 2021). This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. 
To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. A complete list of tasks: ['abstract_algebra', 'anatomy', 'astronomy', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions'] ### Supported Tasks and Leaderboards | Model | Authors | Humanities | Social Science | STEM | Other | Average | |------------------------------------|----------|:-------:|:-------:|:-------:|:-------:|:-------:| | [UnifiedQA](https://arxiv.org/abs/2005.00700) | Khashabi et al., 2020 | 45.6 | 56.6 | 40.2 | 54.6 | 48.9 | [GPT-3](https://arxiv.org/abs/2005.14165) (few-shot) | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9 | [GPT-2](https://arxiv.org/abs/2005.14165) | Radford et al., 2019 | 32.8 | 33.3 | 30.2 | 33.1 | 32.4 | Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 ### Languages English ## Dataset Structure ### Data 
Instances An example from anatomy subtask looks as follows: ``` { "question": "What is the embryological origin of the hyoid bone?", "choices": ["The first pharyngeal arch", "The first and second pharyngeal arches", "The second pharyngeal arch", "The second and third pharyngeal arches"], "answer": "D" } ``` ### Data Fields - `question`: a string feature - `choices`: a list of 4 string features - `answer`: a ClassLabel feature ### Data Splits - `auxiliary_train`: auxiliary multiple-choice training questions from ARC, MC_TEST, OBQA, RACE, etc. - `dev`: 5 examples per subtask, meant for few-shot setting - `test`: there are at least 100 examples per subtask | | auxiliary_train | dev | val | test | | ----- | :------: | :-----: | :-----: | :-----: | | TOTAL | 99842 | 285 | 1531 | 14042 ## Dataset Creation ### Curation Rationale Transformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[MIT License](https://github.com/hendrycks/test/blob/master/LICENSE)

### Citation Information

If you find this useful in your research, please consider citing the test and also the [ETHICS](https://arxiv.org/abs/2008.02275) dataset it draws from:

```
@article{hendryckstest2021,
  title={Measuring Massive Multitask Language Understanding},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}

@article{hendrycks2021ethics,
  title={Aligning AI With Shared Human Values},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}
```

### Contributions

Thanks to [@andyzoujm](https://github.com/andyzoujm) for adding this dataset.
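The `dev` split is meant for few-shot prompting. As an illustration only (not the benchmark's official evaluation harness), here is a minimal sketch of how such a prompt could be assembled from records shaped like the card's `question` / `choices` / `answer` fields. The records and helper names (`format_example`, `build_prompt`) are made up for this example, and `answer` is treated as the ClassLabel index (0–3 mapping to A–D):

```python
# Sketch: building a few-shot multiple-choice prompt from dev-style records.
# Records below are placeholders shaped like the dataset's fields; they are
# NOT taken from the dataset itself.

LETTERS = "ABCD"

def format_example(record, include_answer=True):
    """Render one record as a question, lettered choices, and an answer line."""
    lines = [record["question"]]
    for letter, choice in zip(LETTERS, record["choices"]):
        lines.append(f"{letter}. {choice}")
    # ClassLabel stores the answer as an index; map it back to a letter.
    suffix = f" {LETTERS[record['answer']]}" if include_answer else ""
    lines.append("Answer:" + suffix)
    return "\n".join(lines)

def build_prompt(dev_records, test_record):
    """Concatenate answered dev examples, then the test question unanswered."""
    parts = [format_example(r) for r in dev_records]
    parts.append(format_example(test_record, include_answer=False))
    return "\n\n".join(parts)

dev = [{"question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": 1}]
test_record = {"question": "3 * 3 = ?", "choices": ["6", "9", "12", "27"], "answer": 1}
print(build_prompt(dev, test_record))
```

A model is then scored on which of A–D it continues the final `Answer:` line with.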
cais/mmlu
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "arxiv:2009.03300", "arxiv:2005.00700", "arxiv:2005.14165", "arxiv:2008.02275", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "mmlu", "pretty_name": "Measuring Massive Multitask Language Understanding", "language_bcp47": ["en-US"], "dataset_info": [{"config_name": "abstract_algebra", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 19328, "num_examples": 100}, {"name": "validation", "num_bytes": 2024, "num_examples": 11}, {"name": "dev", "num_bytes": 830, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160623559}, {"config_name": "anatomy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 33121, "num_examples": 135}, {"name": "validation", "num_bytes": 3140, "num_examples": 14}, {"name": "dev", "num_bytes": 967, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160638605}, {"config_name": "astronomy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 46771, "num_examples": 152}, {"name": "validation", "num_bytes": 5027, "num_examples": 16}, {"name": 
"dev", "num_bytes": 2076, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160655251}, {"config_name": "business_ethics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 33252, "num_examples": 100}, {"name": "validation", "num_bytes": 3038, "num_examples": 11}, {"name": "dev", "num_bytes": 2190, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160639857}, {"config_name": "clinical_knowledge", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 62754, "num_examples": 265}, {"name": "validation", "num_bytes": 6664, "num_examples": 29}, {"name": "dev", "num_bytes": 1210, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160672005}, {"config_name": "college_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 48797, "num_examples": 144}, {"name": "validation", "num_bytes": 4819, "num_examples": 16}, {"name": "dev", "num_bytes": 1532, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160656525}, {"config_name": "college_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], 
"splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 24708, "num_examples": 100}, {"name": "validation", "num_bytes": 2328, "num_examples": 8}, {"name": "dev", "num_bytes": 1331, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160629744}, {"config_name": "college_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 42641, "num_examples": 100}, {"name": "validation", "num_bytes": 4663, "num_examples": 11}, {"name": "dev", "num_bytes": 2765, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160651446}, {"config_name": "college_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 24711, "num_examples": 100}, {"name": "validation", "num_bytes": 2668, "num_examples": 11}, {"name": "dev", "num_bytes": 1493, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160630249}, {"config_name": "college_medicine", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 82397, "num_examples": 173}, {"name": "validation", "num_bytes": 7909, "num_examples": 22}, {"name": "dev", "num_bytes": 1670, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160693353}, 
{"config_name": "college_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 30181, "num_examples": 102}, {"name": "validation", "num_bytes": 3490, "num_examples": 11}, {"name": "dev", "num_bytes": 1412, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160636460}, {"config_name": "computer_security", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 27124, "num_examples": 100}, {"name": "validation", "num_bytes": 4549, "num_examples": 11}, {"name": "dev", "num_bytes": 1101, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160634151}, {"config_name": "conceptual_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 40709, "num_examples": 235}, {"name": "validation", "num_bytes": 4474, "num_examples": 26}, {"name": "dev", "num_bytes": 934, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160647494}, {"config_name": "econometrics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", 
"num_bytes": 46547, "num_examples": 114}, {"name": "validation", "num_bytes": 4967, "num_examples": 12}, {"name": "dev", "num_bytes": 1644, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160654535}, {"config_name": "electrical_engineering", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 25142, "num_examples": 145}, {"name": "validation", "num_bytes": 2903, "num_examples": 16}, {"name": "dev", "num_bytes": 972, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160630394}, {"config_name": "elementary_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 70108, "num_examples": 378}, {"name": "validation", "num_bytes": 8988, "num_examples": 41}, {"name": "dev", "num_bytes": 1440, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160681913}, {"config_name": "formal_logic", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 49785, "num_examples": 126}, {"name": "validation", "num_bytes": 6252, "num_examples": 14}, {"name": "dev", "num_bytes": 1757, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160659171}, {"config_name": "global_facts", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", 
"sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 18403, "num_examples": 100}, {"name": "validation", "num_bytes": 1865, "num_examples": 10}, {"name": "dev", "num_bytes": 1229, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160622874}, {"config_name": "high_school_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 109732, "num_examples": 310}, {"name": "validation", "num_bytes": 11022, "num_examples": 32}, {"name": "dev", "num_bytes": 1673, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160723804}, {"config_name": "high_school_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 58464, "num_examples": 203}, {"name": "validation", "num_bytes": 7092, "num_examples": 22}, {"name": "dev", "num_bytes": 1220, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160668153}, {"config_name": "high_school_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 44476, "num_examples": 100}, {"name": "validation", "num_bytes": 3343, 
"num_examples": 9}, {"name": "dev", "num_bytes": 2918, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160652114}, {"config_name": "high_school_european_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 270300, "num_examples": 165}, {"name": "validation", "num_bytes": 29632, "num_examples": 18}, {"name": "dev", "num_bytes": 11564, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160912873}, {"config_name": "high_school_geography", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 42034, "num_examples": 198}, {"name": "validation", "num_bytes": 4332, "num_examples": 22}, {"name": "dev", "num_bytes": 1403, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160649146}, {"config_name": "high_school_government_and_politics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 66074, "num_examples": 193}, {"name": "validation", "num_bytes": 7063, "num_examples": 21}, {"name": "dev", "num_bytes": 1779, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160676293}, {"config_name": "high_school_macroeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", 
"dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 117687, "num_examples": 390}, {"name": "validation", "num_bytes": 13020, "num_examples": 43}, {"name": "dev", "num_bytes": 1328, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160733412}, {"config_name": "high_school_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 54854, "num_examples": 270}, {"name": "validation", "num_bytes": 5765, "num_examples": 29}, {"name": "dev", "num_bytes": 1297, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160663293}, {"config_name": "high_school_microeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 75703, "num_examples": 238}, {"name": "validation", "num_bytes": 7553, "num_examples": 26}, {"name": "dev", "num_bytes": 1298, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160685931}, {"config_name": "high_school_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 59538, "num_examples": 151}, {"name": "validation", "num_bytes": 6771, "num_examples": 17}, {"name": "dev", 
"num_bytes": 1489, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160669175}, {"config_name": "high_school_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 159407, "num_examples": 545}, {"name": "validation", "num_bytes": 17269, "num_examples": 60}, {"name": "dev", "num_bytes": 1905, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160779958}, {"config_name": "high_school_statistics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 110702, "num_examples": 216}, {"name": "validation", "num_bytes": 9997, "num_examples": 23}, {"name": "dev", "num_bytes": 2528, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160724604}, {"config_name": "high_school_us_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 296734, "num_examples": 204}, {"name": "validation", "num_bytes": 31706, "num_examples": 22}, {"name": "dev", "num_bytes": 8864, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160938681}, {"config_name": "high_school_world_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", 
"2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 378617, "num_examples": 237}, {"name": "validation", "num_bytes": 45501, "num_examples": 26}, {"name": "dev", "num_bytes": 4882, "num_examples": 5}], "download_size": 166184960, "dataset_size": 161030377}, {"config_name": "human_aging", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 46098, "num_examples": 223}, {"name": "validation", "num_bytes": 4707, "num_examples": 23}, {"name": "dev", "num_bytes": 1008, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160653190}, {"config_name": "human_sexuality", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 32110, "num_examples": 131}, {"name": "validation", "num_bytes": 2421, "num_examples": 12}, {"name": "dev", "num_bytes": 1077, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160636985}, {"config_name": "international_law", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 53531, "num_examples": 121}, {"name": "validation", "num_bytes": 6473, "num_examples": 13}, {"name": "dev", "num_bytes": 2418, "num_examples": 5}], "download_size": 166184960, "dataset_size": 
160663799}, {"config_name": "jurisprudence", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 33986, "num_examples": 108}, {"name": "validation", "num_bytes": 3729, "num_examples": 11}, {"name": "dev", "num_bytes": 1303, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160640395}, {"config_name": "logical_fallacies", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 50117, "num_examples": 163}, {"name": "validation", "num_bytes": 5103, "num_examples": 18}, {"name": "dev", "num_bytes": 1573, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160658170}, {"config_name": "machine_learning", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 33880, "num_examples": 112}, {"name": "validation", "num_bytes": 3232, "num_examples": 11}, {"name": "dev", "num_bytes": 2323, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160640812}, {"config_name": "management", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": 
"test", "num_bytes": 20002, "num_examples": 103}, {"name": "validation", "num_bytes": 1820, "num_examples": 11}, {"name": "dev", "num_bytes": 898, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160624097}, {"config_name": "marketing", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 63025, "num_examples": 234}, {"name": "validation", "num_bytes": 7394, "num_examples": 25}, {"name": "dev", "num_bytes": 1481, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160673277}, {"config_name": "medical_genetics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 20864, "num_examples": 100}, {"name": "validation", "num_bytes": 3005, "num_examples": 11}, {"name": "dev", "num_bytes": 1089, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160626335}, {"config_name": "miscellaneous", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 147704, "num_examples": 783}, {"name": "validation", "num_bytes": 14330, "num_examples": 86}, {"name": "dev", "num_bytes": 699, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160764110}, {"config_name": "moral_disputes", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": 
"string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 107818, "num_examples": 346}, {"name": "validation", "num_bytes": 12420, "num_examples": 38}, {"name": "dev", "num_bytes": 1755, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160723370}, {"config_name": "moral_scenarios", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 374026, "num_examples": 895}, {"name": "validation", "num_bytes": 42338, "num_examples": 100}, {"name": "dev", "num_bytes": 2058, "num_examples": 5}], "download_size": 166184960, "dataset_size": 161019799}, {"config_name": "nutrition", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 92410, "num_examples": 306}, {"name": "validation", "num_bytes": 8436, "num_examples": 33}, {"name": "dev", "num_bytes": 2085, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160704308}, {"config_name": "philosophy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 80073, "num_examples": 311}, {"name": "validation", "num_bytes": 9184, "num_examples": 34}, {"name": "dev", 
"num_bytes": 988, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160691622}, {"config_name": "prehistory", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 89594, "num_examples": 324}, {"name": "validation", "num_bytes": 10285, "num_examples": 35}, {"name": "dev", "num_bytes": 1878, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160703134}, {"config_name": "professional_accounting", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 124550, "num_examples": 282}, {"name": "validation", "num_bytes": 14372, "num_examples": 31}, {"name": "dev", "num_bytes": 2148, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160742447}, {"config_name": "professional_law", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 1891762, "num_examples": 1534}, {"name": "validation", "num_bytes": 203519, "num_examples": 170}, {"name": "dev", "num_bytes": 6610, "num_examples": 5}], "download_size": 166184960, "dataset_size": 162703268}, {"config_name": "professional_medicine", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": 
"D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 217561, "num_examples": 272}, {"name": "validation", "num_bytes": 23847, "num_examples": 31}, {"name": "dev", "num_bytes": 3807, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160846592}, {"config_name": "professional_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 225899, "num_examples": 612}, {"name": "validation", "num_bytes": 29101, "num_examples": 69}, {"name": "dev", "num_bytes": 2267, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160858644}, {"config_name": "public_relations", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 28760, "num_examples": 110}, {"name": "validation", "num_bytes": 4566, "num_examples": 12}, {"name": "dev", "num_bytes": 1496, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160636199}, {"config_name": "security_studies", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 204844, "num_examples": 245}, {"name": "validation", "num_bytes": 22637, "num_examples": 27}, {"name": "dev", "num_bytes": 5335, "num_examples": 5}], "download_size": 166184960, "dataset_size": 
160834193}, {"config_name": "sociology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 66243, "num_examples": 201}, {"name": "validation", "num_bytes": 7184, "num_examples": 22}, {"name": "dev", "num_bytes": 1613, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160676417}, {"config_name": "us_foreign_policy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 28443, "num_examples": 100}, {"name": "validation", "num_bytes": 3264, "num_examples": 11}, {"name": "dev", "num_bytes": 1611, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160634695}, {"config_name": "virology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", "num_bytes": 38759, "num_examples": 166}, {"name": "validation", "num_bytes": 5463, "num_examples": 18}, {"name": "dev", "num_bytes": 1096, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160646695}, {"config_name": "world_religions", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "auxiliary_train", "num_bytes": 160601377, "num_examples": 99842}, {"name": "test", 
"num_bytes": 25274, "num_examples": 171}, {"name": "validation", "num_bytes": 2765, "num_examples": 19}, {"name": "dev", "num_bytes": 670, "num_examples": 5}], "download_size": 166184960, "dataset_size": 160630086}]}
2023-10-07T10:24:05+00:00
[ "2009.03300", "2005.00700", "2005.14165", "2008.02275" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-2009.03300 #arxiv-2005.00700 #arxiv-2005.14165 #arxiv-2008.02275 #region-us
Dataset Card for MMLU
=====================

Table of Contents
-----------------

* Table of Contents
* Dataset Description
  + Dataset Summary
  + Supported Tasks and Leaderboards
  + Languages
* Dataset Structure
  + Data Instances
  + Data Fields
  + Data Splits
* Dataset Creation
  + Curation Rationale
  + Source Data
  + Annotations
  + Personal and Sensitive Information
* Considerations for Using the Data
  + Social Impact of Dataset
  + Discussion of Biases
  + Other Known Limitations
* Additional Information
  + Dataset Curators
  + Licensing Information
  + Citation Information
  + Contributions

Dataset Description
-------------------

* Repository: URL
* Paper: URL

### Dataset Summary

Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).

This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability.

A complete list of tasks: ['abstract\_algebra', 'anatomy', 'astronomy', 'business\_ethics', 'clinical\_knowledge', 'college\_biology', 'college\_chemistry', 'college\_computer\_science', 'college\_mathematics', 'college\_medicine', 'college\_physics', 'computer\_security', 'conceptual\_physics', 'econometrics', 'electrical\_engineering', 'elementary\_mathematics', 'formal\_logic', 'global\_facts', 'high\_school\_biology', 'high\_school\_chemistry', 'high\_school\_computer\_science', 'high\_school\_european\_history', 'high\_school\_geography', 'high\_school\_government\_and\_politics', 'high\_school\_macroeconomics', 'high\_school\_mathematics', 'high\_school\_microeconomics', 'high\_school\_physics', 'high\_school\_psychology', 'high\_school\_statistics', 'high\_school\_us\_history', 'high\_school\_world\_history', 'human\_aging', 'human\_sexuality', 'international\_law', 'jurisprudence', 'logical\_fallacies', 'machine\_learning', 'management', 'marketing', 'medical\_genetics', 'miscellaneous', 'moral\_disputes', 'moral\_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional\_accounting', 'professional\_law', 'professional\_medicine', 'professional\_psychology', 'public\_relations', 'security\_studies', 'sociology', 'us\_foreign\_policy', 'virology', 'world\_religions']

### Supported Tasks and Leaderboards

### Languages

English

Dataset Structure
-----------------

### Data Instances

An example from anatomy subtask looks as follows:

### Data Fields

* 'question': a string feature
* 'choices': a list of 4 string features
* 'answer': a ClassLabel feature

### Data Splits

* 'auxiliary\_train': auxiliary multiple-choice training questions from ARC, MC\_TEST, OBQA, RACE, etc.
* 'dev': 5 examples per subtask, meant for few-shot setting
* 'test': there are at least 100 examples per subtask

Dataset Creation
----------------

### Curation Rationale

Transformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn.

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

Additional Information
----------------------

### Dataset Curators

### Licensing Information

MIT License

If you find this useful in your research, please consider citing the test and also the ETHICS dataset it draws from:

### Contributions

Thanks to @andyzoujm for adding this dataset.
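As a usage sketch for the fields and splits described above, the snippet below formats one record into the A/B/C/D prompt style commonly used for few-shot evaluation. The field names ('question', 'choices', 'answer') follow the Data Fields section; the record itself is a hypothetical illustration, not an actual dataset row:

```python
# Sketch: render one MMLU-style record as a multiple-choice prompt.
# The example record is hypothetical, not taken from the dataset.
LETTERS = ["A", "B", "C", "D"]

def format_example(record, include_answer=True):
    """Turn a {question, choices, answer} record into prompt text.

    'answer' is a ClassLabel index (0-3) mapping onto A-D.
    With include_answer=False the prompt ends at 'Answer:' so a
    model can complete it, as in few-shot evaluation.
    """
    lines = [record["question"]]
    for letter, choice in zip(LETTERS, record["choices"]):
        lines.append(f"{letter}. {choice}")
    suffix = f" {LETTERS[record['answer']]}" if include_answer else ""
    lines.append("Answer:" + suffix)
    return "\n".join(lines)

example = {
    "question": "What is 2 + 2?",
    "choices": ["3", "4", "5", "6"],
    "answer": 1,  # ClassLabel index 1 -> "B"
}
print(format_example(example))
```

In the few-shot setting, the five 'dev' records per subtask would each be rendered with `include_answer=True` and concatenated in front of a 'test' question rendered with `include_answer=False`.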
[ "### Dataset Summary\n\n\nMeasuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).\n\n\nThis is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability.\n\n\nA complete list of tasks: ['abstract\\_algebra', 'anatomy', 'astronomy', 'business\\_ethics', 'clinical\\_knowledge', 'college\\_biology', 'college\\_chemistry', 'college\\_computer\\_science', 'college\\_mathematics', 'college\\_medicine', 'college\\_physics', 'computer\\_security', 'conceptual\\_physics', 'econometrics', 'electrical\\_engineering', 'elementary\\_mathematics', 'formal\\_logic', 'global\\_facts', 'high\\_school\\_biology', 'high\\_school\\_chemistry', 'high\\_school\\_computer\\_science', 'high\\_school\\_european\\_history', 'high\\_school\\_geography', 'high\\_school\\_government\\_and\\_politics', 'high\\_school\\_macroeconomics', 'high\\_school\\_mathematics', 'high\\_school\\_microeconomics', 'high\\_school\\_physics', 'high\\_school\\_psychology', 'high\\_school\\_statistics', 'high\\_school\\_us\\_history', 'high\\_school\\_world\\_history', 'human\\_aging', 'human\\_sexuality', 'international\\_law', 'jurisprudence', 'logical\\_fallacies', 'machine\\_learning', 'management', 'marketing', 'medical\\_genetics', 'miscellaneous', 'moral\\_disputes', 'moral\\_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional\\_accounting', 'professional\\_law', 'professional\\_medicine', 'professional\\_psychology', 'public\\_relations', 'security\\_studies', 'sociology', 
'us\\_foreign\\_policy', 'virology', 'world\\_religions']", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from anatomy subtask looks as follows:", "### Data Fields\n\n\n* 'question': a string feature\n* 'choices': a list of 4 string features\n* 'answer': a ClassLabel feature", "### Data Splits\n\n\n* 'auxiliary\\_train': auxiliary multiple-choice training questions from ARC, MC\\_TEST, OBQA, RACE, etc.\n* 'dev': 5 examples per subtask, meant for few-shot setting\n* 'test': there are at least 100 examples per subtask\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nTransformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT License\n\n\nIf you find this useful in your research, please consider citing the test and also the ETHICS dataset it draws from:", "### Contributions\n\n\nThanks to @andyzoujm for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-2009.03300 #arxiv-2005.00700 #arxiv-2005.14165 #arxiv-2008.02275 #region-us \n", "### Dataset Summary\n\n\nMeasuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).\n\n\nThis is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability.\n\n\nA complete list of tasks: ['abstract\\_algebra', 'anatomy', 'astronomy', 'business\\_ethics', 'clinical\\_knowledge', 'college\\_biology', 'college\\_chemistry', 'college\\_computer\\_science', 'college\\_mathematics', 'college\\_medicine', 'college\\_physics', 'computer\\_security', 'conceptual\\_physics', 'econometrics', 'electrical\\_engineering', 'elementary\\_mathematics', 'formal\\_logic', 'global\\_facts', 'high\\_school\\_biology', 'high\\_school\\_chemistry', 'high\\_school\\_computer\\_science', 'high\\_school\\_european\\_history', 'high\\_school\\_geography', 'high\\_school\\_government\\_and\\_politics', 'high\\_school\\_macroeconomics', 'high\\_school\\_mathematics', 'high\\_school\\_microeconomics', 'high\\_school\\_physics', 'high\\_school\\_psychology', 'high\\_school\\_statistics', 'high\\_school\\_us\\_history', 'high\\_school\\_world\\_history', 'human\\_aging', 'human\\_sexuality', 'international\\_law', 'jurisprudence', 'logical\\_fallacies', 
'machine\\_learning', 'management', 'marketing', 'medical\\_genetics', 'miscellaneous', 'moral\\_disputes', 'moral\\_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional\\_accounting', 'professional\\_law', 'professional\\_medicine', 'professional\\_psychology', 'public\\_relations', 'security\\_studies', 'sociology', 'us\\_foreign\\_policy', 'virology', 'world\\_religions']", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from anatomy subtask looks as follows:", "### Data Fields\n\n\n* 'question': a string feature\n* 'choices': a list of 4 string features\n* 'answer': a ClassLabel feature", "### Data Splits\n\n\n* 'auxiliary\\_train': auxiliary multiple-choice training questions from ARC, MC\\_TEST, OBQA, RACE, etc.\n* 'dev': 5 examples per subtask, meant for few-shot setting\n* 'test': there are at least 100 examples per subtask\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nTransformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. 
To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT License\n\n\nIf you find this useful in your research, please consider citing the test and also the ETHICS dataset it draws from:", "### Contributions\n\n\nThanks to @andyzoujm for adding this dataset." ]
62656da6951bc2f0749778a32389c6b7268d68ae
# Dataset Card for HindEnCorp

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-625F-0
- **Repository:** https://lindat.mff.cuni.cz/repository/xmlui/
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2014/pdf/835_Paper.pdf
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

HindEnCorp parallel texts (sentence-aligned) come from the following sources:

Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).

Commentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi.

EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.

Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and an Agriculture domain parallel corpus.

For the current release, we are extending the parallel corpus using these sources:

Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominantly short stories and novels. There are seven Hindi texts in Intercorp. Unfortunately, only for three of them is the English translation available; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.

TED talks, held in various languages, primarily English, are equipped with transcripts, and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.

The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, starting from typesetting and punctuation over capitalization, spelling and word choice to sentence structure. A little bit of control could in principle be obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.

Launchpad.net is a software collaboration platform that hosts many open-source projects and also facilitates collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.

Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of a named entity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Hindi, English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

HindEnCorp columns:

- source identifier (where do the segments come from)
- alignment type (number of English segments - number of Hindi segments)
- alignment quality, which is one of the following:
  - "manual" ... for sources that were sentence-aligned manually
  - "implied" ... for sources where one side was constructed by translating segment by segment
  - float ... a value somehow reflecting the goodness of the automatic alignment; not really reliable
- English segment or segments
- Hindi segment or segments

Each of the segment fields is in the plaintext or export format as described above.

If there is more than one segment on a line (e.g. for lines with alignment type 2-1, where there are two English segments), then the segments are delimited with `<s>` in the text field.

### Data Splits

[More Information Needed]

## Dataset Creation

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

Daniel Pipes; Baker et al., 2002; Bojar et al., 2010; Čermák and Rosen, 2012; Birch et al., 2011; Post et al., 2012

### Annotations

#### Annotation process

The first part of the data, TIDES, was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Dataset provided for research purposes only. Please check dataset license for additional information.

## Additional Information

### Dataset Curators

Bojar, Ondřej; Diatka, Vojtěch; Straňák, Pavel; Tamchyna, Aleš; Zeman, Daniel

### Licensing Information

CC BY-NC-SA 3.0

### Citation Information

@InProceedings{hindencorp05:lrec:2014,
  author    = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
  title     = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine Translation}",
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)},
  year      = {2014},
  month     = {may},
  date      = {26-31},
  address   = {Reykjavik, Iceland},
  editor    = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn      = {978-2-9517408-8-4},
  language  = {english}
}

### Contributions

Thanks to [@rahul-art](https://github.com/rahul-art) for adding this dataset.
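The column layout described under Data Fields can be turned into records with a short parsing sketch. The column order and the `<s>` delimiter follow the card; the tab separator and the single spaces around `<s>` are assumptions about the plaintext export, not documented facts:

```python
# Sketch: parse one HindEnCorp plaintext line into its five columns.
# Assumption: columns are tab-separated; "<s>" (with surrounding
# spaces) separates multiple segments on one side, e.g. for a 2-1
# alignment with two English segments and one Hindi segment.
def parse_line(line):
    source, align_type, align_quality, english, hindi = line.rstrip("\n").split("\t")
    return {
        "source": source,                      # where the segments come from
        "alignment_type": align_type,          # e.g. "1-1", "2-1"
        "alignment_quality": align_quality,    # "manual", "implied", or a float score
        "en": english.split(" <s> "),
        "hi": hindi.split(" <s> "),
    }

# Synthetic illustration of a 2-1 line (not an actual corpus row):
row = parse_line("tides\t2-1\tmanual\tFirst sentence. <s> Second sentence.\tएक वाक्य।")
print(row["en"])
```

For "manual" and "implied" rows the quality field is categorical; for automatically aligned sources it would need `float()` conversion before any filtering by alignment score.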
hind_encorp
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:hi", "license:cc-by-nc-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "machine-generated"], "language": ["en", "hi"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "hindencorp", "pretty_name": "HindEnCorp", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "alignment_type", "dtype": "string"}, {"name": "alignment_quality", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "hi"]}}}], "splits": [{"name": "train", "num_bytes": 78945714, "num_examples": 273885}], "download_size": 23899723, "dataset_size": 78945714}}
2024-01-18T11:05:24+00:00
[]
[ "en", "hi" ]
TAGS #task_categories-translation #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-English #language-Hindi #license-cc-by-nc-sa-3.0 #region-us
# Dataset Card for HindEnCorp

## Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact:

### Dataset Summary

HindEnCorp parallel texts (sentence-aligned) come from the following sources:

Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).

Commentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi.

EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.

Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and an Agriculture domain parallel corpus.

For the current release, we are extending the parallel corpus using these sources:

Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominantly short stories and novels. There are seven Hindi texts in Intercorp. Unfortunately, only for three of them is the English translation available; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.

TED talks, held in various languages, primarily English, are equipped with transcripts, and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.

The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, starting from typesetting and punctuation over capitalization, spelling and word choice to sentence structure. A little bit of control could in principle be obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.

URL is a software collaboration platform that hosts many open-source projects and also facilitates collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.

Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of a named entity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.

### Supported Tasks and Leaderboards

### Languages

Hindi, English

## Dataset Structure

### Data Instances

### Data Fields

HindEnCorp columns:

- source identifier (where do the segments come from)
- alignment type (number of English segments - number of Hindi segments)
- alignment quality, which is one of the following:
  - "manual" ... for sources that were sentence-aligned manually
  - "implied" ... for sources where one side was constructed by translating segment by segment
  - float ... a value somehow reflecting the goodness of the automatic alignment; not really reliable
- English segment or segments
- Hindi segment or segments

Each of the segment fields is in the plaintext or export format as described above.

If there is more than one segment on a line (e.g. for lines with alignment type 2-1, where there are two English segments), then the segments are delimited with '<s>' in the text field.

### Data Splits

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

Daniel Pipes; Baker et al., 2002; Bojar et al., 2010; Čermák and Rosen, 2012; Birch et al., 2011; Post et al., 2012

### Annotations

#### Annotation process

The first part of the data, TIDES, was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

Dataset provided for research purposes only. Please check dataset license for additional information.

## Additional Information

### Dataset Curators

Bojar, Ondřej; Diatka, Vojtěch; Straňák, Pavel; Tamchyna, Aleš; Zeman, Daniel

### Licensing Information

CC BY-NC-SA 3.0

@InProceedings{hindencorp05:lrec:2014,
  author    = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
  title     = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine Translation}",
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)},
  year      = {2014},
  month     = {may},
  date      = {26-31},
  address   = {Reykjavik, Iceland},
  editor    = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn      = {978-2-9517408-8-4},
  language  = {english}
}

### Contributions

Thanks to @rahul-art for adding this dataset.
[ "# Dataset Card for HindEnCorp", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nHindEnCorp parallel texts (sentence-aligned) come from the following sources:\nTides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally col- lected for the DARPA-TIDES surprise-language con- test in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).\n\nCommentaries by Daniel Pipes contain 322 articles in English written by a journalist Daniel Pipes and translated into Hindi.\n\nEMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual sub- corpora, including both written and (for some lan- guages) spoken data for fourteen South Asian lan- guages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.\n\nSmaller datasets as collected by Bojar et al. 
(2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and Agriculture domain parallel corpus.\n\nFor the current release, we are extending the parallel corpus using these sources:\nIntercorp (Čermák and Rosen,2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominately short sto- ries and novels. There are seven Hindi texts in Inter- corp. Unfortunately, only for three of them the English translation is available; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.\n\nTED talks 3 held in various languages, primarily English, are equipped with transcripts and these are translated into 102 languages. There are 179 talks for which Hindi translation is available.\n\nThe Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects starting from typesetting and punctuation over capi- talization, spelling, word choice to sentence structure. A little bit of control could be in principle obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.\n\nURL is a software collaboration platform that hosts many open-source projects and facilitates also collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.\n\nOther smaller datasets. 
This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of the named entitity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.", "### Supported Tasks and Leaderboards", "### Languages\n\nHindi, English", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nHindEncorp Columns:\n\n- source identifier (where do the segments come from)\n- alignment type (number of English segments - number of Hindi segments)\n- alignment quality, which is one of the following:\n \"manual\" ... for sources that were sentence-aligned manually\n \"implied\" ... for sources where one side was constructed by translating\n segment by segment\n float ... a value somehow reflecting the goodness of the automatic\n alignment; not really reliable\n- English segment or segments\n- Hindi segment or segments\n\nEach of the segments field is in the plaintext or export format as described\nabove.\n\nIf there are more than one segments on a line (e.g. for lines with alignment\ntype 2-1 where there are two English segments), then the segments are delimited\nwith '<s>' in the text field.", "### Data Splits", "## Dataset Creation", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nDaniel Pipes,Baker,Bojar,\"Čermák and Rosen,2012\",\"Birch et al., 2011; Post et al., 2012\"", "### Annotations", "#### Annotation process\n\nthe 1st part of data TIDES was originally col- lected for the DARPA-TIDES surprise-language con- test in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nDataset provided for research purposes only. 
Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators\n\nBojar, Ondřej ; Diatka, Vojtěch ; Straňák, Pavel ; Tamchyna, Aleš ; Zeman, Daniel", "### Licensing Information\n\nCC BY-NC-SA 3.0\n\n\n\n@InProceedings{hindencorp05:lrec:2014,\n author = {Ond{\\v{r}}ej Bojar and Vojt{\\v{e}}ch Diatka\n and Pavel Rychl{\\'{y}} and Pavel Stra{\\v{n}}{\\'{a}}k\n and V{\\'{\\i}}t Suchomel and Ale{\\v{s}} Tamchyna and Daniel Zeman},\n title = \"{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine\n Translation}\",\n booktitle = {Proceedings of the Ninth International Conference on Language\n Resources and Evaluation (LREC'14)},\n year = {2014},\n month = {may},\n date = {26-31},\n address = {Reykjavik, Iceland},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and\n Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani\n and Asuncion Moreno and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-8-4},\n language = {english}\n}", "### Contributions\n\nThanks to @rahul-art for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-English #language-Hindi #license-cc-by-nc-sa-3.0 #region-us \n", "# Dataset Card for HindEnCorp", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nHindEnCorp parallel texts (sentence-aligned) come from the following sources:\nTides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally col- lected for the DARPA-TIDES surprise-language con- test in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).\n\nCommentaries by Daniel Pipes contain 322 articles in English written by a journalist Daniel Pipes and translated into Hindi.\n\nEMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual sub- corpora, including both written and (for some lan- guages) spoken data for fourteen South Asian lan- guages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). 
The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.\n\nSmaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and Agriculture domain parallel corpus.\n\nFor the current release, we are extending the parallel corpus using these sources:\nIntercorp (Čermák and Rosen,2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominately short sto- ries and novels. There are seven Hindi texts in Inter- corp. Unfortunately, only for three of them the English translation is available; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.\n\nTED talks 3 held in various languages, primarily English, are equipped with transcripts and these are translated into 102 languages. There are 179 talks for which Hindi translation is available.\n\nThe Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects starting from typesetting and punctuation over capi- talization, spelling, word choice to sentence structure. A little bit of control could be in principle obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.\n\nURL is a software collaboration platform that hosts many open-source projects and facilitates also collaborative localization of the tools. 
We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.\n\nOther smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of the named entitity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.", "### Supported Tasks and Leaderboards", "### Languages\n\nHindi, English", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nHindEncorp Columns:\n\n- source identifier (where do the segments come from)\n- alignment type (number of English segments - number of Hindi segments)\n- alignment quality, which is one of the following:\n \"manual\" ... for sources that were sentence-aligned manually\n \"implied\" ... for sources where one side was constructed by translating\n segment by segment\n float ... a value somehow reflecting the goodness of the automatic\n alignment; not really reliable\n- English segment or segments\n- Hindi segment or segments\n\nEach of the segments field is in the plaintext or export format as described\nabove.\n\nIf there are more than one segments on a line (e.g. 
for lines with alignment\ntype 2-1 where there are two English segments), then the segments are delimited\nwith '<s>' in the text field.", "### Data Splits", "## Dataset Creation", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nDaniel Pipes,Baker,Bojar,\"Čermák and Rosen,2012\",\"Birch et al., 2011; Post et al., 2012\"", "### Annotations", "#### Annotation process\n\nthe 1st part of data TIDES was originally col- lected for the DARPA-TIDES surprise-language con- test in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators\n\nBojar, Ondřej ; Diatka, Vojtěch ; Straňák, Pavel ; Tamchyna, Aleš ; Zeman, Daniel", "### Licensing Information\n\nCC BY-NC-SA 3.0\n\n\n\n@InProceedings{hindencorp05:lrec:2014,\n author = {Ond{\\v{r}}ej Bojar and Vojt{\\v{e}}ch Diatka\n and Pavel Rychl{\\'{y}} and Pavel Stra{\\v{n}}{\\'{a}}k\n and V{\\'{\\i}}t Suchomel and Ale{\\v{s}} Tamchyna and Daniel Zeman},\n title = \"{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine\n Translation}\",\n booktitle = {Proceedings of the Ninth International Conference on Language\n Resources and Evaluation (LREC'14)},\n year = {2014},\n month = {may},\n date = {26-31},\n address = {Reykjavik, Iceland},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and\n Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani\n and Asuncion Moreno and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-8-4},\n language = 
{english}\n}", "### Contributions\n\nThanks to @rahul-art for adding this dataset." ]
218ce687943a0da435d6d62751a4ab216be6cd40
# Dataset Card for Discourse Analysis dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/midas-research/hindi-discourse - **Paper:** [An Annotated Dataset of Discourse Modes in Hindi Stories](https://aclanthology.org/2020.lrec-1.149/) - **Point of Contact:** https://github.com/midas-research/MeTooMA ### Dataset Summary - The Hindi Discourse Analysis dataset is a corpus for analyzing discourse modes present in its sentences. - It contains sentences from stories written by 11 famous authors from the 20th Century. - 4-5 stories by each author have been selected which were available in the public domain resulting in a collection of 53 stories. - Most of these short stories were originally written in Hindi but some of them were written in other Indian languages and later translated to Hindi. 
The corpus contains a total of 10472 sentences belonging to the following categories: - Argumentative - Descriptive - Dialogic - Informative - Narrative ### Supported Tasks and Leaderboards - Discourse Analysis of Hindi. ### Languages Hindi ## Dataset Structure - The dataset is structured into JSON format. ### Data Instances {'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! ” “इसमें मालिक की क्या करनी है? ”', 'Discourse Mode': 'Dialogue'} ### Data Fields Sentence number, story number, sentence and discourse mode ### Data Splits - Train: 9983 ## Dataset Creation ### Curation Rationale - Present a new publicly available corpus consisting of sentences from short stories written in a low-resource language of Hindi having high quality annotation for five different discourse modes - argumentative, narrative, descriptive, dialogic and informative. - Perform a detailed analysis of the proposed annotated corpus and characterize the performance of different classification algorithms. ### Source Data - Source of all the data points in this dataset is Hindi stories written by famous authors of Hindi literature. #### Initial Data Collection and Normalization - All the data was collected from various Hindi websites. - We chose against crowd-sourcing the annotation process because we wanted to directly work with the annotators for qualitative feedback and to also ensure high quality annotations. - We employed three native Hindi speakers with college level education for the annotation task. - We first selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode. - Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ #### Who are the source language producers? 
Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ ### Annotations #### Annotation process - The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature. - The annotators are domain experts having degrees in advanced clinical psychology and gender studies. - They were provided a guidelines document with instructions about each task and its definitions, labels and examples. - They studied the document, worked a few examples to get used to this annotation task. - They also provided feedback for improving the class definitions. - The annotation process is not mutually exclusive, implying that the presence of one label does not mean the absence of the other one. #### Who are the annotators? - The annotators were three native Hindi speakers with college level education. - Please refer to the accompanying paper for a detailed annotation process. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset - As future work we would also like to use the presented corpus to see how it could be further used in certain downstream tasks such as emotion analysis, machine translation, textual entailment, and speech synthesis for improving the storytelling experience in Hindi. ### Discussion of Biases [More Information Needed] ### Other Known Limitations - We could not get the best performance using the deep learning model trained on the data, due to insufficient data for DL models. ## Additional Information Please refer to this link: https://github.com/midas-research/hindi-discourse ### Dataset Curators - If you use the corpus in a product or application, then please credit the authors and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi] (http://midas.iiitd.edu.in) appropriately. 
Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - If interested in commercial use of the corpus, send email to [email protected]. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your social media data. - if interested in a collaborative research project. ### Licensing Information - If you use the corpus in a product or application, then please credit the authors and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi] (http://midas.iiitd.edu.in) appropriately. ### Citation Information Please cite the following publication if you make use of the dataset: https://aclanthology.org/2020.lrec-1.149/ ``` @inproceedings{dhanwal-etal-2020-annotated, title = "An Annotated Dataset of Discourse Modes in {H}indi Stories", author = "Dhanwal, Swapnil and Dutta, Hritwik and Nankani, Hitesh and Shrivastava, Nilay and Kumar, Yaman and Li, Junyi Jessy and Mahata, Debanjan and Gosangi, Rakesh and Zhang, Haimin and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.149", pages = "1191--1196", abstract = "In this paper, we present a new corpus consisting of sentences from Hindi short stories annotated for five different discourse modes argumentative, narrative, descriptive, dialogic and informative. 
We present a detailed account of the entire data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.87 k-alpha). We analyze the data in terms of label distributions, part of speech tags, and sentence lengths. We characterize the performance of various classification algorithms on this dataset and perform ablation studies to understand the nature of the linguistic models suitable for capturing the nuances of the embedded discourse structures in the presented corpus.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@duttahritwik](https://github.com/duttahritwik) for adding this dataset.
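The dataset_info metadata further down in this card encodes 'Discourse Mode' as a class label with six names. A minimal sketch of decoding those integer labels back to mode names (the mapping is copied from that metadata; loading the dataset through the Hugging Face `datasets` library would expose the same mapping via its `features`):

```python
# Sketch: decode integer "Discourse Mode" labels to their names, using the
# class_label mapping given in this card's dataset_info metadata.
DISCOURSE_MODES = ["Argumentative", "Descriptive", "Dialogue",
                   "Informative", "Narrative", "Other"]

def decode_row(row):
    # e.g. row = {"Story_no": 15, "Sentence": "...", "Discourse Mode": 2}
    decoded = dict(row)
    decoded["Discourse Mode"] = DISCOURSE_MODES[row["Discourse Mode"]]
    return decoded

example = {"Story_no": 15, "Sentence": "“इसमें मालिक की क्या करनी है?”", "Discourse Mode": 2}
print(decode_row(example)["Discourse Mode"])  # Dialogue
```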
hindi_discourse
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:other", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:hi", "license:other", "discourse-analysis", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["hi"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "Discourse Analysis dataset", "tags": ["discourse-analysis"], "dataset_info": {"features": [{"name": "Story_no", "dtype": "int32"}, {"name": "Sentence", "dtype": "string"}, {"name": "Discourse Mode", "dtype": {"class_label": {"names": {"0": "Argumentative", "1": "Descriptive", "2": "Dialogue", "3": "Informative", "4": "Narrative", "5": "Other"}}}}], "splits": [{"name": "train", "num_bytes": 1998930, "num_examples": 9968}], "download_size": 4176677, "dataset_size": 1998930}}
2024-01-18T11:05:28+00:00
[]
[ "hi" ]
TAGS #task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hindi #license-other #discourse-analysis #region-us
# Dataset Card for Discourse Analysis dataset ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: URL - Paper: An Annotated Dataset of Discourse Modes in Hindi Stories - Point of Contact: URL ### Dataset Summary - The Hindi Discourse Analysis dataset is a corpus for analyzing discourse modes present in its sentences. - It contains sentences from stories written by 11 famous authors from the 20th Century. - 4-5 stories by each author have been selected which were available in the public domain resulting in a collection of 53 stories. - Most of these short stories were originally written in Hindi but some of them were written in other Indian languages and later translated to Hindi. The corpus contains a total of 10472 sentences belonging to the following categories: - Argumentative - Descriptive - Dialogic - Informative - Narrative ### Supported Tasks and Leaderboards - Discourse Analysis of Hindi. ### Languages Hindi ## Dataset Structure - The dataset is structured into JSON format. ### Data Instances {'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! ” “इसमें मालिक की क्या करनी है? 
”', 'Discourse Mode': 'Dialogue'} ### Data Fields Sentence number, story number, sentence and discourse mode ### Data Splits - Train: 9983 ## Dataset Creation ### Curation Rationale - Present a new publicly available corpus consisting of sentences from short stories written in a low-resource language of Hindi having high quality annotation for five different discourse modes - argumentative, narrative, descriptive, dialogic and informative. - Perform a detailed analysis of the proposed annotated corpus and characterize the performance of different classification algorithms. ### Source Data - Source of all the data points in this dataset is Hindi stories written by famous authors of Hindi literature. #### Initial Data Collection and Normalization - All the data was collected from various Hindi websites. - We chose against crowd-sourcing the annotation process because we wanted to directly work with the annotators for qualitative feedback and to also ensure high quality annotations. - We employed three native Hindi speakers with college level education for the annotation task. - We first selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode. - Please refer to this paper for detailed information: URL #### Who are the source language producers? Please refer to this paper for detailed information: URL ### Annotations #### Annotation process - The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature. - The annotators are domain experts having degrees in advanced clinical psychology and gender studies. - They were provided a guidelines document with instructions about each task and its definitions, labels and examples. - They studied the document, worked a few examples to get used to this annotation task. - They also provided feedback for improving the class definitions. 
- The annotation process is not mutually exclusive, implying that the presence of one label does not mean the absence of the other one. #### Who are the annotators? - The annotators were three native Hindi speakers with college level education. - Please refer to the accompanying paper for a detailed annotation process. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset - As future work we would also like to use the presented corpus to see how it could be further used in certain downstream tasks such as emotion analysis, machine translation, textual entailment, and speech synthesis for improving the storytelling experience in Hindi. ### Discussion of Biases ### Other Known Limitations - We could not get the best performance using the deep learning model trained on the data, due to insufficient data for DL models. ## Additional Information Please refer to this link: URL ### Dataset Curators - If you use the corpus in a product or application, then please credit the authors and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi] (URL) appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - If interested in commercial use of the corpus, send email to midas@URL. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your social media data. - if interested in a collaborative research project. 
### Licensing Information - If you use the corpus in a product or application, then please credit the authors and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi] (URL) appropriately. Please cite the following publication if you make use of the dataset: URL ### Contributions Thanks to @duttahritwik for adding this dataset.
[ "# Dataset Card for Discourse Analysis dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper: An Annotated Dataset of Discourse Modes in Hindi Stories\n- Point of Contact: URL", "### Dataset Summary\n\n- The Hindi Discourse Analysis dataset is a corpus for analyzing discourse modes present in its sentences.\n- It contains sentences from stories written by 11 famous authors from the 20th Century.\n- 4-5 stories by each author have been selected which were available in the public domain resulting in a collection of 53 stories.\n- Most of these short stories were originally written in Hindi but some of them were written in other Indian languages and later translated to Hindi.\n\nThe corpus contains a total of 10472 sentences belonging to the following categories:\n- Argumentative\n- Descriptive\n- Dialogic\n- Informative\n- Narrative", "### Supported Tasks and Leaderboards\n\n- Discourse Analysis of Hindi.", "### Languages\n\nHindi", "## Dataset Structure\n- The dataset is structured into JSON format.", "### Data Instances\n{'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! ” “इसमें मालिक की क्या करनी है? 
”', 'Discourse Mode': 'Dialogue'}", "### Data Fields\n\nSentence number, story number, sentence and discourse mode", "### Data Splits\n\n- Train: 9983", "## Dataset Creation", "### Curation Rationale\n- Present a new publicly available corpus\nconsisting of sentences from short stories written in a\nlow-resource language of Hindi having high quality annotation for five different discourse modes -\nargumentative, narrative, descriptive, dialogic and informative.\n\n- Perform a detailed analysis of the proposed annotated corpus and characterize the performance of\ndifferent classification algorithms.", "### Source Data\n- Source of all the data points in this dataset is Hindi stories written by famous authors of Hindi literature.", "#### Initial Data Collection and Normalization\n\n- All the data was collected from various Hindi websites.\n- We chose against crowd-sourcing the annotation pro- cess because we wanted to directly work with the an- notators for qualitative feedback and to also ensure high quality annotations. \n- We employed three native Hindi speakers with college level education for the an- notation task. 
\n- We first selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode.\n- Please refer to this paper for detailed information: URL", "#### Who are the source language producers?\n\nPlease refer to this paper for detailed information: URL", "### Annotations", "#### Annotation process\n\n- The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature.\n- The annotators are domain experts having degress in advanced clinical psychology and gender studies.\n- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.\n- They studied the document, worked a few examples to get used to this annotation task.\n- They also provided feedback for improving the class definitions.\n- The annotation process is not mutually exclusive, implying that presence of one label does not mean the\nabsence of the other one.", "#### Who are the annotators?\n\n- The annotators were three native Hindi speakers with college level education.\n- Please refer to the accompnaying paper for a detailed annotation process.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n- As a future work we would also like to use the presented corpus to see how it could be further used\nin certain downstream tasks such as emotion analysis, machine translation,\ntextual entailment, and speech sythesis for improving storytelling experience in Hindi language.", "### Discussion of Biases", "### Other Known Limitations\n\n- We could not get the best performance using the deep learning model trained on the data, due to\n insufficient data for DL models.", "## Additional Information\n\nPlease refer to this link: URL", "### Dataset Curators\n\n- If you use the corpus in a product or application, then please credit the authors\nand [Multimodal Digital Media Analysis 
Lab - Indraprastha Institute of Information Technology, New Delhi]\n(URL) appropriately.\nAlso, if you send us an email, we will be thrilled to know about how you have used the corpus.\n- If interested in commercial use of the corpus, send email to midas@URL.\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India\ndisclaims any responsibility for the use of the corpus and does not provide technical support.\nHowever, the contact listed above will be happy to respond to queries and clarifications\n- Please feel free to send us an email:\n - with feedback regarding the corpus.\n - with information on how you have used the corpus.\n - if interested in having us analyze your social media data.\n - if interested in a collaborative research project.", "### Licensing Information\n\n- If you use the corpus in a product or application, then please credit the authors\nand [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi]\n(URL) appropriately.\n\n\n\nPlease cite the following publication if you make use of the dataset: URL", "### Contributions\n\nThanks to @duttahritwik for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hindi #license-other #discourse-analysis #region-us \n", "# Dataset Card for Discourse Analysis dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper: An Annotated Dataset of Discourse Modes in Hindi Stories\n- Point of Contact: URL", "### Dataset Summary\n\n- The Hindi Discourse Analysis dataset is a corpus for analyzing discourse modes present in its sentences.\n- It contains sentences from stories written by 11 famous authors from the 20th Century.\n- 4-5 stories by each author have been selected which were available in the public domain resulting in a collection of 53 stories.\n- Most of these short stories were originally written in Hindi but some of them were written in other Indian languages and later translated to Hindi.\n\nThe corpus contains a total of 10472 sentences belonging to the following categories:\n- Argumentative\n- Descriptive\n- Dialogic\n- Informative\n- Narrative", "### Supported Tasks and Leaderboards\n\n- Discourse Analysis of Hindi.", "### Languages\n\nHindi", "## Dataset Structure\n- The dataset is structured into JSON format.", "### Data Instances\n{'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! 
” “इसमें मालिक की क्या करनी है? ”', 'Discourse Mode': 'Dialogue'}", "### Data Fields\n\nSentence number, story number, sentence and discourse mode", "### Data Splits\n\n- Train: 9983", "## Dataset Creation", "### Curation Rationale\n- Present a new publicly available corpus\nconsisting of sentences from short stories written in a\nlow-resource language of Hindi having high quality annotation for five different discourse modes -\nargumentative, narrative, descriptive, dialogic and informative.\n\n- Perform a detailed analysis of the proposed annotated corpus and characterize the performance of\ndifferent classification algorithms.", "### Source Data\n- Source of all the data points in this dataset is Hindi stories written by famous authors of Hindi literature.", "#### Initial Data Collection and Normalization\n\n- All the data was collected from various Hindi websites.\n- We chose against crowd-sourcing the annotation process because we wanted to directly work with the annotators for qualitative feedback and to also ensure high quality annotations. \n- We employed three native Hindi speakers with college level education for the annotation task. 
\n- We first selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode.\n- Please refer to this paper for detailed information: URL", "#### Who are the source language producers?\n\nPlease refer to this paper for detailed information: URL", "### Annotations", "#### Annotation process\n\n- The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature.\n- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.\n- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.\n- They studied the document, worked a few examples to get used to this annotation task.\n- They also provided feedback for improving the class definitions.\n- The annotation process is not mutually exclusive, implying that presence of one label does not mean the\nabsence of the other one.", "#### Who are the annotators?\n\n- The annotators were three native Hindi speakers with college level education.\n- Please refer to the accompanying paper for a detailed annotation process.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n- As future work we would also like to use the presented corpus to see how it could be further used\nin certain downstream tasks such as emotion analysis, machine translation,\ntextual entailment, and speech synthesis for improving storytelling experience in Hindi language.", "### Discussion of Biases", "### Other Known Limitations\n\n- We could not get the best performance using the deep learning model trained on the data, due to\n insufficient data for DL models.", "## Additional Information\n\nPlease refer to this link: URL", "### Dataset Curators\n\n- If you use the corpus in a product or application, then please credit the authors\nand [Multimodal Digital Media Analysis 
Lab - Indraprastha Institute of Information Technology, New Delhi]\n(URL) appropriately.\nAlso, if you send us an email, we will be thrilled to know about how you have used the corpus.\n- If interested in commercial use of the corpus, send email to midas@URL.\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India\ndisclaims any responsibility for the use of the corpus and does not provide technical support.\nHowever, the contact listed above will be happy to respond to queries and clarifications\n- Please feel free to send us an email:\n - with feedback regarding the corpus.\n - with information on how you have used the corpus.\n - if interested in having us analyze your social media data.\n - if interested in a collaborative research project.", "### Licensing Information\n\n- If you use the corpus in a product or application, then please credit the authors\nand [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi]\n(URL) appropriately.\n\n\n\nPlease cite the following publication if you make use of the dataset: URL", "### Contributions\n\nThanks to @duttahritwik for adding this dataset." ]
565f19f71fa299d2dc3001072d3bab5432b688f4
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Hippocorpus](https://msropendata.com/datasets/0a83fb6f-a759-4a17-aaa2-fbac84577318) - **Repository:** [Hippocorpus](https://msropendata.com/datasets/0a83fb6f-a759-4a17-aaa2-fbac84577318) - **Paper:** [Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models](http://erichorvitz.com/cognitive_studies_narrative.pdf) - **Point of Contact:** [Eric Horvitz](mailto:[email protected]) ### Dataset Summary To examine the cognitive processes of remembering and imagining and their traces in language, we introduce Hippocorpus, a dataset of 6,854 English diary-like short stories about recalled and imagined events. Using a crowdsourcing framework, we first collect recalled stories and summaries from workers, then provide these summaries to other workers who write imagined stories. 
Finally, months later, we collect a retold version of the recalled stories from a subset of recalled authors. Our dataset comes paired with author demographics (age, gender, race), their openness to experience, as well as some variables regarding the author's relationship to the event (e.g., how personal the event is, how often they tell its story, etc.).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset can be found in English

## Dataset Structure

[More Information Needed]

### Data Instances

[More Information Needed]

### Data Fields

This CSV file contains all the stories in Hippocorpus v2 (6854 stories)

These are the columns in the file:
- `AssignmentId`: Unique ID of this story
- `WorkTimeInSeconds`: Time in seconds that it took the worker to do the entire HIT (reading instructions, storywriting, questions)
- `WorkerId`: Unique ID of the worker (random string, not MTurk worker ID)
- `annotatorAge`: Lower limit of the age bucket of the worker. Buckets are: 18-24, 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55+
- `annotatorGender`: Gender of the worker
- `annotatorRace`: Race/ethnicity of the worker
- `distracted`: How distracted were you while writing your story? (5-point Likert)
- `draining`: How taxing/draining was writing for you emotionally? (5-point Likert)
- `frequency`: How often do you think about or talk about this event? (5-point Likert)
- `importance`: How impactful, important, or personal is this story/this event to you? (5-point Likert)
- `logTimeSinceEvent`: Log of time (days) since the recalled event happened
- `mainEvent`: Short phrase describing the main event described
- `memType`: Type of story (recalled, imagined, retold)
- `mostSurprising`: Short phrase describing what the most surprising aspect of the story was
- `openness`: Continuous variable representing the openness to experience of the worker
- `recAgnPairId`: ID of the recalled story that corresponds to this retold story (null for imagined stories). 
Group on this variable to get the recalled-retold pairs. - `recImgPairId`: ID of the recalled story that corresponds to this imagined story (null for retold stories). Group on this variable to get the recalled-imagined pairs. - `similarity`: How similar to your life does this event/story feel to you? (5-point Likert) - `similarityReason`: Free text annotation of similarity - `story`: Story about the imagined or recalled event (15-25 sentences) - `stressful`: How stressful was this writing task? (5-point Likert) - `summary`: Summary of the events in the story (1-3 sentences) - `timeSinceEvent`: Time (num. days) since the recalled event happened ### Data Splits [More Information Needed] ## Dataset Creation [More Information Needed] ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information [More Information Needed] ### Dataset Curators The dataset was initially created by Maarten Sap, Eric Horvitz, Yejin Choi, Noah A. Smith, James W. Pennebaker, during work done at Microsoft Research. ### Licensing Information Hippocorpus is distributed under the [Open Use of Data Agreement v1.0](https://msropendata-web-api.azurewebsites.net/licenses/f1f352a6-243f-4905-8e00-389edbca9e83/view). 
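As a usage illustration for the `recAgnPairId`/`recImgPairId` columns described under Data Fields, recalled-imagined pairs can be recovered by grouping on `recImgPairId`. This is a minimal sketch with invented toy rows standing in for the distributed CSV (the IDs and values below are placeholders, not real corpus entries):

```python
from collections import defaultdict

# Toy rows standing in for the real Hippocorpus CSV; only the fields
# needed for pairing are shown, and the IDs below are invented.
rows = [
    {"AssignmentId": "A1", "memType": "recalled", "recImgPairId": "P1"},
    {"AssignmentId": "A2", "memType": "imagined", "recImgPairId": "P1"},
    {"AssignmentId": "A3", "memType": "retold",   "recImgPairId": None},
]

# Stories sharing a recImgPairId form a recalled-imagined pair;
# retold stories carry a null recImgPairId and are skipped here.
pairs = defaultdict(list)
for row in rows:
    if row["recImgPairId"] is not None:
        pairs[row["recImgPairId"]].append(row["AssignmentId"])

print(dict(pairs))  # {'P1': ['A1', 'A2']}
```

Grouping on `recAgnPairId` works the same way for recalled-retold pairs.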
### Citation Information ``` @inproceedings{sap-etal-2020-recollection, title = "Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models", author = "Sap, Maarten and Horvitz, Eric and Choi, Yejin and Smith, Noah A. and Pennebaker, James", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.178", doi = "10.18653/v1/2020.acl-main.178", pages = "1970--1978", abstract = "We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release Hippocorpus, a dataset of 7,000 stories about imagined and recalled events. We introduce a measure of narrative flow and use this to examine the narratives for imagined and recalled events. Additionally, we measure the differential recruitment of knowledge attributed to semantic memory versus episodic memory (Tulving, 1972) for imagined and recalled storytelling by comparing the frequency of descriptions of general commonsense events with more specific realis events. Our analyses show that imagined stories have a substantially more linear narrative flow, compared to recalled stories in which adjacent sentences are more disconnected. In addition, while recalled stories rely more on autobiographical events based on episodic memory, imagined stories express more commonsense knowledge based on semantic memory. Finally, our measures reveal the effect of narrativization of memories in stories (e.g., stories about frequently recalled memories flow more linearly; Bartlett, 1932). 
Our findings highlight the potential of using NLP tools to study the traces of human cognition in language.", } ``` ### Contributions Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
hippocorpus
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:other", "narrative-flow", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring"], "pretty_name": "hippocorpus", "tags": ["narrative-flow"], "dataset_info": {"features": [{"name": "AssignmentId", "dtype": "string"}, {"name": "WorkTimeInSeconds", "dtype": "string"}, {"name": "WorkerId", "dtype": "string"}, {"name": "annotatorAge", "dtype": "float32"}, {"name": "annotatorGender", "dtype": "string"}, {"name": "annotatorRace", "dtype": "string"}, {"name": "distracted", "dtype": "float32"}, {"name": "draining", "dtype": "float32"}, {"name": "frequency", "dtype": "float32"}, {"name": "importance", "dtype": "float32"}, {"name": "logTimeSinceEvent", "dtype": "string"}, {"name": "mainEvent", "dtype": "string"}, {"name": "memType", "dtype": "string"}, {"name": "mostSurprising", "dtype": "string"}, {"name": "openness", "dtype": "string"}, {"name": "recAgnPairId", "dtype": "string"}, {"name": "recImgPairId", "dtype": "string"}, {"name": "similarity", "dtype": "string"}, {"name": "similarityReason", "dtype": "string"}, {"name": "story", "dtype": "string"}, {"name": "stressful", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "timeSinceEvent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7229795, "num_examples": 6854}], "download_size": 0, "dataset_size": 7229795}}
2024-01-18T11:05:30+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-other #narrative-flow #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Hippocorpus - Repository: Hippocorpus - Paper: Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models - Point of Contact: Eric Horvitz ### Dataset Summary To examine the cognitive processes of remembering and imagining and their traces in language, we introduce Hippocorpus, a dataset of 6,854 English diary-like short stories about recalled and imagined events. Using a crowdsourcing framework, we first collect recalled stories and summaries from workers, then provide these summaries to other workers who write imagined stories. Finally, months later, we collect a retold version of the recalled stories from a subset of recalled authors. Our dataset comes paired with author demographics (age, gender, race), their openness to experience, as well as some variables regarding the author's relationship to the event (e.g., how personal the event is, how often they tell its story, etc.). 
### Supported Tasks and Leaderboards




### Languages

The dataset can be found in English

## Dataset Structure




### Data Instances




### Data Fields

This CSV file contains all the stories in Hippocorpus v2 (6854 stories)

These are the columns in the file:
- 'AssignmentId': Unique ID of this story
- 'WorkTimeInSeconds': Time in seconds that it took the worker to do the entire HIT (reading instructions, storywriting, questions)
- 'WorkerId': Unique ID of the worker (random string, not MTurk worker ID)
- 'annotatorAge': Lower limit of the age bucket of the worker. Buckets are: 18-24, 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55+
- 'annotatorGender': Gender of the worker
- 'annotatorRace': Race/ethnicity of the worker
- 'distracted': How distracted were you while writing your story? (5-point Likert)
- 'draining': How taxing/draining was writing for you emotionally? (5-point Likert)
- 'frequency': How often do you think about or talk about this event? (5-point Likert)
- 'importance': How impactful, important, or personal is this story/this event to you? (5-point Likert)
- 'logTimeSinceEvent': Log of time (days) since the recalled event happened
- 'mainEvent': Short phrase describing the main event described
- 'memType': Type of story (recalled, imagined, retold)
- 'mostSurprising': Short phrase describing what the most surprising aspect of the story was
- 'openness': Continuous variable representing the openness to experience of the worker
- 'recAgnPairId': ID of the recalled story that corresponds to this retold story (null for imagined stories). Group on this variable to get the recalled-retold pairs.
- 'recImgPairId': ID of the recalled story that corresponds to this imagined story (null for retold stories). Group on this variable to get the recalled-imagined pairs.
- 'similarity': How similar to your life does this event/story feel to you? 
(5-point Likert) - 'similarityReason': Free text annotation of similarity - 'story': Story about the imagined or recalled event (15-25 sentences) - 'stressful': How stressful was this writing task? (5-point Likert) - 'summary': Summary of the events in the story (1-3 sentences) - 'timeSinceEvent': Time (num. days) since the recalled event happened ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The dataset was initially created by Maarten Sap, Eric Horvitz, Yejin Choi, Noah A. Smith, James W. Pennebaker, during work done at Microsoft Research. ### Licensing Information Hippocorpus is distributed under the Open Use of Data Agreement v1.0. ### Contributions Thanks to @manandey for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Hippocorpus\n- Repository: Hippocorpus\n- Paper: Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models\n- Point of Contact: Eric Horvitz", "### Dataset Summary\n \nTo examine the cognitive processes of remembering and imagining and their traces in language, we introduce Hippocorpus, a dataset of 6,854 English diary-like short stories about recalled and imagined events. Using a crowdsourcing framework, we first collect recalled stories and summaries from workers, then provide these summaries to other workers who write imagined stories. Finally, months later, we collect a retold version of the recalled stories from a subset of recalled authors. 
Our dataset comes paired with author demographics (age, gender, race), their openness to experience, as well as some variables regarding the author's relationship to the event (e.g., how personal the event is, how often they tell its story, etc.).", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset can be found in English", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nThis CSV file contains all the stories in Hippocorpus v2 (6854 stories)\n\nThese are the columns in the file:\n- 'AssignmentId': Unique ID of this story\n- 'WorkTimeInSeconds': Time in seconds that it took the worker to do the entire HIT (reading instructions, storywriting, questions)\n- 'WorkerId': Unique ID of the worker (random string, not MTurk worker ID)\n- 'annotatorAge': Lower limit of the age bucket of the worker. Buckets are: 18-24, 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55+\n- 'annotatorGender': Gender of the worker\n- 'annotatorRace': Race/ethnicity of the worker\n- 'distracted': How distracted were you while writing your story? (5-point Likert)\n- 'draining': How taxing/draining was writing for you emotionally? (5-point Likert)\n- 'frequency': How often do you think about or talk about this event? (5-point Likert)\n- 'importance': How impactful, important, or personal is this story/this event to you? (5-point Likert)\n- 'logTimeSinceEvent': Log of time (days) since the recalled event happened\n- 'mainEvent': Short phrase describing the main event described\n- 'memType': Type of story (recalled, imagined, retold)\n- 'mostSurprising': Short phrase describing what the most surprising aspect of the story was\n- 'openness': Continuous variable representing the openness to experience of the worker\n- 'recAgnPairId': ID of the recalled story that corresponds to this retold story (null for imagined stories). 
Group on this variable to get the recalled-retold pairs.\n- 'recImgPairId': ID of the recalled story that corresponds to this imagined story (null for retold stories). Group on this variable to get the recalled-imagined pairs.\n- 'similarity': How similar to your life does this event/story feel to you? (5-point Likert)\n- 'similarityReason': Free text annotation of similarity\n- 'story': Story about the imagined or recalled event (15-25 sentences)\n- 'stressful': How stressful was this writing task? (5-point Likert)\n- 'summary': Summary of the events in the story (1-3 sentences)\n- 'timeSinceEvent': Time (num. days) since the recalled event happened", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was initially created by Maarten Sap, Eric Horvitz, Yejin Choi, Noah A. Smith, James W. Pennebaker, during work done at Microsoft Research.", "### Licensing Information\n\nHippocorpus is distributed under the Open Use of Data Agreement v1.0.", "### Contributions\n\nThanks to @manandey for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-other #narrative-flow #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Hippocorpus\n- Repository: Hippocorpus\n- Paper: Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models\n- Point of Contact: Eric Horvitz", "### Dataset Summary\n \nTo examine the cognitive processes of remembering and imagining and their traces in language, we introduce Hippocorpus, a dataset of 6,854 English diary-like short stories about recalled and imagined events. Using a crowdsourcing framework, we first collect recalled stories and summaries from workers, then provide these summaries to other workers who write imagined stories. Finally, months later, we collect a retold version of the recalled stories from a subset of recalled authors. 
Our dataset comes paired with author demographics (age, gender, race), their openness to experience, as well as some variables regarding the author's relationship to the event (e.g., how personal the event is, how often they tell its story, etc.).", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset can be found in English", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nThis CSV file contains all the stories in Hippocorpus v2 (6854 stories)\n\nThese are the columns in the file:\n- 'AssignmentId': Unique ID of this story\n- 'WorkTimeInSeconds': Time in seconds that it took the worker to do the entire HIT (reading instructions, storywriting, questions)\n- 'WorkerId': Unique ID of the worker (random string, not MTurk worker ID)\n- 'annotatorAge': Lower limit of the age bucket of the worker. Buckets are: 18-24, 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55+\n- 'annotatorGender': Gender of the worker\n- 'annotatorRace': Race/ethnicity of the worker\n- 'distracted': How distracted were you while writing your story? (5-point Likert)\n- 'draining': How taxing/draining was writing for you emotionally? (5-point Likert)\n- 'frequency': How often do you think about or talk about this event? (5-point Likert)\n- 'importance': How impactful, important, or personal is this story/this event to you? (5-point Likert)\n- 'logTimeSinceEvent': Log of time (days) since the recalled event happened\n- 'mainEvent': Short phrase describing the main event described\n- 'memType': Type of story (recalled, imagined, retold)\n- 'mostSurprising': Short phrase describing what the most surprising aspect of the story was\n- 'openness': Continuous variable representing the openness to experience of the worker\n- 'recAgnPairId': ID of the recalled story that corresponds to this retold story (null for imagined stories). 
Group on this variable to get the recalled-retold pairs.\n- 'recImgPairId': ID of the recalled story that corresponds to this imagined story (null for retold stories). Group on this variable to get the recalled-imagined pairs.\n- 'similarity': How similar to your life does this event/story feel to you? (5-point Likert)\n- 'similarityReason': Free text annotation of similarity\n- 'story': Story about the imagined or recalled event (15-25 sentences)\n- 'stressful': How stressful was this writing task? (5-point Likert)\n- 'summary': Summary of the events in the story (1-3 sentences)\n- 'timeSinceEvent': Time (num. days) since the recalled event happened", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was initially created by Maarten Sap, Eric Horvitz, Yejin Choi, Noah A. Smith, James W. Pennebaker, during work done at Microsoft Research.", "### Licensing Information\n\nHippocorpus is distributed under the Open Use of Data Agreement v1.0.", "### Contributions\n\nThanks to @manandey for adding this dataset." ]
d26d2925a5be786beb9e2ba53c823e7a3132175f
# Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://compling.hss.ntu.edu.sg/hkcancor/
- **Repository:** https://github.com/fcbond/hkcancor
- **Paper:** [Luke and Wong, 2015](https://github.com/fcbond/hkcancor/blob/master/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** Luke Kang Kwong

### Dataset Summary

The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded between March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts) and radio programmes (42 texts), which involve 2 to 4 speakers, with 1 text of monologue.

In total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word-level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation. 
* Romanisation * Follows conventions set by the Linguistic Society of Hong Kong (LSHK). * POS * The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena. * To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the [Universal Dependencies 2.0](https://universaldependencies.org/u/pos/index.html) format. This mapping references the [PyCantonese](https://github.com/jacksonllee/pycantonese) library. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Yue Chinese / Cantonese (Hong Kong). ## Dataset Structure This corpus has 10801 utterances and approximately 230000 Chinese words. There is no predefined split. ### Data Instances Each instance contains a conversation id, speaker id within that conversation, turn number, part-of-speech tag for each Chinese word in the PRF format and UD2.0 format, and the utterance written in Chinese characters as well as its LSHK format romanisation. For example: ```python { 'conversation_id': 'TNR016-DR070398-HAI6V', 'pos_tags_prf': ['v', 'w'], 'pos_tags_ud': ['VERB', 'PUNCT'], 'speaker': 'B', 'transcriptions': ['hai6', 'VQ1'], 'turn_number': 112, 'tokens': ['係', '。'] } ``` ### Data Fields - conversation_id: unique dialogue-level id - pos_tags_prf: POS tag using the PRF format at token-level - pos_tags_ud: POS tag using the UD2.0 format at token-level - speaker: unique speaker id within dialogue - transcriptions: token-level romanisation in the LSHK format - turn_number: turn number in dialogue - tokens: Chinese word or punctuation at token-level ### Data Splits There are no specified splits in this dataset. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/deed.ast). ### Citation Information This corpus was developed by [Luke and Wong, 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf). ``` @article{luke2015hong, author={Luke, Kang-Kwong and Wong, May LY}, title={The Hong Kong Cantonese corpus: design and uses}, journal={Journal of Chinese Linguistics}, year={2015}, pages={309-330}, month={12} } ``` The POS tagset to Universal Dependency tagset mapping is provided by Jackson Lee, as a part of the [PyCantonese](https://github.com/jacksonllee/pycantonese) library. ``` @misc{lee2020, author = {Lee, Jackson}, title = {PyCantonese: Cantonese Linguistics and NLP in Python}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/jacksonllee/pycantonese}}, commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98} } ``` ### Contributions Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
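As a quick illustration of the parallel token-level fields described in the Data Fields section above, the sketch below (plain Python, no external dependencies; the `instance` dict simply restates the card's example) zips each Chinese word with its romanisation and both POS tags:

```python
# The instance below mirrors the example quoted in the card; field names
# follow the dataset schema.
instance = {
    "conversation_id": "TNR016-DR070398-HAI6V",
    "pos_tags_prf": ["v", "w"],
    "pos_tags_ud": ["VERB", "PUNCT"],
    "speaker": "B",
    "transcriptions": ["hai6", "VQ1"],
    "turn_number": 112,
    "tokens": ["係", "。"],
}

# The token-level fields align index-by-index, so zipping them pairs each
# Chinese word with its romanisation and its PRF and UD POS tags.
aligned = list(zip(
    instance["tokens"],
    instance["transcriptions"],
    instance["pos_tags_prf"],
    instance["pos_tags_ud"],
))

for token, jyutping, prf, ud in aligned:
    print(f"{token}\t{jyutping}\t{prf}\t{ud}")
```

The same index-wise alignment holds for every utterance in the corpus, which is what makes the four sequences usable as one token-level table.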
hkcancor
[ "task_categories:translation", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:yue", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["yue"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation", "text-generation", "fill-mask"], "task_ids": ["dialogue-modeling"], "paperswithcode_id": "hong-kong-cantonese-corpus", "pretty_name": "The Hong Kong Cantonese Corpus (HKCanCor)", "dataset_info": {"features": [{"name": "conversation_id", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "turn_number", "dtype": "int16"}, {"name": "tokens", "sequence": "string"}, {"name": "transcriptions", "sequence": "string"}, {"name": "pos_tags_prf", "sequence": {"class_label": {"names": {"0": "!", "1": "\"", "2": "#", "3": "'", "4": ",", "5": "-", "6": ".", "7": "...", "8": "?", "9": "A", "10": "AD", "11": "AG", "12": "AIRWAYS0", "13": "AN", "14": "AND", "15": "B", "16": "BG", "17": "BEAN0", "18": "C", "19": "CENTRE0", "20": "CG", "21": "D", "22": "D1", "23": "DG", "24": "E", "25": "ECHO0", "26": "F", "27": "G", "28": "G1", "29": "G2", "30": "H", "31": "HILL0", "32": "I", "33": "IG", "34": "J", "35": "JB", "36": "JM", "37": "JN", "38": "JNS", "39": "JNT", "40": "JNZ", "41": "K", "42": "KONG", "43": "L", "44": "L1", "45": "LG", "46": "M", "47": "MG", "48": "MONTY0", "49": "MOUNTAIN0", "50": "N", "51": "N1", "52": "NG", "53": "NR", "54": "NS", "55": "NSG", "56": "NT", "57": "NX", "58": "NZ", "59": "O", "60": "P", "61": "PEPPER0", "62": "Q", "63": "QG", "64": "R", "65": "RG", "66": "S", "67": "SOUND0", "68": "T", "69": "TELECOM0", "70": "TG", "71": "TOUCH0", "72": "U", "73": "UG", "74": "U0", "75": "V", "76": "V1", "77": "VD", "78": "VG", "79": "VK", "80": "VN", "81": "VU", "82": "VUG", "83": "W", "84": "X", "85": "XA", "86": "XB", "87": "XC", "88": "XD", "89": "XE", "90": "XJ", "91": "XJB", "92": "XJN", "93": "XJNT", "94": "XJNZ", "95": "XJV", "96": "XJA", "97": "XL1", "98": "XM", "99": "XN", "100": 
"XNG", "101": "XNR", "102": "XNS", "103": "XNT", "104": "XNX", "105": "XNZ", "106": "XO", "107": "XP", "108": "XQ", "109": "XR", "110": "XS", "111": "XT", "112": "XV", "113": "XVG", "114": "XVN", "115": "XX", "116": "Y", "117": "YG", "118": "Y1", "119": "Z"}}}}, {"name": "pos_tags_ud", "sequence": {"class_label": {"names": {"0": "DET", "1": "PRON", "2": "VERB", "3": "NOUN", "4": "ADJ", "5": "PUNCT", "6": "INTJ", "7": "ADV", "8": "V", "9": "PART", "10": "X", "11": "NUM", "12": "PROPN", "13": "AUX", "14": "CCONJ", "15": "ADP"}}}}], "splits": [{"name": "train", "num_bytes": 5746381, "num_examples": 10801}], "download_size": 961514, "dataset_size": 5746381}}
2024-01-18T11:05:35+00:00
[]
[ "yue" ]
TAGS #task_categories-translation #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Yue Chinese #license-cc-by-4.0 #region-us
# Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor) ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: Luke and Wong, 2015 - Leaderboard: N/A - Point of Contact: Luke Kang Kwong ### Dataset Summary The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded between March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts) and radio programmes (42 texts), which involve 2 to 4 speakers, with 1 text of monologue. In total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word-level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation. * Romanisation * Follows conventions set by the Linguistic Society of Hong Kong (LSHK). * POS * The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena. * To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the Universal Dependencies 2.0 format. This mapping references the PyCantonese library. ### Supported Tasks and Leaderboards ### Languages Yue Chinese / Cantonese (Hong Kong). ## Dataset Structure This corpus has 10801 utterances and approximately 230000 Chinese words. There is no predefined split. 
### Data Instances Each instance contains a conversation id, speaker id within that conversation, turn number, part-of-speech tag for each Chinese word in the PRF format and UD2.0 format, and the utterance written in Chinese characters as well as its LSHK format romanisation. For example: ### Data Fields - conversation_id: unique dialogue-level id - pos_tags_prf: POS tag using the PRF format at token-level - pos_tags_ud: POS tag using the UD2.0 format at token-level - speaker: unique speaker id within dialogue - transcriptions: token-level romanisation in the LSHK format - turn_number: turn number in dialogue - tokens: Chinese word or punctuation at token-level ### Data Splits There are no specified splits in this dataset. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information This work is licensed under a Creative Commons Attribution 4.0 International License. This corpus was developed by Luke and Wong, 2015. The POS tagset to Universal Dependency tagset mapping is provided by Jackson Lee, as a part of the PyCantonese library. ### Contributions Thanks to @j-chim for adding this dataset.
[ "# Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Luke and Wong, 2015\n- Leaderboard: N/A\n- Point of Contact: Luke Kang Kwong", "### Dataset Summary\nThe Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded \nbetween March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts)\nand radio programmes (42 texts), which involve 2 to 4 speakers, with 1 text of monologue.\n\nIn total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word-level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation. \n\n* Romanisation\n * Follows conventions set by the Linguistic Society of Hong Kong (LSHK).\n* POS\n * The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena. \n * To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the Universal Dependencies 2.0 format. 
This mapping references the PyCantonese library.", "### Supported Tasks and Leaderboards", "### Languages\nYue Chinese / Cantonese (Hong Kong).", "## Dataset Structure\nThis corpus has 10801 utterances and approximately 230000 Chinese words. \nThere is no predefined split.", "### Data Instances\nEach instance contains a conversation id, speaker id within that conversation,\nturn number, part-of-speech tag for each Chinese word in the PRF format and UD2.0 format, \nand the utterance written in Chinese characters as well as its LSHK format romanisation.\n\n\nFor example:", "### Data Fields\n- conversation_id: unique dialogue-level id\n- pos_tags_prf: POS tag using the PRF format at token-level\n- pos_tags_ud: POS tag using the UD2.0 format at token-level\n- speaker: unique speaker id within dialogue\n- transcriptions: token-level romanisation in the LSHK format\n- turn_number: turn number in dialogue\n- tokens: Chinese word or punctuation at token-level", "### Data Splits\nThere are no specified splits in this dataset.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nThis work is licensed under a Creative Commons Attribution 4.0 International License.\n\n\n\nThis corpus was developed by Luke and Wong, 2015.\n\nThe POS tagset to Universal Dependency tagset mapping is provided by Jackson Lee, as a part of the PyCantonese library.", "### Contributions\n\nThanks to @j-chim for adding this dataset." ]
[ "TAGS\n#task_categories-translation #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Yue Chinese #license-cc-by-4.0 #region-us \n", "# Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Luke and Wong, 2015\n- Leaderboard: N/A\n- Point of Contact: Luke Kang Kwong", "### Dataset Summary\nThe Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded \nbetween March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts)\nand radio programmes (42 texts), which involve 2 to 4 speakers, with 1 text of monologue.\n\nIn total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word-level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation. \n\n* Romanisation\n * Follows conventions set by the Linguistic Society of Hong Kong (LSHK).\n* POS\n * The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena. 
\n * To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the Universal Dependencies 2.0 format. This mapping references the PyCantonese library.", "### Supported Tasks and Leaderboards", "### Languages\nYue Chinese / Cantonese (Hong Kong).", "## Dataset Structure\nThis corpus has 10801 utterances and approximately 230000 Chinese words. \nThere is no predefined split.", "### Data Instances\nEach instance contains a conversation id, speaker id within that conversation,\nturn number, part-of-speech tag for each Chinese word in the PRF format and UD2.0 format, \nand the utterance written in Chinese characters as well as its LSHK format romanisation.\n\n\nFor example:", "### Data Fields\n- conversation_id: unique dialogue-level id\n- pos_tags_prf: POS tag using the PRF format at token-level\n- pos_tags_ud: POS tag using the UD2.0 format at token-level\n- speaker: unique speaker id within dialogue\n- transcriptions: token-level romanisation in the LSHK format\n- turn_number: turn number in dialogue\n- tokens: Chinese word or punctuation at token-level", "### Data Splits\nThere are no specified splits in this dataset.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nThis work is licensed under a Creative Commons Attribution 4.0 International License.\n\n\n\nThis corpus was developed by Luke and Wong, 2015.\n\nThe POS tagset to Universal Dependency tagset mapping is provided by Jackson Lee, as a part of the PyCantonese library.", "### 
Contributions\n\nThanks to @j-chim for adding this dataset." ]
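The PRF-to-Universal-Dependencies tag mapping mentioned in the HKCanCor card ships with the PyCantonese library; purely for illustration, a lookup of that general shape can be sketched with a small hand-written table. The tag pairs below are assumptions for demonstration only, not the authoritative PyCantonese mapping:

```python
# Illustrative, deliberately incomplete PRF -> UD 2.0 lookup. The real,
# complete mapping is distributed with PyCantonese; these pairs are assumed
# for demonstration.
PRF_TO_UD = {
    "v": "VERB",
    "n": "NOUN",
    "w": "PUNCT",
    "d": "ADV",
    "r": "PRON",
}

def prf_to_ud(tag: str, default: str = "X") -> str:
    """Map a PRF POS tag to a UD 2.0 tag, falling back to `default`.

    Tags are lowercased first, since PRF tags appear in both cases
    (e.g. 'v' in the example instance, 'V' in the feature schema).
    """
    return PRF_TO_UD.get(tag.lower(), default)
```

With this sketch, `prf_to_ud("v")` returns `"VERB"` and any tag missing from the table falls back to the UD catch-all `"X"`.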
c36e45ea940ad2c471568a5b362b4ddded75ea4b
# Dataset Card for Headline Grouping (HLGD) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/tingofurro/headline_grouping](https://github.com/tingofurro/headline_grouping) - **Repository:** [https://github.com/tingofurro/headline_grouping](https://github.com/tingofurro/headline_grouping) - **Paper:** [https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf](https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf) - **Leaderboard:** N/A - **Point of Contact:** phillab (at) berkeley (dot) edu ### Dataset Summary HLGD is a binary classification dataset consisting of 20,056 labeled news headline pairs indicating whether the two headlines describe the same underlying world event or not. The dataset comes with an existing split between `train`, `validation` and `test` (60-20-20). 
### Supported Tasks and Leaderboards The paper (NAACL2021) introducing HLGD proposes three challenges making use of various amounts of data: - Challenge 1: Headline-only. Models must make predictions using only the text of both headlines. - Challenge 2: Headline + Time. Models must make predictions using the headline and publication date of the two headlines. - Challenge 3: Headline + Time + Other. Models can make predictions using the headline, publication date as well as any other relevant meta-data that can be obtained through the URL attached to the headline (full article content, authors, news source, etc.) ### Languages The dataset is in English. ## Dataset Structure ### Data Instances A typical instance consists of a timeline_id, two headlines (A/B), each associated with a URL, and a date. Finally, a label indicates whether the two headlines describe the same underlying event (1) or not (0). Below is an example from the training set: ``` {'timeline_id': 4, 'headline_a': 'France fines Google nearly $57 million for first major violation of new European privacy regime', 'headline_b': "France hits Google with record EUR50mn fine over 'forced consent' data collection", 'date_a': '2019-01-21', 'date_b': '2019-01-21', 'url_a': 'https://www.chicagotribune.com/business/ct-biz-france-fines-google-privacy-20190121-story.html', 'url_b': 'https://www.rt.com/news/449369-france-hits-google-with-record-fine/', 'label': 1} ``` ### Data Fields - `timeline_id`: Represents the id of the timeline that the headline pair belongs to (values 0 to 9). The dev set is composed of timelines 0 and 5, and the test set timelines 7 and 8 - `headline_a`, `headline_b`: Raw text for the headline pair being compared - `date_a`, `date_b`: Publication date of the respective headlines, in the `YYYY-MM-DD` format - `url_a`, `url_b`: Original URL of the respective headlines. Can be used to retrieve additional meta-data on the headline. 
- `label`: 1 if the two headlines are part of the same headline group and describe the same underlying event, 0 otherwise. ### Data Splits | | Train | Dev | Test | | --------------------------- | ------- | ------ | ----- | | Number of examples | 15,492 | 2,069 | 2,495 | ## Dataset Creation ### Curation Rationale The task of grouping headlines from diverse news sources discussing the same underlying event is important to enable interfaces that can present the diversity of coverage of unfolding news events. Many news aggregators (such as Google or Yahoo news) present several sources for a given event, with an objective to highlight coverage diversity. Automatic grouping of news headlines and articles remains challenging as headlines are short, heavily-stylized texts. The HeadLine Grouping Dataset introduces the first benchmark to evaluate NLU models' ability to group headlines according to the underlying event they describe. ### Source Data #### Initial Data Collection and Normalization The data was obtained by collecting 10 news timelines from the NewsLens project, selecting timelines diverse in topic that each contained between 80 and 300 news articles. #### Who are the source language producers? The source language producers are journalists or members of the newsroom of 34 news organizations listed in the paper. ### Annotations #### Annotation process Each timeline was annotated for group IDs by 5 independent annotators. The 5 annotations were merged into a single annotation named the global groups. The global group IDs are then used to generate all pairs of headlines within timelines with binary labels: 1 if two headlines are part of the same global group, and 0 otherwise. A heuristic is used to remove negative examples to obtain a final dataset that has a class imbalance of 1 positive example to 5 negative examples. #### Who are the annotators? Annotators were authors of the paper and 8 crowd-workers on the Upwork platform. 
The crowd-workers were native English speakers with experience either in proof-reading or data-entry. ### Personal and Sensitive Information Annotators' identities have been anonymized. Due to the public nature of news headlines, it is not expected that the headlines will contain personal sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to facilitate applications that present diverse news coverage. By simplifying the process of developing models that can group headlines that describe a common event, we hope the community can build applications that show news readers diverse sources covering similar events. We note however that the annotations were performed mostly by crowd-workers and that even though inter-annotator agreement was high, it was not perfect. Bias of the annotators therefore remains in the dataset. ### Discussion of Biases There are several sources of bias in the dataset: - Annotator bias: 10 annotators participated in the creation of the dataset. Their opinions and perspectives influenced the creation of the dataset. - Subject matter bias: HLGD consists of headlines from 10 news timelines from diverse topics (space, tech, politics, etc.). This choice has an impact on the types of positive and negative examples that appear in the dataset. - Source selection bias: 33 English-language news sources are represented in the dataset. This selection of news sources has an effect on the content in the timeline, and the overall dataset. - Time-range of the timelines: the timelines selected range from 2010 to 2020, which has an influence on the language and style of news headlines. ### Other Known Limitations For the task of Headline Grouping, inter-annotator agreement is high (0.814) but not perfect. Some decisions for headline grouping are subjective and depend on interpretation of the reader. 
## Additional Information ### Dataset Curators The dataset was initially created by Philippe Laban, Lucas Bandarkar and Marti Hearst at UC Berkeley. ### Licensing Information The licensing status of the dataset depends on the legal status of news headlines. It is commonly held that News Headlines fall under "fair-use" ([American Bar blog post](https://www.americanbar.org/groups/gpsolo/publications/gp_solo/2011/september/fair_use_news_reviews/)). The dataset only distributes headlines, a URL and a publication date. Users of the dataset can then retrieve additional information (such as the body content, author, etc.) directly by querying the URL. ### Citation Information ``` @inproceedings{Laban2021NewsHG, title={News Headline Grouping as a Challenging NLU Task}, author={Laban, Philippe and Bandarkar, Lucas and Hearst, Marti A}, booktitle={NAACL 2021}, publisher = {Association for Computational Linguistics}, year={2021} } ``` ### Contributions Thanks to [@tingofurro](https://github.com/tingofurro) for adding this dataset.
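The pair-generation step described in the HLGD Annotation process section (label 1 when two headlines within a timeline share a global group ID, 0 otherwise) can be sketched in a few lines. The headlines and group IDs below are invented for illustration, and the card's negative-downsampling heuristic is omitted:

```python
from itertools import combinations

# Invented (headline, global group id) pairs standing in for one annotated
# timeline; a shared group id means the headlines describe the same event.
timeline = [
    ("France fines Google over data consent", 3),
    ("Google hit with record French privacy fine", 3),
    ("NASA delays lunar mission", 7),
]

# Label every within-timeline headline pair: 1 if both headlines share a
# global group id, 0 otherwise.
pairs = [
    (a, b, int(group_a == group_b))
    for (a, group_a), (b, group_b) in combinations(timeline, 2)
]
```

For the three invented headlines this yields three pairs, of which only the first (the two Google headlines) is positive; in the real dataset a heuristic then drops negatives down to the 1:5 ratio stated in the card.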
hlgd
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "headline-grouping", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Headline Grouping (HLGD)", "tags": ["headline-grouping"], "dataset_info": {"features": [{"name": "timeline_id", "dtype": {"class_label": {"names": {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9}}}}, {"name": "headline_a", "dtype": "string"}, {"name": "headline_b", "dtype": "string"}, {"name": "date_a", "dtype": "string"}, {"name": "date_b", "dtype": "string"}, {"name": "url_a", "dtype": "string"}, {"name": "url_b", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "same_event", "1": "different_event"}}}}], "splits": [{"name": "train", "num_bytes": 6447212, "num_examples": 15492}, {"name": "test", "num_bytes": 941145, "num_examples": 2495}, {"name": "validation", "num_bytes": 798302, "num_examples": 2069}], "download_size": 1858948, "dataset_size": 8186659}}
2024-01-18T11:05:37+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #headline-grouping #region-us
Dataset Card for Headline Grouping (HLGD) ========================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: N/A * Point of Contact: phillab (at) berkeley (dot) edu ### Dataset Summary HLGD is a binary classification dataset consisting of 20,056 labeled news headline pairs indicating whether the two headlines describe the same underlying world event or not. The dataset comes with an existing split between 'train', 'validation' and 'test' (60-20-20). ### Supported Tasks and Leaderboards The paper (NAACL2021) introducing HLGD proposes three challenges making use of various amounts of data: * Challenge 1: Headline-only. Models must make predictions using only the text of both headlines. * Challenge 2: Headline + Time. Models must make predictions using the headline and publication date of the two headlines. * Challenge 3: Headline + Time + Other. Models can make predictions using the headline, publication date as well as any other relevant meta-data that can be obtained through the URL attached to the headline (full article content, authors, news source, etc.) ### Languages The dataset is in English. Dataset Structure ----------------- ### Data Instances A typical instance consists of a timeline\_id, two headlines (A/B), each associated with a URL, and a date. Finally, a label indicates whether the two headlines describe the same underlying event (1) or not (0). 
Below is an example from the training set: ### Data Fields * 'timeline\_id': Represents the id of the timeline that the headline pair belongs to (values 0 to 9). The dev set is composed of timelines 0 and 5, and the test set timelines 7 and 8 * 'headline\_a', 'headline\_b': Raw text for the headline pair being compared * 'date\_a', 'date\_b': Publication date of the respective headlines, in the 'YYYY-MM-DD' format * 'url\_a', 'url\_b': Original URL of the respective headlines. Can be used to retrieve additional meta-data on the headline. * 'label': 1 if the two headlines are part of the same headline group and describe the same underlying event, 0 otherwise. ### Data Splits Dataset Creation ---------------- ### Curation Rationale The task of grouping headlines from diverse news sources discussing the same underlying event is important to enable interfaces that can present the diversity of coverage of unfolding news events. Many news aggregators (such as Google or Yahoo news) present several sources for a given event, with an objective to highlight coverage diversity. Automatic grouping of news headlines and articles remains challenging as headlines are short, heavily-stylized texts. The HeadLine Grouping Dataset introduces the first benchmark to evaluate NLU models' ability to group headlines according to the underlying event they describe. ### Source Data #### Initial Data Collection and Normalization The data was obtained by collecting 10 news timelines from the NewsLens project, selecting timelines diverse in topic that each contained between 80 and 300 news articles. #### Who are the source language producers? The source language producers are journalists or members of the newsroom of 34 news organizations listed in the paper. ### Annotations #### Annotation process Each timeline was annotated for group IDs by 5 independent annotators. The 5 annotations were merged into a single annotation named the global groups. 
The global group IDs are then used to generate all pairs of headlines within timelines with binary labels: 1 if two headlines are part of the same global group, and 0 otherwise. A heuristic is used to remove negative examples to obtain a final dataset that has a class imbalance of 1 positive example to 5 negative examples.

#### Who are the annotators?

Annotators were authors of the paper and 8 crowd-workers on the Upwork platform. The crowd-workers were native English speakers with experience either in proof-reading or data-entry.

### Personal and Sensitive Information

Annotators' identities have been anonymized. Due to the public nature of news headlines, it is not expected that the headlines will contain personal sensitive information.

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

The purpose of this dataset is to facilitate applications that present diverse news coverage.

By simplifying the process of developing models that can group headlines that describe a common event, we hope the community can build applications that show news readers diverse sources covering similar events.

We note, however, that the annotations were performed mostly by crowd-workers, and that even though inter-annotator agreement was high, it was not perfect. Bias of the annotators therefore remains in the dataset.

### Discussion of Biases

There are several sources of bias in the dataset:

* Annotator bias: 10 annotators participated in the creation of the dataset. Their opinions and perspectives influenced the creation of the dataset.
* Subject matter bias: HLGD consists of headlines from 10 news timelines on diverse topics (space, tech, politics, etc.). This choice has an impact on the types of positive and negative examples that appear in the dataset.
* Source selection bias: 33 English-language news sources are represented in the dataset.
This selection of news sources has an effect on the content in the timeline, and the overall dataset.
* Time-range of the timelines: the timelines selected range from 2010 to 2020, which has an influence on the language and style of news headlines.

### Other Known Limitations

For the task of Headline Grouping, inter-annotator agreement is high (0.814) but not perfect. Some decisions for headline grouping are subjective and depend on the interpretation of the reader.

Additional Information
----------------------

### Dataset Curators

The dataset was initially created by Philippe Laban, Lucas Bandarkar and Marti Hearst at UC Berkeley.

### Licensing Information

The licensing status of the dataset depends on the legal status of news headlines. It is commonly held that news headlines fall under "fair use" (American Bar blog post). The dataset only distributes headlines, a URL and a publication date. Users of the dataset can then retrieve additional information (such as the body content, author, etc.) directly by querying the URL.

### Contributions

Thanks to @tingofurro for adding this dataset.
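The pair-construction step described under "Annotation process" above (all headline pairs labeled by global group membership, with negatives downsampled to roughly 1:5) can be sketched as follows. The function name, the tuple layout, and the exact sampling strategy are illustrative assumptions, not the authors' code:

```python
import itertools
import random

def make_pairs(headlines, neg_per_pos=5, seed=0):
    """Generate binary-labeled headline pairs from global group IDs.

    `headlines` is a list of (headline_text, global_group_id) tuples from a
    single timeline. Every pair of headlines in the same global group gets
    label 1, every other pair label 0; negatives are then downsampled toward
    the 1-positive-to-5-negative ratio mentioned in the card. This mimics,
    but does not reproduce, the paper's heuristic.
    """
    positives, negatives = [], []
    for (h_a, g_a), (h_b, g_b) in itertools.combinations(headlines, 2):
        pair = (h_a, h_b, int(g_a == g_b))
        (positives if g_a == g_b else negatives).append(pair)
    rng = random.Random(seed)
    kept = rng.sample(negatives, min(len(negatives), neg_per_pos * len(positives)))
    return positives + kept

# Tiny illustrative timeline: two headlines about the same event (group 0)
# and two unrelated headlines (groups 1 and 2).
pairs = make_pairs([
    ("Team A wins final", 0),
    ("Team A victorious in final", 0),
    ("CEO of B resigns", 1),
    ("C launches new rocket", 2),
])
```

With four headlines this yields six pairs: one positive and five negatives, all of which survive the 1:5 cap.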
TAGS
#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #headline-grouping #region-us
2560d85e98ccffe0b66f08bbe9ae9fcd7a1c2605
# Dataset Card for HopeEDI

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Hope Speech Detection for Equality, Diversity, and Inclusion-EACL 2021](https://competitions.codalab.org/competitions/27653#learn_the_details)
- **Repository:** [HopeEDI data repository](https://competitions.codalab.org/competitions/27653#participate-get_data)
- **Paper:** [HopeEDI: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion](https://www.aclweb.org/anthology/2020.peoples-1.5/)
- **Leaderboard:** [Rank list](https://competitions.codalab.org/competitions/27653#results)
- **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:[email protected])

### Dataset Summary

A Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube, with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not.
To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting.

### Supported Tasks and Leaderboards

To identify hope speech in comments/posts on social media.

### Languages

English, Tamil and Malayalam

## Dataset Structure

### Data Instances

An example from the English dataset looks as follows:

| text | label |
| :------ | :----- |
| all lives matter .without that we never have peace so to me forever all lives matter. | Hope_speech |
| I think it's cool that you give people a voice to speak out with here on this channel. | Hope_speech |

An example from the Tamil dataset looks as follows:

| text | label |
| :------ | :----- |
| Idha solla ivalo naala | Non_hope_speech |
| இன்று தேசிய பெண் குழந்தைகள் தினம்.. பெண் குழந்தைகளை போற்றுவோம்..அவர்களை பாதுகாப்போம்... | Hope_speech |

An example from the Malayalam dataset looks as follows:

| text | label |
| :------ | :----- |
| ഇത്രെയും കഷ്ടപ്പെട്ട് വളർത്തിയ ആ അമ്മയുടെ മുഖം കണ്ടപ്പോൾ കണ്ണ് നിറഞ്ഞു പോയി | Hope_speech |
| snehikunavar aanayalum pennayalum onnichu jeevikatte..aareyum compel cheythitallalooo..parasparamulla ishtathodeyalle...avarum jeevikatte..🥰🥰 | Hope_speech |

### Data Fields

English

- `text`: English comment.
- `label`: list of the possible values: "Hope_speech", "Non_hope_speech", "not-English"

Tamil

- `text`: Tamil-English code-mixed comment.
- `label`: list of the possible values: "Hope_speech", "Non_hope_speech", "not-Tamil"

Malayalam

- `text`: Malayalam-English code-mixed comment.
- `label`: list of the possible values: "Hope_speech", "Non_hope_speech", "not-malayalam"

### Data Splits

|           | train | validation |
| --------- |------:|-----------:|
| English   | 22762 |       2843 |
| Tamil     | 16160 |       2018 |
| Malayalam |  8564 |       1070 |

## Dataset Creation

### Curation Rationale

Hope is considered significant for the well-being, recuperation and restoration of human life by health professionals.
Hate speech or offensive language detection datasets are not available for code-mixed Tamil and code-mixed Malayalam, and they do not take into account LGBTIQ people, women in STEM and other minorities. Thus, we cannot use existing hate speech or offensive language detection datasets to detect hope or non-hope for EDI of minorities.

### Source Data

#### Initial Data Collection and Normalization

For English, we collected data on recent topics of EDI, including women in STEM, LGBTIQ issues, COVID-19, Black Lives Matter, United Kingdom (UK) versus China, United States of America (USA) versus China and Australia versus China, from YouTube video comments. The data was collected from videos of people from English-speaking countries, such as Australia, Canada, the Republic of Ireland, the United Kingdom, the United States of America and New Zealand.

For Tamil and Malayalam, we collected data from India on recent topics regarding LGBTIQ issues, COVID-19, women in STEM, the Indo-China war and Dravidian affairs.

#### Who are the source language producers?

YouTube users

### Annotations

#### Annotation process

We created Google Forms to collect annotations from annotators. Each form contained a maximum of 100 comments, and each page contained a maximum of 10 comments, to maintain the quality of annotation. We collected information on the gender, educational background and medium of schooling of the annotators to know the diversity of the annotators and avoid bias. We educated annotators by providing them with YouTube videos on EDI. A minimum of three annotators annotated each form.

#### Who are the annotators?

For English-language comments, annotators were from Australia, the Republic of Ireland, the United Kingdom and the United States of America. For Tamil, we were able to get annotations both from people from the state of Tamil Nadu in India and from Sri Lanka. Most of the annotators were graduate or post-graduate students.
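The card states that at least three annotators labelled each form, but not how the votes were merged into a single label. A common choice is majority voting, sketched below; the function name and the tie-breaking behaviour are assumptions, not the authors' procedure:

```python
from collections import Counter

def merge_annotations(votes):
    """Collapse several annotators' labels for one comment into one label
    by majority vote. Ties fall back to the first label that reached the
    top count; a real pipeline would more likely adjudicate ties manually."""
    if not votes:
        raise ValueError("need at least one annotation")
    return Counter(votes).most_common(1)[0][0]

merged = merge_annotations(["Hope_speech", "Non_hope_speech", "Hope_speech"])
```

With three annotators per comment, a strict majority exists whenever at least two agree, so ties only arise with larger, even vote counts.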
### Personal and Sensitive Information

Social media data is highly sensitive, and even more so when it is related to minority populations, such as the LGBTIQ community or women. We have taken full consideration to minimise the risk associated with individual identity in the data by removing personal information, such as names (but not celebrity names), from the dataset. However, to study EDI, we needed to keep information relating to the following characteristics: racial, gender, sexual orientation, ethnic origin and philosophical beliefs. Annotators were only shown anonymised posts and agreed to make no attempts to contact the comment creators. The dataset will only be made available for research purposes to researchers who agree to follow ethical guidelines.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This work is licensed under a [Creative Commons Attribution 4.0 International Licence](http://creativecommons.org/licenses/by/4.0/).

### Citation Information

```
@inproceedings{chakravarthi-2020-hopeedi,
    title = "{H}ope{EDI}: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion",
    author = "Chakravarthi, Bharathi Raja",
    booktitle = "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
    month = dec,
    year = "2020",
    address = "Barcelona, Spain (Online)",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.peoples-1.5",
    pages = "41--53",
    abstract = "Over the past few years, systems have been developed to control online content and eliminate abusive, offensive or hate speech content.
However, people in power sometimes misuse this form of censorship to obstruct the democratic right of freedom of speech. Therefore, it is imperative that research should take a positive reinforcement approach towards online content that is encouraging, positive and supportive contents. Until now, most studies have focused on solving this problem of negativity in the English language, though the problem is much more than just harmful content. Furthermore, it is multilingual as well. Thus, we have constructed a Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting. We determined that the inter-annotator agreement of our dataset using Krippendorff{'}s alpha. Further, we created several baselines to benchmark the resulting dataset and the results have been expressed using precision, recall and F1-score. The dataset is publicly available for the research community. We hope that this resource will spur further research on encouraging inclusive and responsive speech that reinforces positiveness.",
}
```

### Contributions

Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
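As a usage note on the label encodings listed under "Data Fields": when the labels are loaded as integer class ids (the 0/1/2 ordering below matches the class encodings given in the dataset metadata), a small lookup recovers the names. The helper function is illustrative:

```python
# Per-configuration class-name lists, in the order given in the dataset
# metadata: 0 -> Hope_speech, 1 -> Non_hope_speech, 2 -> not-<language>.
LABEL_NAMES = {
    "english": ["Hope_speech", "Non_hope_speech", "not-English"],
    "tamil": ["Hope_speech", "Non_hope_speech", "not-Tamil"],
    "malayalam": ["Hope_speech", "Non_hope_speech", "not-malayalam"],
}

def label_name(config: str, label_id: int) -> str:
    """Translate an integer class id back into its string label."""
    return LABEL_NAMES[config][label_id]

example = label_name("tamil", 2)
```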
hope_edi
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:ml", "language:ta", "license:cc-by-4.0", "hope-speech-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en", "ml", "ta"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual", "multilingual"], "size_categories": ["10K<n<100K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "paperswithcode_id": "hopeedi", "pretty_name": "HopeEDI: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion", "config_names": ["english", "malayalam", "tamil"], "tags": ["hope-speech-classification"], "dataset_info": [{"config_name": "english", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Hope_speech", "1": "Non_hope_speech", "2": "not-English"}}}}], "splits": [{"name": "train", "num_bytes": 2306656, "num_examples": 22762}, {"name": "validation", "num_bytes": 288663, "num_examples": 2843}], "download_size": 2739901, "dataset_size": 2595319}, {"config_name": "tamil", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Hope_speech", "1": "Non_hope_speech", "2": "not-Tamil"}}}}], "splits": [{"name": "train", "num_bytes": 1531013, "num_examples": 16160}, {"name": "validation", "num_bytes": 197378, "num_examples": 2018}], "download_size": 1795767, "dataset_size": 1728391}, {"config_name": "malayalam", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Hope_speech", "1": "Non_hope_speech", "2": "not-malayalam"}}}}], "splits": [{"name": "train", "num_bytes": 1492031, "num_examples": 8564}, {"name": "validation", "num_bytes": 180713, "num_examples": 1070}], "download_size": 1721534, "dataset_size": 1672744}]}
2024-01-18T11:05:39+00:00
[]
[ "en", "ml", "ta" ]
We have taken full consideration to minimise the risk associated with individual identity in the data by removing personal information from dataset, such as names but not celebrity names. However, to study EDI, we needed to keep information relating to the following characteristics; racial, gender, sexual orientation, ethnic origin and philosophical beliefs. Annotators were only shown anonymised posts and agreed to make no attempts to contact the comment creator. The dataset will only be made available for research purpose to the researcher who agree to follow ethical\nguidelines\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International Licence", "### Contributions\n\n\nThanks to @jamespaultg for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-English #language-Malayalam #language-Tamil #license-cc-by-4.0 #hope-speech-classification #region-us \n", "### Dataset Summary\n\n\nA Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting.", "### Supported Tasks and Leaderboards\n\n\nTo identify hope speech in the comments/posts in social media.", "### Languages\n\n\nEnglish, Tamil and Malayalam\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the English dataset looks as follows:\n\n\n\nAn example from the Tamil dataset looks as follows:\n\n\n\nAn example from the Malayalam dataset looks as follows:", "### Data Fields\n\n\nEnglish\n\n\n* 'text': English comment.\n* 'label': list of the possible values: \"Hope\\_speech\", \"Non\\_hope\\_speech\", \"not-English\"\n\n\nTamil\n\n\n* 'text': Tamil-English code mixed comment.\n* 'label': list of the possible values: \"Hope\\_speech\", \"Non\\_hope\\_speech\", \"not-Tamil\"\n\n\nMalayalam\n\n\n* 'text': Malayalam-English code mixed comment.\n* 'label': list of the possible values: \"Hope\\_speech\", \"Non\\_hope\\_speech\", \"not-malayalam\"", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nHope is considered significant for the well-being, recuperation and restoration of human life by health professionals.\nHate speech or offensive language detection dataset is not 
available for code-mixed Tamil and code-mixed Malayalam, and it does not take into account LGBTIQ, women in STEM and other minorities. Thus, we cannot use existing hate speech or offensive language detection datasets to detect hope or non-hope for EDI of minorities.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nFor English, we collected data on recent topics of EDI, including women in STEM, LGBTIQ issues, COVID-19, Black Lives Matters, United Kingdom (UK) versus China, United States of America (USA) versus China and Australia versus China from YouTube video comments. The data was collected from videos of people from English-speaking countries, such as Australia, Canada, the Republic of Ireland, United Kingdom, the United States of America and New Zealand.\n\n\nFor Tamil and Malayalam, we collected data from India on the recent topics regarding LGBTIQ issues, COVID-19, women in STEM, the Indo-China war and Dravidian affairs.", "#### Who are the source language producers?\n\n\nYoutube users", "### Annotations", "#### Annotation process\n\n\nWe created Google forms to collect annotations from annotators. Each form contained a maximum of 100 comments, and each page contained a maximum of 10 comments to maintain the quality of annotation. We collected information on the gender, educational background and the medium of schooling of the annotator to know the diversity of the annotator and avoid bias. We educated annotators by providing them with YouTube videos on EDI. A minimum of three annotators annotated each form.", "#### Who are the annotators?\n\n\nFor English language comments, annotators were from Australia, the Republic of Ireland, the United Kingdom and the United States of America. For Tamil, we were able to get annotations from both people from the state of Tamil Nadu of India and from Sri Lanka. 
Most of the annotators were graduate or post-graduate students.", "### Personal and Sensitive Information\n\n\nSocial media data is highly sensitive, and even more so when it is related to the minority population, such as the LGBTIQ community or women. We have taken full consideration to minimise the risk associated with individual identity in the data by removing personal information from dataset, such as names but not celebrity names. However, to study EDI, we needed to keep information relating to the following characteristics; racial, gender, sexual orientation, ethnic origin and philosophical beliefs. Annotators were only shown anonymised posts and agreed to make no attempts to contact the comment creator. The dataset will only be made available for research purpose to the researcher who agree to follow ethical\nguidelines\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International Licence", "### Contributions\n\n\nThanks to @jamespaultg for adding this dataset." ]
087b2e421aa4e6999e5ec0cb486a1d5c35fc1d71
# Dataset Card for "hotpot_qa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://hotpotqa.github.io/](https://hotpotqa.github.io/) - **Repository:** https://github.com/hotpotqa/hotpot - **Paper:** [HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering](https://arxiv.org/abs/1809.09600) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.27 GB - **Size of the generated dataset:** 1.24 GB - **Total amount of disk used:** 2.52 GB ### Dataset Summary HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for 
reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems’ ability to extract relevant facts and perform necessary comparison. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### distractor - **Size of downloaded dataset files:** 612.75 MB - **Size of the generated dataset:** 598.66 MB - **Total amount of disk used:** 1.21 GB An example of 'validation' looks as follows. ``` { "answer": "This is the answer", "context": { "sentences": [["Sent 1"], ["Sent 21", "Sent 22"]], "title": ["Title1", "Title 2"] }, "id": "000001", "level": "medium", "question": "What is the answer?", "supporting_facts": { "sent_id": [0, 1, 3], "title": ["Title of para 1", "Title of para 2", "Title of para 3"] }, "type": "comparison" } ``` #### fullwiki - **Size of downloaded dataset files:** 660.10 MB - **Size of the generated dataset:** 645.80 MB - **Total amount of disk used:** 1.31 GB An example of 'train' looks as follows. ``` { "answer": "This is the answer", "context": { "sentences": [["Sent 1"], ["Sent 2"]], "title": ["Title1", "Title 2"] }, "id": "000001", "level": "hard", "question": "What is the answer?", "supporting_facts": { "sent_id": [0, 1, 3], "title": ["Title of para 1", "Title of para 2", "Title of para 3"] }, "type": "bridge" } ``` ### Data Fields The data fields are the same among all splits. #### distractor - `id`: a `string` feature. - `question`: a `string` feature. - `answer`: a `string` feature. - `type`: a `string` feature. - `level`: a `string` feature. - `supporting_facts`: a dictionary feature containing: - `title`: a `string` feature. 
- `sent_id`: a `int32` feature. - `context`: a dictionary feature containing: - `title`: a `string` feature. - `sentences`: a `list` of `string` features. #### fullwiki - `id`: a `string` feature. - `question`: a `string` feature. - `answer`: a `string` feature. - `type`: a `string` feature. - `level`: a `string` feature. - `supporting_facts`: a dictionary feature containing: - `title`: a `string` feature. - `sent_id`: a `int32` feature. - `context`: a dictionary feature containing: - `title`: a `string` feature. - `sentences`: a `list` of `string` features. ### Data Splits #### distractor | |train|validation| |----------|----:|---------:| |distractor|90447| 7405| #### fullwiki | |train|validation|test| |--------|----:|---------:|---:| |fullwiki|90447| 7405|7405| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information HotpotQA is distributed under a [CC BY-SA 4.0 License](http://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information ``` @inproceedings{yang2018hotpotqa, title={{HotpotQA}: A Dataset for Diverse, Explainable Multi-hop Question Answering}, author={Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William W. and Salakhutdinov, Ruslan and Manning, Christopher D.}, booktitle={Conference on Empirical Methods in Natural Language Processing ({EMNLP})}, year={2018} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
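As the field descriptions above show, `supporting_facts` references sentences in `context` by article title and sentence index (parallel lists). A minimal sketch of resolving those references to sentence text — the record below is illustrative, shaped like the card's examples rather than taken from the dataset:

```python
# Illustrative HotpotQA-shaped record: context holds parallel title/sentences
# lists; supporting_facts points into them by (title, sent_id).
example = {
    "question": "What is the answer?",
    "context": {
        "title": ["Title1", "Title 2"],
        "sentences": [["Sent 1"], ["Sent 21", "Sent 22"]],
    },
    "supporting_facts": {"title": ["Title 2"], "sent_id": [1]},
}

def supporting_sentences(ex):
    """Resolve each (title, sent_id) pair to the referenced sentence text."""
    by_title = dict(zip(ex["context"]["title"], ex["context"]["sentences"]))
    return [
        by_title[title][sid]
        for title, sid in zip(ex["supporting_facts"]["title"],
                              ex["supporting_facts"]["sent_id"])
    ]

print(supporting_sentences(example))  # ['Sent 22']
```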
hotpot_qa
[ "task_categories:question-answering", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "multi-hop", "arxiv:1809.09600", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "paperswithcode_id": "hotpotqa", "pretty_name": "HotpotQA", "tags": ["multi-hop"], "dataset_info": [{"config_name": "distractor", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "supporting_facts", "sequence": [{"name": "title", "dtype": "string"}, {"name": "sent_id", "dtype": "int32"}]}, {"name": "context", "sequence": [{"name": "title", "dtype": "string"}, {"name": "sentences", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 552949315, "num_examples": 90447}, {"name": "validation", "num_bytes": 45716111, "num_examples": 7405}], "download_size": 612746344, "dataset_size": 598665426}, {"config_name": "fullwiki", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "supporting_facts", "sequence": [{"name": "title", "dtype": "string"}, {"name": "sent_id", "dtype": "int32"}]}, {"name": "context", "sequence": [{"name": "title", "dtype": "string"}, {"name": "sentences", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 552949315, "num_examples": 90447}, {"name": "validation", "num_bytes": 46848601, "num_examples": 7405}, {"name": "test", "num_bytes": 46000102, "num_examples": 7405}], "download_size": 660094672, "dataset_size": 645798018}]}
2024-01-18T11:05:40+00:00
[ "1809.09600" ]
[ "en" ]
TAGS #task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-4.0 #multi-hop #arxiv-1809.09600 #region-us
Dataset Card for "hotpot\_qa" ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering * Point of Contact: * Size of downloaded dataset files: 1.27 GB * Size of the generated dataset: 1.24 GB * Total amount of disk used: 2.52 GB ### Dataset Summary HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems’ ability to extract relevant facts and perform necessary comparison. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### distractor * Size of downloaded dataset files: 612.75 MB * Size of the generated dataset: 598.66 MB * Total amount of disk used: 1.21 GB An example of 'validation' looks as follows. #### fullwiki * Size of downloaded dataset files: 660.10 MB * Size of the generated dataset: 645.80 MB * Total amount of disk used: 1.31 GB An example of 'train' looks as follows. 
### Data Fields The data fields are the same among all splits. #### distractor * 'id': a 'string' feature. * 'question': a 'string' feature. * 'answer': a 'string' feature. * 'type': a 'string' feature. * 'level': a 'string' feature. * 'supporting\_facts': a dictionary feature containing: + 'title': a 'string' feature. + 'sent\_id': a 'int32' feature. * 'context': a dictionary feature containing: + 'title': a 'string' feature. + 'sentences': a 'list' of 'string' features. #### fullwiki * 'id': a 'string' feature. * 'question': a 'string' feature. * 'answer': a 'string' feature. * 'type': a 'string' feature. * 'level': a 'string' feature. * 'supporting\_facts': a dictionary feature containing: + 'title': a 'string' feature. + 'sent\_id': a 'int32' feature. * 'context': a dictionary feature containing: + 'title': a 'string' feature. + 'sentences': a 'list' of 'string' features. ### Data Splits #### distractor #### fullwiki Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information HotpotQA is distributed under a CC BY-SA 4.0 License. ### Contributions Thanks to @albertvillanova, @ghomasHudson for adding this dataset.
[ "### Dataset Summary\n\n\nHotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowingQA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems’ ability to extract relevant facts and perform necessary comparison.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### distractor\n\n\n* Size of downloaded dataset files: 612.75 MB\n* Size of the generated dataset: 598.66 MB\n* Total amount of disk used: 1.21 GB\n\n\nAn example of 'validation' looks as follows.", "#### fullwiki\n\n\n* Size of downloaded dataset files: 660.10 MB\n* Size of the generated dataset: 645.80 MB\n* Total amount of disk used: 1.31 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### distractor\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answer': a 'string' feature.\n* 'type': a 'string' feature.\n* 'level': a 'string' feature.\n* 'supporting\\_facts': a dictionary feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sent\\_id': a 'int32' feature.\n* 'context': a dictionary feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sentences': a 'list' of 'string' features.", "#### fullwiki\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answer': a 'string' feature.\n* 'type': a 'string' feature.\n* 'level': a 'string' feature.\n* 'supporting\\_facts': a dictionary feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sent\\_id': a 'int32' feature.\n* 'context': a dictionary 
feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sentences': a 'list' of 'string' features.", "### Data Splits", "#### distractor", "#### fullwiki\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nHotpotQA is distributed under a CC BY-SA 4.0 License.", "### Contributions\n\n\nThanks to @albertvillanova, @ghomasHudson for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-4.0 #multi-hop #arxiv-1809.09600 #region-us \n", "### Dataset Summary\n\n\nHotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowingQA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems’ ability to extract relevant facts and perform necessary comparison.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### distractor\n\n\n* Size of downloaded dataset files: 612.75 MB\n* Size of the generated dataset: 598.66 MB\n* Total amount of disk used: 1.21 GB\n\n\nAn example of 'validation' looks as follows.", "#### fullwiki\n\n\n* Size of downloaded dataset files: 660.10 MB\n* Size of the generated dataset: 645.80 MB\n* Total amount of disk used: 1.31 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### distractor\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answer': a 'string' feature.\n* 'type': a 'string' feature.\n* 'level': a 'string' feature.\n* 'supporting\\_facts': a dictionary feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sent\\_id': a 'int32' feature.\n* 'context': a dictionary feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sentences': a 'list' of 'string' features.", "#### fullwiki\n\n\n* 'id': a 'string' feature.\n* 'question': a 
'string' feature.\n* 'answer': a 'string' feature.\n* 'type': a 'string' feature.\n* 'level': a 'string' feature.\n* 'supporting\\_facts': a dictionary feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sent\\_id': a 'int32' feature.\n* 'context': a dictionary feature containing:\n\t+ 'title': a 'string' feature.\n\t+ 'sentences': a 'list' of 'string' features.", "### Data Splits", "#### distractor", "#### fullwiki\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nHotpotQA is distributed under a CC BY-SA 4.0 License.", "### Contributions\n\n\nThanks to @albertvillanova, @ghomasHudson for adding this dataset." ]
c0e43052759879b3461642ca6c0dd26658f47691
# Dataset Card for HoVer ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://hover-nlp.github.io/ - **Repository:** https://github.com/hover-nlp/hover - **Paper:** https://arxiv.org/abs/2011.03088 - **Leaderboard:** https://hover-nlp.github.io/ - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` {'id': 14856, 'uid': 'a0cf45ea-b5cd-4c4e-9ffa-73b39ebd78ce', 'claim': 'The park at which Tivolis Koncertsal is located opened on 15 August 1843.', 'supporting_facts': [{'key': 'Tivolis Koncertsal', 'value': 0}, {'key': 'Tivoli Gardens', 'value': 1}], 'label': 'SUPPORTED', 'num_hops': 2, 'hpqa_id': '5abca1a55542993a06baf937'} ``` Please note that in the test set, only id, uid and claim are available. 
Labels are not available in the test set and are represented by -1. ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
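In the sample above, `supporting_facts` is a list of `{key, value}` pairs, where `key` is a Wikipedia article title and `value` a sentence index within it. A small sketch of grouping those facts by source article (the helper name is illustrative, not part of the dataset):

```python
from collections import defaultdict

# Record shaped like the HoVer training sample shown above.
sample = {
    "claim": "The park at which Tivolis Koncertsal is located "
             "opened on 15 August 1843.",
    "supporting_facts": [
        {"key": "Tivolis Koncertsal", "value": 0},
        {"key": "Tivoli Gardens", "value": 1},
    ],
    "label": "SUPPORTED",
    "num_hops": 2,
}

def facts_by_title(example):
    """Group supporting sentence indices by their source article title."""
    grouped = defaultdict(list)
    for fact in example["supporting_facts"]:
        grouped[fact["key"]].append(fact["value"])
    return dict(grouped)

print(facts_by_title(sample))
# {'Tivolis Koncertsal': [0], 'Tivoli Gardens': [1]}
```

For this two-hop claim, the grouped view makes the hop structure visible: each hop draws evidence from a different article.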
hover
[ "task_categories:text-retrieval", "task_ids:fact-checking-retrieval", "annotations_creators:expert-generated", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "arxiv:2011.03088", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated", "found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["fact-checking-retrieval"], "paperswithcode_id": "hover", "pretty_name": "HoVer", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "uid", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "supporting_facts", "list": [{"name": "key", "dtype": "string"}, {"name": "value", "dtype": "int32"}]}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NOT_SUPPORTED", "1": "SUPPORTED"}}}}, {"name": "num_hops", "dtype": "int32"}, {"name": "hpqa_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5532178, "num_examples": 18171}, {"name": "validation", "num_bytes": 1299252, "num_examples": 4000}, {"name": "test", "num_bytes": 927513, "num_examples": 4000}], "download_size": 12257835, "dataset_size": 7758943}}
2024-01-18T11:05:51+00:00
[ "2011.03088" ]
[ "en" ]
TAGS #task_categories-text-retrieval #task_ids-fact-checking-retrieval #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-2011.03088 #region-us
# Dataset Card for HoVer ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: URL - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances A sample from the training set is provided below. Please note that in the test set only id, uid and claim are available. Labels are not available in the test set and are represented by -1. ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for HoVer", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nA sample training set is provided below\n\n\n\nPlease note that in test set sentence only id, uid and claim are available. Labels are not available in test set and are represented by -1.", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-fact-checking-retrieval #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-2011.03088 #region-us \n", "# Dataset Card for HoVer", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nA sample training set is provided below\n\n\n\nPlease note that in test set sentence only id, uid and claim are available. Labels are not available in test set and are represented by -1.", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
1aaf3a5ee169d028f2d6fff576e61dcbff41c33a
# Dataset Card for hrenwac_para ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.ffzg.hr/resources/corpora/hrenwac/ - **Repository:** http://nlp.ffzg.hr/data/corpora/hrenwac/hrenwac.en-hr.txt.gz - **Paper:** http://workshop2013.iwslt.org/downloads/IWSLT-2013-Cettolo.pdf - **Leaderboard:** - **Point of Contact:** [Nikola Ljubešič](mailto:[email protected]) ### Dataset Summary The hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia. The corpus was built with Spidextor (https://github.com/abumatran/spidextor), a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Dataset is bilingual with Croatian and English languages. 
## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license. ### Citation Information ``` @misc{11356/1058, title = {Croatian-English parallel corpus {hrenWaC} 2.0}, author = {Ljube{\v s}i{\'c}, Nikola and Espl{\`a}-Gomis, Miquel and Ortiz Rojas, Sergio and Klubi{\v c}ka, Filip and Toral, Antonio}, url = {http://hdl.handle.net/11356/1058}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} User Licence for Internet Corpora}, year = {2016} } ``` ### Contributions Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
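Given the `translation` feature with languages `["en", "hr"]` declared in this card's dataset_info, a record can be sketched as follows. The sentence pair shown is invented for illustration, not taken from the corpus.

```python
# A minimal sketch of the record layout implied by the card's dataset_info:
# each example is one aligned segment pair under a "translation" feature
# with "en" and "hr" keys. The sentences are illustrative placeholders.
example = {
    "translation": {
        "en": "The castle is open to visitors all year round.",
        "hr": "Dvorac je otvoren za posjetitelje tijekom cijele godine.",
    }
}

# MT toolkits typically want plain (source, target) tuples:
pair = (example["translation"]["en"], example["translation"]["hr"])
```

Since segment-level alignment accuracy is around 80%, downstream pipelines usually apply additional filtering before training on such pairs.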
hrenwac_para
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:hr", "license:cc-by-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en", "hr"], "license": ["cc-by-sa-3.0"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "HrenwacPara", "dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "hr"]}}}], "config_name": "hrenWaC", "splits": [{"name": "train", "num_bytes": 29602110, "num_examples": 99001}], "download_size": 11640281, "dataset_size": 29602110}}
2024-01-18T11:05:53+00:00
[]
[ "en", "hr" ]
TAGS #task_categories-translation #annotations_creators-no-annotation #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-English #language-Croatian #license-cc-by-sa-3.0 #region-us
# Dataset Card for hrenwac_para ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: Nikola Ljubešič ### Dataset Summary The hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia. The corpus was built with Spidextor (URL a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%. ### Supported Tasks and Leaderboards ### Languages Dataset is bilingual with Croatian and English languages. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Dataset is under the CC-BY-SA 3.0 license. ### Contributions Thanks to @IvanZidov for adding this dataset.
[ "# Dataset Card for hrenwac_para", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Nikola Ljubešič", "### Dataset Summary\n\nThe hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia. The corpus was built with Spidextor (URL a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%.", "### Supported Tasks and Leaderboards", "### Languages\n\nDataset is bilingual with Croatian and English languages.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nDataset is under the CC-BY-SA 3.0 license.", "### Contributions\n\nThanks to @IvanZidov for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-no-annotation #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-English #language-Croatian #license-cc-by-sa-3.0 #region-us \n", "# Dataset Card for hrenwac_para", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Nikola Ljubešič", "### Dataset Summary\n\nThe hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia. The corpus was built with Spidextor (URL a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. 
The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%.", "### Supported Tasks and Leaderboards", "### Languages\n\nDataset is bilingual with Croatian and English languages.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nDataset is under the CC-BY-SA 3.0 license.", "### Contributions\n\nThanks to @IvanZidov for adding this dataset." ]
0f095ee5b45da1ead67626976a928a311688a7e4
# Dataset Card for HrWac ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.ffzg.hr/resources/corpora/hrwac/ - **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1064 - **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic11-hrwac.pdf - **Leaderboard:** - **Point of Contact:** [Nikola Ljubešič](mailto:[email protected]) ### Dataset Summary The Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. Serbian). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Dataset is monolingual in Croatian language. 
## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: sentences as strings ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license. ### Citation Information ``` @misc{11356/1064, title = {Croatian web corpus {hrWaC} 2.1}, author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip}, url = {http://hdl.handle.net/11356/1064}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)}, year = {2016} } ``` ### Contributions Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
hrwac
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1B<n<10B", "source_datasets:original", "language:hr", "license:cc-by-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["hr"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1B<n<10B"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "HrWac", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}], "config_name": "hrwac", "splits": [{"name": "train", "num_bytes": 43994569015, "num_examples": 1736944727}], "download_size": 9217221471, "dataset_size": 43994569015}}
2024-01-18T11:05:54+00:00
[]
[ "hr" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1B<n<10B #source_datasets-original #language-Croatian #license-cc-by-sa-3.0 #region-us
# Dataset Card for HrWac ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: Nikola Ljubešič ### Dataset Summary The Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. Serbian). ### Supported Tasks and Leaderboards ### Languages Dataset is monolingual in Croatian language. ## Dataset Structure ### Data Instances ### Data Fields - sentence: sentences as strings ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Dataset is under the CC-BY-SA 3.0 license. ### Contributions Thanks to @IvanZidov for adding this dataset.
[ "# Dataset Card for HrWac", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Nikola Ljubešič", "### Dataset Summary\n\nThe Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. Serbian).", "### Supported Tasks and Leaderboards", "### Languages\n\nDataset is monolingual in Croatian language.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: sentences as strings", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nDataset is under the CC-BY-SA 3.0 license.", "### Contributions\n\nThanks to @IvanZidov for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1B<n<10B #source_datasets-original #language-Croatian #license-cc-by-sa-3.0 #region-us \n", "# Dataset Card for HrWac", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Nikola Ljubešič", "### Dataset Summary\n\nThe Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. 
Serbian).", "### Supported Tasks and Leaderboards", "### Languages\n\nDataset is monolingual in Croatian language.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: sentences as strings", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nDataset is under the CC-BY-SA 3.0 license.", "### Contributions\n\nThanks to @IvanZidov for adding this dataset." ]
2ac5775fe2cb97e6ff096aa5274b0a19a1ea3872
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[Humicroedit](https://www.cs.rochester.edu/u/nhossain/humicroedit.html) - **Repository:** - **Paper:**["President Vows to Cut Taxes Hair": Dataset and Analysis of Creative Text Editing for Humorous Headlines.](http://cs.rochester.edu/~nhossain/humicroedit-naacl-19.pdf) - **Leaderboard:** - **Point of Contact:**[[email protected]] ### Dataset Summary This is the task dataset for SemEval-2020 Task 7: Assessing Humor in Edited News Headlines. ### Supported Tasks and Leaderboards [Task Description Page](https://competitions.codalab.org/competitions/20970) - Regression Task: In this task, given the original and the edited headline, the participant is required to predict the mean funniness of the edited headline. Success on this task is typically measured by achieving a *low* Mean Square Error. 
- Predict the funnier of the two edited headlines: Given the original headline and two edited versions, the participant has to predict which edited version is the funnier of the two. ### Languages English ## Dataset Structure ### Data Instances For subtask-1, i.e. given the original and the edited headline, predict the mean funniness of the edited headline. ``` { 'id': 1183, 'original': 'Kushner to visit <Mexico/> following latest trump tirades.', 'edit': 'therapist', 'grades': '33332', 'meanGrade': 2.8 } ``` For subtask-2, i.e. given the original headline and two edited versions, predict which edited version is the funnier of the two. ``` { 'id': 1183, 'original1': 'Gene Cernan , Last <Astronaut/> on the Moon , Dies at 82', 'edit1': 'Dancer', 'grades1': '1113', 'meanGrade1': 1.2, 'original2': 'Gene Cernan , Last Astronaut on the Moon , <Dies/> at 82', 'edit2': 'impregnated', 'grades2': '30001', 'meanGrade2': 0.8, 'label': 1 } ``` ### Data Fields For subtask-1 - `id`: Unique identifier of an edited headline. - `original`: The headline with the replaced word(s) identified with the </> tag. - `edit`: The new word which replaces the word marked with the </> tag in the `original` field. - `grades`: The concatenation of all the grades given by the different annotators. - `meanGrade`: The mean of all the judges' scores. For subtask-2 - `id`: Unique identifier of an edited headline. - `original1`: The original headline with the replaced word(s) identified with the </> tag. - `edit1`: The new word which replaces the word marked with the </> tag in the `original1` field. - `grades1`: The concatenation of all the grades annotated by different annotators for sentence1. - `meanGrade1`: The mean of all the judges' scores for sentence1. - `original2`: The original headline with the replaced word(s) identified with the </> tag. - `edit2`: The new word which replaces the word marked with the </> tag in the `original2` field. 
- `grades2`: The concatenation of all the grades annotated by the different annotators for sentence2. - `meanGrade2`: The mean of all the judges' scores for sentence2. - `label`: 1 if sentence1 is more humorous than sentence2, 2 if sentence2 is more humorous than sentence1, and 0 if both sentences are equally humorous. ### Data Splits | Sub Task | Train | Dev | Test | Funlines | | ----- | ------ | ---- | ---- | ----- | | Subtask-1: Regression | 9652 | 2419 | 3024 | 8248 | | Subtask-2: Funnier headline prediction | 9381 | 2355 | 2960 | 1958 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data was crowd-sourced by gamifying the task on the website funlines.co. Players rate the headlines on a scale of 0-4. Players are scored based on their editing and rating, and they are ranked on the game’s leaderboard page. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{hossain2019president, title={"President Vows to Cut <Taxes> Hair": Dataset and Analysis of Creative Text Editing for Humorous Headlines}, author={Hossain, Nabil and Krumm, John and Gamon, Michael}, journal={arXiv preprint arXiv:1906.00274}, year={2019} } ``` ### Contributions Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
humicroedit
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "funnier-headline-identification", "funniness-score-prediction", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring"], "paperswithcode_id": "humicroedit", "pretty_name": "Humicroedit", "config_names": ["subtask-1", "subtask-2"], "tags": ["funnier-headline-identification", "funniness-score-prediction"], "dataset_info": [{"config_name": "subtask-1", "features": [{"name": "id", "dtype": "string"}, {"name": "original", "dtype": "string"}, {"name": "edit", "dtype": "string"}, {"name": "grades", "dtype": "string"}, {"name": "meanGrade", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1058589, "num_examples": 9652}, {"name": "test", "num_bytes": 332113, "num_examples": 3024}, {"name": "validation", "num_bytes": 269083, "num_examples": 2419}, {"name": "funlines", "num_bytes": 942376, "num_examples": 8248}], "download_size": 1621456, "dataset_size": 2602161}, {"config_name": "subtask-2", "features": [{"name": "id", "dtype": "string"}, {"name": "original1", "dtype": "string"}, {"name": "edit1", "dtype": "string"}, {"name": "grades1", "dtype": "string"}, {"name": "meanGrade1", "dtype": "float32"}, {"name": "original2", "dtype": "string"}, {"name": "edit2", "dtype": "string"}, {"name": "grades2", "dtype": "string"}, {"name": "meanGrade2", "dtype": "float32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "equal", "1": "sentence1", "2": "sentence2"}}}}], "splits": [{"name": "train", "num_bytes": 2102667, "num_examples": 9381}, {"name": "test", "num_bytes": 665087, "num_examples": 2960}, {"name": "validation", "num_bytes": 535044, "num_examples": 2355}, {"name": "funlines", "num_bytes": 451416, "num_examples": 1958}], "download_size": 1621456, "dataset_size": 3754214}]}
2024-01-18T11:05:56+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #funnier-headline-identification #funniness-score-prediction #region-us
Dataset Card for [Dataset Name] =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage:Humicroedit * Repository: * Paper:"President Vows to Cut Taxes Hair": Dataset and Analysis of Creative Text Editing for Humorous Headlines. * Leaderboard: * Point of Contact:[nhossain@URL] ### Dataset Summary This is the task dataset for SemEval-2020 Task 7: Assessing Humor in Edited News Headlines. ### Supported Tasks and Leaderboards Task Description Page * Regression Task: In this task, given the original and the edited headline, the participant is required to predict the mean funniness of the edited headline. Success on this task is typically measured by achieving a *low* Mean Square Error. * Predict the funnier of the two edited headlines: Given the original headline and two edited versions, the participant has to predict which edited version is the funnier of the two. Success on this task is typically measured by achieving a *high* accuracy. ### Languages English Dataset Structure ----------------- ### Data Instances For subtask-1, i.e Given the original and the edited headline, predict the mean funniness of the edited headline. For subtask-2, i.e Given the original headline and two edited versions, predict which edited version is the funnier of the two. ### Data Fields For subtask-1 * 'id': Unique identifier of an edited headline. * 'original': The headline with replaced word(s) identified with the </> tag. 
* 'edit': The new word which replaces the word marked in </> tag in the original field. * 'grades': 'grades' are the concatenation of all the grades by different annotators. * 'mean' is the mean of all the judges scores. For subtask-2 * 'id': Unique identifier of an edited headline. * 'original1': The original headline with replaced word(s) identified with </> tag. * 'edit1': The new word which replaces the word marked in </> tag in the 'original1' field. * 'grades1': The concatenation of all the grades annotated by different annotators for sentence1. * 'meanGrade1' is the mean of all the judges scores for sentence1. * 'original2': The original headline with replaced word(s) identified with </> tag. * 'edit2': The new word which replaces the word marked in </> tag in the 'original1' field. * 'grades2': The concatenation of all the grades annotated by different annotators for the sentence2. * 'meanGrade2' is the mean of all the judges scores for sentence2. * 'label' is 1 if sentence1 is more humourous than sentence2, 2 if sentence 2 is more humorous than sentence1, 0 if both the sentences are equally humorous ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Crowd-sourced the data by gamifying it as on the website URL. Players rate the headlines on a scale of 0-4. Players are scored based on their editing and rating, and they are ranked on the game’s leaderboard page. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @saradhix for adding this dataset.
[ "### Dataset Summary\n\n\nThis is the task dataset for SemEval-2020 Task 7: Assessing Humor in Edited News Headlines.", "### Supported Tasks and Leaderboards\n\n\nTask Description Page\n\n\n* Regression Task: In this task, given the original and the edited headline, the participant is required to predict the mean funniness of the edited headline. Success on this task is typically measured by achieving a *low* Mean Square Error.\n* Predict the funnier of the two edited headlines: Given the original headline and two edited versions, the participant has to predict which edited version is the funnier of the two. Success on this task is typically measured by achieving a *high* accuracy.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor subtask-1, i.e Given the original and the edited headline, predict the mean funniness of the edited headline.\n\n\nFor subtask-2, i.e Given the original headline and two edited versions, predict which edited version is the funnier of the two.", "### Data Fields\n\n\nFor subtask-1\n\n\n* 'id': Unique identifier of an edited headline.\n* 'original': The headline with replaced word(s) identified with the </> tag.\n* 'edit': The new word which replaces the word marked in </> tag in the original field.\n* 'grades': 'grades' are the concatenation of all the grades by different annotators.\n* 'mean' is the mean of all the judges scores.\n\n\nFor subtask-2\n\n\n* 'id': Unique identifier of an edited headline.\n* 'original1': The original headline with replaced word(s) identified with </> tag.\n* 'edit1': The new word which replaces the word marked in </> tag in the 'original1' field.\n* 'grades1': The concatenation of all the grades annotated by different annotators for sentence1.\n* 'meanGrade1' is the mean of all the judges scores for sentence1.\n* 'original2': The original headline with replaced word(s) identified with </> tag.\n* 'edit2': The new word which replaces the word marked in </> 
tag in the 'original1' field.\n* 'grades2': The concatenation of all the grades annotated by different annotators for the sentence2.\n* 'meanGrade2' is the mean of all the judges scores for sentence2.\n* 'label' is 1 if sentence1 is more humourous than sentence2,\n2 if sentence 2 is more humorous than sentence1,\n0 if both the sentences are equally humorous", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nCrowd-sourced the data by gamifying it as on the website URL. Players rate the headlines on a scale of 0-4.\nPlayers are scored based on their editing and rating, and they\nare ranked on the game’s leaderboard page.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @saradhix for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #funnier-headline-identification #funniness-score-prediction #region-us \n", "### Dataset Summary\n\n\nThis is the task dataset for SemEval-2020 Task 7: Assessing Humor in Edited News Headlines.", "### Supported Tasks and Leaderboards\n\n\nTask Description Page\n\n\n* Regression Task: In this task, given the original and the edited headline, the participant is required to predict the mean funniness of the edited headline. Success on this task is typically measured by achieving a *low* Mean Square Error.\n* Predict the funnier of the two edited headlines: Given the original headline and two edited versions, the participant has to predict which edited version is the funnier of the two. Success on this task is typically measured by achieving a *high* accuracy.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor subtask-1, i.e Given the original and the edited headline, predict the mean funniness of the edited headline.\n\n\nFor subtask-2, i.e Given the original headline and two edited versions, predict which edited version is the funnier of the two.", "### Data Fields\n\n\nFor subtask-1\n\n\n* 'id': Unique identifier of an edited headline.\n* 'original': The headline with replaced word(s) identified with the </> tag.\n* 'edit': The new word which replaces the word marked in </> tag in the original field.\n* 'grades': 'grades' are the concatenation of all the grades by different annotators.\n* 'mean' is the mean of all the judges scores.\n\n\nFor subtask-2\n\n\n* 'id': Unique identifier of an edited headline.\n* 'original1': The original headline with replaced word(s) identified with </> tag.\n* 'edit1': The new word which replaces 
the word marked in </> tag in the 'original1' field.\n* 'grades1': The concatenation of all the grades annotated by different annotators for sentence1.\n* 'meanGrade1' is the mean of all the judges scores for sentence1.\n* 'original2': The original headline with replaced word(s) identified with </> tag.\n* 'edit2': The new word which replaces the word marked in </> tag in the 'original1' field.\n* 'grades2': The concatenation of all the grades annotated by different annotators for the sentence2.\n* 'meanGrade2' is the mean of all the judges scores for sentence2.\n* 'label' is 1 if sentence1 is more humourous than sentence2,\n2 if sentence 2 is more humorous than sentence1,\n0 if both the sentences are equally humorous", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nCrowd-sourced the data by gamifying it as on the website URL. Players rate the headlines on a scale of 0-4.\nPlayers are scored based on their editing and rating, and they\nare ranked on the game’s leaderboard page.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @saradhix for adding this dataset." ]
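The Humicroedit card above documents that each character of `grades` is one annotator's 0-4 rating, that `meanGrade` is their average, and that the subtask-2 `label` marks the headline with the higher mean grade. As an illustrative sketch of that documented arithmetic (the helper names are invented here and are not part of the dataset's loading code), checked against the card's own example values:

```python
def mean_grade(grades: str) -> float:
    """Average the per-annotator 0-4 ratings packed into a grades string."""
    ratings = [int(ch) for ch in grades]
    return sum(ratings) / len(ratings)

def funnier_label(mean_grade1: float, mean_grade2: float) -> int:
    """Subtask-2 label: 1 if sentence1 is funnier, 2 if sentence2 is, 0 if tied."""
    if mean_grade1 > mean_grade2:
        return 1
    if mean_grade2 > mean_grade1:
        return 2
    return 0

# Values taken from the example records shown in the card
print(mean_grade("33332"))       # 2.8 — matches meanGrade in the subtask-1 example
print(mean_grade("30001"))       # 0.8 — matches meanGrade2 in the subtask-2 example
print(funnier_label(1.2, 0.8))   # 1 — matches `label` in the subtask-2 example
```

Note that the subtask-2 example's `grades1` string ('1113') has only four digits, so it cannot reproduce the listed `meanGrade1` of 1.2 exactly; the instance appears to have been cropped.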
27cd56d09fd79f1b7c5d8dc2306d2da413c2987e
# Dataset Card for HybridQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://hybridqa.github.io/index.html - **Repository:** [GitHub](https://github.com/wenhuchen/HybridQA) - **Paper:** [HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data](https://arxiv.org/abs/1909.05358) - **Leaderboard:** [HybridQA Competition](https://competitions.codalab.org/competitions/24420) - **Point of Contact:** [Wenhu Chen](mailto:[email protected]) ### Dataset Summary Existing question answering datasets focus on dealing with homogeneous information, based either only on text or KB/Table information alone. However, as human knowledge is distributed over heterogeneous forms, using homogeneous information alone might lead to severe coverage problems. To fill in the gap, we present HybridQA, a new large-scale question-answering dataset that requires reasoning on heterogeneous information. 
Each question is aligned with a Wikipedia table and multiple free-form corpora linked with the entities in the table. The questions are designed to aggregate both tabular information and text information, i.e., lack of either form would render the question unanswerable. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in English language. ## Dataset Structure ### Data Instances A typical example looks like this ``` { "question_id": "00009b9649d0dd0a", "question": "Who were the builders of the mosque in Herat with fire temples ?", "table_id": "List_of_mosques_in_Afghanistan_0", "answer_text": "Ghurids", "question_postag": "WP VBD DT NNS IN DT NN IN NNP IN NN NNS .", "table": { "url": "https://en.wikipedia.org/wiki/List_of_mosques_in_Afghanistan", "title": "List of mosques in Afghanistan", "header": [ "Name", "Province", "City", "Year", "Remarks" ], "data": [ { "value": "Kabul", "urls": [ { "summary": "Kabul ( Persian : کابل , romanized : Kābol , Pashto : کابل , romanized : Kābəl ) is the capital and largest city of Afghanistan...", "url": "/wiki/Kabul" } ] } ] }, "section_title": "", "section_text": "", "uid": "List_of_mosques_in_Afghanistan_0", "intro": "The following is an incomplete list of large mosques in Afghanistan:" } ``` ### Data Fields - `question_id` (str) - `question` (str) - `table_id` (str) - `answer_text` (str) - `question_postag` (str) - `table` (dict): - `url` (str) - `title` (str) - `header` (list of str) - `data` (list of dict): - `value` (str) - `urls` (list of dict): - `url` (str) - `summary` (str) - `section_title` (str) - `section_text` (str) - `uid` (str) - `intro` (str) ### Data Splits The dataset is split into `train`, `dev` and `test` splits. | | train | validation | test | | --------------- |------:|-----------:|-----:| | N. 
Instances | 62682 | 3466 | 3463 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). ### Citation Information ``` @article{chen2020hybridqa, title={HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data}, author={Chen, Wenhu and Zha, Hanwen and Chen, Zhiyu and Xiong, Wenhan and Wang, Hong and Wang, William}, journal={Findings of EMNLP 2020}, year={2020} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
hybrid_qa
[ "task_categories:question-answering", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "multihop-tabular-text-qa", "arxiv:1909.05358", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "paperswithcode_id": "hybridqa", "pretty_name": "HybridQA", "tags": ["multihop-tabular-text-qa"], "dataset_info": {"config_name": "hybrid_qa", "features": [{"name": "question_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "table_id", "dtype": "string"}, {"name": "answer_text", "dtype": "string"}, {"name": "question_postag", "dtype": "string"}, {"name": "table", "struct": [{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "header", "sequence": "string"}, {"name": "data", "list": [{"name": "value", "dtype": "string"}, {"name": "urls", "list": [{"name": "url", "dtype": "string"}, {"name": "summary", "dtype": "string"}]}]}, {"name": "section_title", "dtype": "string"}, {"name": "section_text", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "intro", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2745712265, "num_examples": 62682}, {"name": "validation", "num_bytes": 153511944, "num_examples": 3466}, {"name": "test", "num_bytes": 148795847, "num_examples": 3463}], "download_size": 217436855, "dataset_size": 3048020056}}
2023-12-18T10:04:15+00:00
[ "1909.05358" ]
[ "en" ]
TAGS #task_categories-question-answering #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #multihop-tabular-text-qa #arxiv-1909.05358 #region-us
Dataset Card for HybridQA ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: GitHub * Paper: HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data * Leaderboard: HybridQA Competition * Point of Contact: Wenhu Chen ### Dataset Summary Existing question answering datasets focus on dealing with homogeneous information, based either only on text or KB/Table information alone. However, as human knowledge is distributed over heterogeneous forms, using homogeneous information alone might lead to severe coverage problems. To fill in the gap, we present HybridQA, a new large-scale question-answering dataset that requires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table and multiple free-form corpora linked with the entities in the table. The questions are designed to aggregate both tabular information and text information, i.e., lack of either form would render the question unanswerable. ### Supported Tasks and Leaderboards ### Languages The dataset is in English language. 
Dataset Structure ----------------- ### Data Instances A typical example looks like this ### Data Fields * 'question\_id' (str) * 'question' (str) * 'table\_id' (str) * 'answer\_text' (str) * 'question\_postag' (str) * 'table' (dict): + 'url' (str) + 'title' (str) + 'header' (list of str) + 'data' (list of dict): - 'value' (str) - 'urls' (list of dict): * 'url' (str) * 'summary' (str) * 'section\_title' (str) * 'section\_text' (str) * 'uid' (str) * 'intro' (str) ### Data Splits The dataset is split into 'train', 'dev' and 'test' splits. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The dataset is under a Creative Commons Attribution 4.0 International License. ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "### Dataset Summary\n\n\nExisting question answering datasets focus on dealing with homogeneous information, based either only on text or\nKB/Table information alone. However, as human knowledge is distributed over heterogeneous forms,\nusing homogeneous information alone might lead to severe coverage problems.\nTo fill in the gap, we present HybridQA, a new large-scale question-answering dataset that\nrequires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table\nand multiple free-form corpora linked with the entities in the table. The questions are designed\nto aggregate both tabular information and text information, i.e.,\nlack of either form would render the question unanswerable.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is in English language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical example looks like this", "### Data Fields\n\n\n* 'question\\_id' (str)\n* 'question' (str)\n* 'table\\_id' (str)\n* 'answer\\_text' (str)\n* 'question\\_postag' (str)\n* 'table' (dict):\n\t+ 'url' (str)\n\t+ 'title' (str)\n\t+ 'header' (list of str)\n\t+ 'data' (list of dict):\n\t\t- 'value' (str)\n\t\t- 'urls' (list of dict):\n\t\t\t* 'url' (str)\n\t\t\t* 'summary' (str)\n* 'section\\_title' (str)\n* 'section\\_text' (str)\n* 'uid' (str)\n* 'intro' (str)", "### Data Splits\n\n\nThe dataset is split into 'train', 'dev' and 'test' splits.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset 
Curators", "### Licensing Information\n\n\nThe dataset is under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #multihop-tabular-text-qa #arxiv-1909.05358 #region-us \n", "### Dataset Summary\n\n\nExisting question answering datasets focus on dealing with homogeneous information, based either only on text or\nKB/Table information alone. However, as human knowledge is distributed over heterogeneous forms,\nusing homogeneous information alone might lead to severe coverage problems.\nTo fill in the gap, we present HybridQA, a new large-scale question-answering dataset that\nrequires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table\nand multiple free-form corpora linked with the entities in the table. The questions are designed\nto aggregate both tabular information and text information, i.e.,\nlack of either form would render the question unanswerable.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is in English language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical example looks like this", "### Data Fields\n\n\n* 'question\\_id' (str)\n* 'question' (str)\n* 'table\\_id' (str)\n* 'answer\\_text' (str)\n* 'question\\_postag' (str)\n* 'table' (dict):\n\t+ 'url' (str)\n\t+ 'title' (str)\n\t+ 'header' (list of str)\n\t+ 'data' (list of dict):\n\t\t- 'value' (str)\n\t\t- 'urls' (list of dict):\n\t\t\t* 'url' (str)\n\t\t\t* 'summary' (str)\n* 'section\\_title' (str)\n* 'section\\_text' (str)\n* 'uid' (str)\n* 'intro' (str)", "### Data Splits\n\n\nThe dataset is split into 'train', 'dev' and 'test' splits.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the 
annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset is under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @patil-suraj for adding this dataset." ]
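The HybridQA card above describes a nested `table` structure whose cells carry links to free-form Wikipedia passages. A minimal sketch of walking that structure to gather the linked passages, assuming the (cropped) cell layout shown in the card's example instance (the `linked_passages` helper is illustrative, not part of the dataset's code):

```python
def linked_passages(table: dict) -> list:
    """Collect (url, summary) pairs for the Wikipedia passages linked from a table's cells."""
    passages = []
    for cell in table.get("data", []):          # each cell: {"value": ..., "urls": [...]}
        for link in cell.get("urls", []):       # each link: {"url": ..., "summary": ...}
            passages.append((link["url"], link["summary"]))
    return passages

# The (cropped) instance from the card above
table = {
    "url": "https://en.wikipedia.org/wiki/List_of_mosques_in_Afghanistan",
    "title": "List of mosques in Afghanistan",
    "header": ["Name", "Province", "City", "Year", "Remarks"],
    "data": [
        {
            "value": "Kabul",
            "urls": [
                {
                    "summary": "Kabul ... is the capital and largest city of Afghanistan...",
                    "url": "/wiki/Kabul",
                }
            ],
        }
    ],
}

for url, summary in linked_passages(table):
    print(url, "->", summary[:40])
```

Questions are answerable only by combining the tabular values with these linked passages, which is why both halves of the structure matter.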
c315cc4a12a27cde08fd55c0beda41ced8b75923
# Dataset Card for "hyperpartisan_news_detection" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://pan.webis.de/semeval19/semeval19-web/](https://pan.webis.de/semeval19/semeval19-web/) - **Repository:** https://github.com/pan-webis-de/pan-code/tree/master/semeval19 - **Paper:** https://aclanthology.org/S19-2145 - **Data:** https://doi.org/10.5281/zenodo.1489920 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.00 GB - **Size of the generated dataset:** 5.61 GB - **Total amount of disk used:** 6.62 GB ### Dataset Summary Hyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4. Given a news article text, decide whether it follows a hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person. 
There are 2 parts: - byarticle: Labeled through crowdsourcing on an article basis. The data contains only articles for which a consensus among the crowdsourcing workers existed. - bypublisher: Labeled by the overall bias of the publisher as provided by BuzzFeed journalists or MediaBiasFactCheck.com. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### byarticle - **Size of downloaded dataset files:** 1.00 MB - **Size of the generated dataset:** 2.80 MB - **Total amount of disk used:** 3.81 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "hyperpartisan": true, "published_at": "2020-01-01", "text": "\"<p>This is a sample article which will contain lots of text</p>\\n \\n<p>Lorem ipsum dolor sit amet, consectetur adipiscing el...", "title": "Example article 1", "url": "http://www.example.com/example1" } ``` #### bypublisher - **Size of downloaded dataset files:** 1.00 GB - **Size of the generated dataset:** 5.61 GB - **Total amount of disk used:** 6.61 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "bias": 3, "hyperpartisan": false, "published_at": "2020-01-01", "text": "\"<p>This is a sample article which will contain lots of text</p>\\n \\n<p>Phasellus bibendum porta nunc, id venenatis tortor fi...", "title": "Example article 4", "url": "https://example.com/example4" } ``` ### Data Fields The data fields are the same among all splits. #### byarticle - `text`: a `string` feature. - `title`: a `string` feature. - `hyperpartisan`: a `bool` feature. - `url`: a `string` feature. - `published_at`: a `string` feature. #### bypublisher - `text`: a `string` feature. 
- `title`: a `string` feature. - `hyperpartisan`: a `bool` feature. - `url`: a `string` feature. - `published_at`: a `string` feature. - `bias`: a classification label, with possible values including `right` (0), `right-center` (1), `least` (2), `left-center` (3), `left` (4). ### Data Splits #### byarticle | |train| |---------|----:| |byarticle| 645| #### bypublisher | |train |validation| |-----------|-----:|---------:| |bypublisher|600000| 150000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The collection (including labels) is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). ### Citation Information ``` @inproceedings{kiesel-etal-2019-semeval, title = "{S}em{E}val-2019 Task 4: Hyperpartisan News Detection", author = "Kiesel, Johannes and Mestre, Maria and Shukla, Rishabh and Vincent, Emmanuel and Adineh, Payam and Corney, David and Stein, Benno and Potthast, Martin", booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation", month = jun, year = "2019", address = "Minneapolis, Minnesota, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/S19-2145", doi = "10.18653/v1/S19-2145", pages = "829--839", abstract = "Hyperpartisan news is news that takes an extreme left-wing or right-wing standpoint.
If one is able to reliably compute this meta information, news articles may be automatically tagged, this way encouraging or discouraging readers to consume the text. It is an open question how successfully hyperpartisan news detection can be automated, and the goal of this SemEval task was to shed light on the state of the art. We developed new resources for this purpose, including a manually labeled dataset with 1,273 articles, and a second dataset with 754,000 articles, labeled via distant supervision. The interest of the research community in our task exceeded all our expectations: The datasets were downloaded about 1,000 times, 322 teams registered, of which 184 configured a virtual machine on our shared task cloud service TIRA, of which in turn 42 teams submitted a valid run. The best team achieved an accuracy of 0.822 on a balanced sample (yes : no hyperpartisan) drawn from the manually tagged corpus; an ensemble of the submitted systems increased the accuracy by 0.048.", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
hyperpartisan_news_detection
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "bias-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "HyperpartisanNewsDetection", "tags": ["bias-classification"], "dataset_info": [{"config_name": "byarticle", "features": [{"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "hyperpartisan", "dtype": "bool"}, {"name": "url", "dtype": "string"}, {"name": "published_at", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2803943, "num_examples": 645}], "download_size": 1000352, "dataset_size": 2803943}, {"config_name": "bypublisher", "features": [{"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "hyperpartisan", "dtype": "bool"}, {"name": "url", "dtype": "string"}, {"name": "published_at", "dtype": "string"}, {"name": "bias", "dtype": {"class_label": {"names": {"0": "right", "1": "right-center", "2": "least", "3": "left-center", "4": "left"}}}}], "splits": [{"name": "train", "num_bytes": 2805711609, "num_examples": 600000}, {"name": "validation", "num_bytes": 960356598, "num_examples": 150000}], "download_size": 1003195420, "dataset_size": 5611423218}]}
2023-06-13T06:46:19+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #bias-classification #region-us
Dataset Card for "hyperpartisan\_news\_detection" ================================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Data: URL * Point of Contact: * Size of downloaded dataset files: 1.00 GB * Size of the generated dataset: 5.61 GB * Total amount of disk used: 6.62 GB ### Dataset Summary Hyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4. Given a news article text, decide whether it follows a hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person. There are 2 parts: * byarticle: Labeled through crowdsourcing on an article basis. The data contains only articles for which a consensus among the crowdsourcing workers existed. * bypublisher: Labeled by the overall bias of the publisher as provided by BuzzFeed journalists or URL. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### byarticle * Size of downloaded dataset files: 1.00 MB * Size of the generated dataset: 2.80 MB * Total amount of disk used: 3.81 MB An example of 'train' looks as follows. #### bypublisher * Size of downloaded dataset files: 1.00 GB * Size of the generated dataset: 5.61 GB * Total amount of disk used: 6.61 GB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. 
#### byarticle * 'text': a 'string' feature. * 'title': a 'string' feature. * 'hyperpartisan': a 'bool' feature. * 'url': a 'string' feature. * 'published\_at': a 'string' feature. #### bypublisher * 'text': a 'string' feature. * 'title': a 'string' feature. * 'hyperpartisan': a 'bool' feature. * 'url': a 'string' feature. * 'published\_at': a 'string' feature. * 'bias': a classification label, with possible values including 'right' (0), 'right-center' (1), 'least' (2), 'left-center' (3), 'left' (4). ### Data Splits #### byarticle #### bypublisher Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The collection (including labels) is licensed under a Creative Commons Attribution 4.0 International License. ### Contributions Thanks to @thomwolf, @ghomasHudson for adding this dataset.
[ "### Dataset Summary\n\n\nHyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4.\nGiven a news article text, decide whether it follows a hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person.\n\n\nThere are 2 parts:\n\n\n* byarticle: Labeled through crowdsourcing on an article basis. The data contains only articles for which a consensus among the crowdsourcing workers existed.\n* bypublisher: Labeled by the overall bias of the publisher as provided by BuzzFeed journalists or URL.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### byarticle\n\n\n* Size of downloaded dataset files: 1.00 MB\n* Size of the generated dataset: 2.80 MB\n* Total amount of disk used: 3.81 MB\n\n\nAn example of 'train' looks as follows.", "#### bypublisher\n\n\n* Size of downloaded dataset files: 1.00 GB\n* Size of the generated dataset: 5.61 GB\n* Total amount of disk used: 6.61 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### byarticle\n\n\n* 'text': a 'string' feature.\n* 'title': a 'string' feature.\n* 'hyperpartisan': a 'bool' feature.\n* 'url': a 'string' feature.\n* 'published\\_at': a 'string' feature.", "#### bypublisher\n\n\n* 'text': a 'string' feature.\n* 'title': a 'string' feature.\n* 'hyperpartisan': a 'bool' feature.\n* 'url': a 'string' feature.\n* 'published\\_at': a 'string' feature.\n* 'bias': a classification label, with possible values including 'right' (0), 'right-center' (1), 'least' (2), 'left-center' (3), 'left' (4).", "### Data Splits", "#### byarticle", "#### bypublisher\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", 
"#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe collection (including labels) are licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @thomwolf, @ghomasHudson for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #bias-classification #region-us \n", "### Dataset Summary\n\n\nHyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4.\nGiven a news article text, decide whether it follows a hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person.\n\n\nThere are 2 parts:\n\n\n* byarticle: Labeled through crowdsourcing on an article basis. The data contains only articles for which a consensus among the crowdsourcing workers existed.\n* bypublisher: Labeled by the overall bias of the publisher as provided by BuzzFeed journalists or URL.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### byarticle\n\n\n* Size of downloaded dataset files: 1.00 MB\n* Size of the generated dataset: 2.80 MB\n* Total amount of disk used: 3.81 MB\n\n\nAn example of 'train' looks as follows.", "#### bypublisher\n\n\n* Size of downloaded dataset files: 1.00 GB\n* Size of the generated dataset: 5.61 GB\n* Total amount of disk used: 6.61 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### byarticle\n\n\n* 'text': a 'string' feature.\n* 'title': a 'string' feature.\n* 'hyperpartisan': a 'bool' feature.\n* 'url': a 'string' feature.\n* 'published\\_at': a 'string' feature.", "#### bypublisher\n\n\n* 'text': a 'string' feature.\n* 'title': a 'string' feature.\n* 'hyperpartisan': a 'bool' feature.\n* 'url': a 'string' feature.\n* 'published\\_at': a 'string' feature.\n* 'bias': a classification label, with possible values including 'right' (0), 'right-center' (1), 'least' (2), 'left-center' (3), 
'left' (4).", "### Data Splits", "#### byarticle", "#### bypublisher\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe collection (including labels) are licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @thomwolf, @ghomasHudson for adding this dataset." ]
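The bypublisher rows in the card above store `bias` as a class index rather than a string. As a minimal sketch (the index-to-name mapping is taken directly from the card's field description; the helper name is illustrative, not part of the dataset's API), decoding the index could look like:

```python
# Map the integer `bias` class index used by the bypublisher config
# to its label name, following the mapping in the dataset card:
# right (0), right-center (1), least (2), left-center (3), left (4).
BIAS_NAMES = ["right", "right-center", "least", "left-center", "left"]

def decode_bias(index: int) -> str:
    """Return the bias label for a 0-4 class index."""
    if not 0 <= index < len(BIAS_NAMES):
        raise ValueError(f"bias index out of range: {index}")
    return BIAS_NAMES[index]

# The cropped bypublisher example in the card has "bias": 3.
print(decode_bias(3))  # left-center
```

The same mapping is what the `class_label` feature in the row's metadata encodes, so decoding by index position matches the card's enumeration order.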
c88fa968aef60653649a37eb617d220f9ff5f470
# Dataset Card for `iapp_wiki_qa_squad` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/iapp-technology/iapp-wiki-qa-dataset - **Repository:** https://github.com/iapp-technology/iapp-wiki-qa-dataset - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/iapp-technology/iapp-wiki-qa-dataset ### Dataset Summary `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles. ### Supported Tasks and Leaderboards extractive question answering ### Languages Thai ## Dataset Structure ### Data Instances An example from the dataset: ``` {'article_id': '0U2lA8nJQESIxbZrjZQc', 'question_id': '0U2lA8nJQESIxbZrjZQc_000', 'context': 'นายสุวัฒน์ วรรณศิริกุล (1 พฤศจิกายน พ.ศ. 
2476 - 31 กรกฎาคม พ.ศ. 2555) อดีตรองหัวหน้าพรรคพลังประชาชน อดีตประธานสมาชิกสภาผู้แทนราษฎร และประธานภาคกรุงเทพมหานคร พรรคพลังประชาชน อดีตสมาชิกสภาผู้แทนราษฎรกรุงเทพมหานครหลายสมัย ได้รับการเลือกตั้งเป็นสมาชิกสภาผู้แทนราษฎรครั้งแรกในปี พ.ศ. 2529 ในสังกัดพรรคประชากรไทย และสังกัดพรรคพลังประชาชน เป็นพรรคสุดท้าย', 'question': 'สุวัฒน์ วรรณศิริกุล เกิดวันที่เท่าไร', 'answers': {'text': ['1 พฤศจิกายน พ.ศ. 2476'], 'answer_start': [24], 'answer_end': [45]}, 'title': 'สุวัฒน์ วรรณศิริกุล', 'created_by': 'gmnjGRF0y0g7QRZDd9Qgz3AgiHJ3', 'created_on': '2019-08-18 05:05:51.358000+00:00', 'is_pay': {'date': None, 'status': False}} {'article_id': '01KZTrxgvC5mOovXFMPJ', 'question_id': '01KZTrxgvC5mOovXFMPJ_000', 'context': 'พัทธ์ธีรา ศรุติพงศ์โภคิน (เกิด 3 ธันวาคม พ.ศ. 2533) หรือชื่อเล่นว่า อร เป็นนักแสดงหญิงชาวไทย สำเร็จมัธยมศึกษาจากCatholic Cathedral College ประเทศนิวซีแลนด์ และปริญญาตรีจากRaffles International College สาขา Business Marketing\n\nเข้าสู่วงการตั้งแต่อายุ 6 ขวบ จากการแสดงละครเวทีกับ ครูชลประคัลภ์ จันทร์เรือง จากนั้นก็เล่นโฆษณาในวัยเด็ก 2- 3 ชิ้น และยังเคยแสดงช่วงละครสั้น ในรายการซุปเปอร์จิ๋ว ประมาณปี 2542\n\nปัจจุบันเป็นทั้ง นักแสดง , พิธีกร และ วีเจ อยู่ที่คลื่น เก็ท 102.5 Bangkok International Hits Music Station และยังเป็นพิธีกรให้กับช่อง ทรู มิวสิก', 'question': 'พัทธ์ธีรา ศรุติพงศ์โภคิน เกิดวันที่เท่าไร', 'answers': {'text': ['3 ธันวาคม พ.ศ. 
2533'], 'answer_start': [31], 'answer_end': [50]}, 'title': 'พัทธ์ธีรา ศรุติพงศ์โภคิน', 'created_by': 'gmnjGRF0y0g7QRZDd9Qgz3AgiHJ3', 'created_on': '2019-08-07 14:00:38.778000+00:00', 'is_pay': {'status': True, 'total': 2.5, 'date': '2019-08-13 10:47:28.095000+00:00'}} ``` ### Data Fields ``` { "question_id": question id "article_id": article id "title": article title "context": article texts "question": question "answers": { "text": answer text "answer_start": answer beginning position "answer_end": answer exclusive upper bound position } } ``` ### Data Splits | | train | valid | test | |-------------|-------|-------|------| | # questions | 5761 | 742 | 739 | | # articles | 1529 | 191 | 192 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization From the original `iapp-wiki-qa-dataset`, [@cstorm125](https://github.com/cstorm125/) applied the following processing: - Select questions with one, non-empty answer - Select questions whose answers match `textDetection` fields - Select questions whose answers are 100 characters long or shorter - 80/10/10 train-validation-test split at article level #### Who are the source language producers? Wikipedia authors for contexts and annotators hired by [iApp](https://iapp.co.th/) for questions and answer annotations ### Annotations #### Annotation process Annotators hired by [iApp](https://iapp.co.th/) are asked to create questions and answers for each article. #### Who are the annotators? Annotators hired by [iApp](https://iapp.co.th/) ### Personal and Sensitive Information All contents are from Wikipedia. No personal and sensitive information is expected to be included.
## Considerations for Using the Data ### Social Impact of Dataset - open-domain, extractive question answering in Thai ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Original dataset by [iApp](https://iapp.co.th/). SQuAD formatting by [PyThaiNLP](https://github.com/PyThaiNLP/). ### Licensing Information MIT ### Citation Information ``` @dataset{kobkrit_viriyayudhakorn_2021_4539916, author = {Kobkrit Viriyayudhakorn and Charin Polpanumas}, title = {iapp\_wiki\_qa\_squad}, month = feb, year = 2021, publisher = {Zenodo}, version = 1, doi = {10.5281/zenodo.4539916}, url = {https://doi.org/10.5281/zenodo.4539916} } ``` ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
iapp_wiki_qa_squad
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-iapp-wiki-qa-dataset", "language:th", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["th"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|other-iapp-wiki-qa-dataset"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"], "pretty_name": "IappWikiQaSquad", "dataset_info": {"features": [{"name": "question_id", "dtype": "string"}, {"name": "article_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}, {"name": "answer_end", "dtype": "int32"}]}], "config_name": "iapp_wiki_qa_squad", "splits": [{"name": "train", "num_bytes": 16107541, "num_examples": 5761}, {"name": "validation", "num_bytes": 2120768, "num_examples": 742}, {"name": "test", "num_bytes": 2032016, "num_examples": 739}], "download_size": 2876630, "dataset_size": 20260325}}
2024-01-18T11:05:58+00:00
[]
[ "th" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-iapp-wiki-qa-dataset #language-Thai #license-mit #region-us
Dataset Card for 'iapp\_wiki\_qa\_squad' ======================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: URL ### Dataset Summary 'iapp\_wiki\_qa\_squad' is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from the original iapp-wiki-qa-dataset to SQuAD format, resulting in 5761/742/739 questions from 1529/191/192 articles. ### Supported Tasks and Leaderboards extractive question answering ### Languages Thai Dataset Structure ----------------- ### Data Instances An example from the dataset: ### Data Fields ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization From the original 'iapp-wiki-qa-dataset', @cstorm125 applied the following processing: * Select questions with one, non-empty answer * Select questions whose answers match 'textDetection' fields * Select questions whose answers are 100 characters long or shorter * 80/10/10 train-validation-test split at article level #### Who are the source language producers? Wikipedia authors for contexts and annotators hired by iApp for questions and answer annotations ### Annotations #### Annotation process Annotators hired by iApp are asked to create questions and answers for each article. #### Who are the annotators? Annotators hired by iApp ### Personal and Sensitive Information All contents are from Wikipedia.
No personal and sensitive information is expected to be included. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset * open-domain, extractive question answering in Thai ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Original dataset by iApp. SQuAD formatting by PyThaiNLP. ### Licensing Information MIT ### Contributions Thanks to @cstorm125 for adding this dataset.
[ "### Dataset Summary\n\n\n'iapp\\_wiki\\_qa\\_squad' is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from the original iapp-wiki-qa-dataset to SQuAD format, resulting in 5761/742/739 questions from 1529/191/192 articles.", "### Supported Tasks and Leaderboards\n\n\nextractive question answering", "### Languages\n\n\nThai\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the dataset:", "### Data Fields", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nFrom the original 'iapp-wiki-qa-dataset', @cstorm125 applied the following processing:\n\n\n* Select questions with one, non-empty answer\n* Select questions whose answers match 'textDetection' fields\n* Select questions whose answers are 100-character long or shorter\n* 80/10/10 train-validation-split at article level", "#### Who are the source language producers?\n\n\nWikipedia authors for contexts and annotators hired by iApp for questions and answer annotations", "### Annotations", "#### Annotation process\n\n\nAnnotators hired by iApp are asked create questions and answers for each article.", "#### Who are the annotators?\n\n\nAnnotators hired by iApp", "### Personal and Sensitive Information\n\n\nAll contents are from Wikipedia. No personal and sensitive information is expected to be included.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n* open-domain, extractive question answering in Thai", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nOriginal dataset by iApp. SQuAD formattting by PyThaiNLP.", "### Licensing Information\n\n\nMIT", "### Contributions\n\n\nThanks to @cstorm125 for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-iapp-wiki-qa-dataset #language-Thai #license-mit #region-us \n", "### Dataset Summary\n\n\n'iapp\\_wiki\\_qa\\_squad' is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from the original iapp-wiki-qa-dataset to SQuAD format, resulting in 5761/742/739 questions from 1529/191/192 articles.", "### Supported Tasks and Leaderboards\n\n\nextractive question answering", "### Languages\n\n\nThai\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the dataset:", "### Data Fields", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nFrom the original 'iapp-wiki-qa-dataset', @cstorm125 applied the following processing:\n\n\n* Select questions with one, non-empty answer\n* Select questions whose answers match 'textDetection' fields\n* Select questions whose answers are 100-character long or shorter\n* 80/10/10 train-validation-split at article level", "#### Who are the source language producers?\n\n\nWikipedia authors for contexts and annotators hired by iApp for questions and answer annotations", "### Annotations", "#### Annotation process\n\n\nAnnotators hired by iApp are asked create questions and answers for each article.", "#### Who are the annotators?\n\n\nAnnotators hired by iApp", "### Personal and Sensitive Information\n\n\nAll contents are from Wikipedia. 
No personal and sensitive information is expected to be included.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n* open-domain, extractive question answering in Thai", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nOriginal dataset by iApp. SQuAD formattting by PyThaiNLP.", "### Licensing Information\n\n\nMIT", "### Contributions\n\n\nThanks to @cstorm125 for adding this dataset." ]
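The `answer_start`/`answer_end` offsets in the iapp_wiki_qa_squad card above follow Python slice semantics, with `answer_end` as an exclusive upper bound, so `answer_end = answer_start + len(answer_text)`. A small sketch (helper names are illustrative, not part of the dataset) checking this against the first example instance from the card:

```python
def answer_end(answer_start: int, answer_text: str) -> int:
    """Exclusive end offset of an extractive answer span."""
    return answer_start + len(answer_text)

def span_matches(context: str, start: int, end: int, text: str) -> bool:
    """True if context[start:end] reproduces the annotated answer text."""
    return context[start:end] == text

# First example in the card: the answer '1 พฤศจิกายน พ.ศ. 2476'
# starts at offset 24 and is 21 characters long, giving answer_end 45,
# which matches the instance's 'answer_end': [45].
print(answer_end(24, "1 พฤศจิกายน พ.ศ. 2476"))  # 45
```

A check like `span_matches` is a cheap sanity test when converting such SQuAD-style offsets to other formats, since any off-by-one in the exclusive bound shows up immediately.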
513e6fc17cf5600fd27b95821692b77c8cb893d4
# Dataset Card for Indonesian Clickbait Headlines ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://data.mendeley.com/datasets/k42j7x2kpn/1 - **Repository:** - **Paper:** [CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines](https://www.sciencedirect.com/science/article/pii/S2352340920311252#!) - **Leaderboard:** - **Point of Contact:** [Andika William](mailto:[email protected]), [Yunita Sari](mailto:[email protected]) ### Dataset Summary The CLICK-ID dataset is a collection of Indonesian news headlines collected from 12 local online news publishers: detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo, Tribunnews, and Wowkeren. The dataset comprises two main parts: (i) 46,119 raw articles, and (ii) 15,000 clickbait-annotated sample headlines. Annotation was conducted with three annotators examining each headline. Judgments were based only on the headline.
The majority then is considered as the ground truth. In the annotated sample, our annotation shows 6,290 clickbait and 8,710 non-clickbait. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ### Data Instances An example of the annotated article: ``` { 'id': '100', 'label': 1, 'title': "SAH! Ini Daftar Nama Menteri Kabinet Jokowi - Ma'ruf Amin" } > ``` ### Data Fields #### Annotated - `id`: id of the sample - `title`: the title of the news article - `label`: the label of the article, either non-clickbait or clickbait #### Raw - `id`: id of the sample - `title`: the title of the news article - `source`: the name of the publisher/newspaper - `date`: date - `category`: the category of the article - `sub-category`: the sub category of the article - `content`: the content of the article - `url`: the url of the article ### Data Splits The dataset contains train set. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons Attribution 4.0 International license ### Citation Information ``` @article{WILLIAM2020106231, title = "CLICK-ID: A novel dataset for Indonesian clickbait headlines", journal = "Data in Brief", volume = "32", pages = "106231", year = "2020", issn = "2352-3409", doi = "https://doi.org/10.1016/j.dib.2020.106231", url = "http://www.sciencedirect.com/science/article/pii/S2352340920311252", author = "Andika William and Yunita Sari", keywords = "Indonesian, Natural Language Processing, News articles, Clickbait, Text-classification", abstract = "News analysis is a popular task in Natural Language Processing (NLP). In particular, the problem of clickbait in news analysis has gained attention in recent years [1, 2]. However, the majority of the tasks has been focused on English news, in which there is already a rich representative resource. For other languages, such as Indonesian, there is still a lack of resource for clickbait tasks. Therefore, we introduce the CLICK-ID dataset of Indonesian news headlines extracted from 12 Indonesian online news publishers. It is comprised of 15,000 annotated headlines with clickbait and non-clickbait labels. Using the CLICK-ID dataset, we then developed an Indonesian clickbait classification model achieving favourable performance. We believe that this corpus will be useful for replicable experiments in clickbait detection or other experiments in NLP areas." } ``` ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
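The label-aggregation scheme described in the Dataset Summary (three annotators per headline, majority judgment as ground truth) can be sketched as follows. This is an illustrative reimplementation, not code shipped with the dataset; the function name is hypothetical:

```python
from collections import Counter

def majority_label(votes):
    """Ground-truth label for one headline: the label most annotators chose."""
    return Counter(votes).most_common(1)[0][0]

# Three annotators judge one headline; the majority becomes the ground truth.
print(majority_label(["clickbait", "clickbait", "non-clickbait"]))  # clickbait
```

With three annotators and two labels, a strict majority always exists, so ties cannot occur.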
id_clickbait
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:id", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "pretty_name": "Indonesian Clickbait Headlines", "dataset_info": [{"config_name": "annotated", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "non-clickbait", "1": "clickbait"}}}}], "splits": [{"name": "train", "num_bytes": 1268698, "num_examples": 15000}], "download_size": 150769127, "dataset_size": 1268698}, {"config_name": "raw", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "sub-category", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81669386, "num_examples": 38655}], "download_size": 150769127, "dataset_size": 81669386}]}
2024-01-18T11:06:03+00:00
[]
[ "id" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #region-us
# Dataset Card for Indonesian Clickbait Headlines ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines - Leaderboard: - Point of Contact: Andika William, Yunita Sari ### Dataset Summary The CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news publishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo, Tribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii) 15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotators examining each headline. Judgments were based only on the headline. The majority is then considered the ground truth. In the annotated sample, our annotation shows 6,290 clickbait and 8,710 non-clickbait.
### Supported Tasks and Leaderboards ### Languages Indonesian ## Dataset Structure ### Data Instances An example of the annotated article: ### Data Fields #### Annotated - 'id': id of the sample - 'title': the title of the news article - 'label': the label of the article, either non-clickbait or clickbait #### Raw - 'id': id of the sample - 'title': the title of the news article - 'source': the name of the publisher/newspaper - 'date': date - 'category': the category of the article - 'sub-category': the sub-category of the article - 'content': the content of the article - 'url': the url of the article ### Data Splits The dataset contains a train set. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Creative Commons Attribution 4.0 International license ### Contributions Thanks to @cahya-wirawan for adding this dataset.
[ "# Dataset Card for Indonesian Clickbait Headlines", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines\n- Leaderboard:\n- Point of Contact: Andika William, Yunita Sari", "### Dataset Summary\n\nThe CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news \npublishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo,\nTribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii)\n15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotators examining each headline.\nJudgments were based only on the headline. The majority is then considered the ground truth. 
In the annotated\nsample, our annotation shows 6,290 clickbait and 8,710 non-clickbait.", "### Supported Tasks and Leaderboards", "### Languages\nIndonesian", "## Dataset Structure", "### Data Instances\nAn example of the annotated article:", "### Data Fields", "#### Annotated\n- 'id': id of the sample\n- 'title': the title of the news article\n- 'label': the label of the article, either non-clickbait or clickbait", "#### Raw\n- 'id': id of the sample\n- 'title': the title of the news article\n- 'source': the name of the publisher/newspaper\n- 'date': date\n- 'category': the category of the article\n- 'sub-category': the sub-category of the article\n- 'content': the content of the article\n- 'url': the url of the article", "### Data Splits\n\nThe dataset contains a train set.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCreative Commons Attribution 4.0 International license", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #region-us \n", "# Dataset Card for Indonesian Clickbait Headlines", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines\n- Leaderboard:\n- Point of Contact: Andika William, Yunita Sari", "### Dataset Summary\n\nThe CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news \npublishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo,\nTribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii)\n15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotators examining each headline.\nJudgments were based only on the headline. The majority is then considered the ground truth. 
In the annotated\nsample, our annotation shows 6,290 clickbait and 8,710 non-clickbait.", "### Supported Tasks and Leaderboards", "### Languages\nIndonesian", "## Dataset Structure", "### Data Instances\nAn example of the annotated article:", "### Data Fields", "#### Annotated\n- 'id': id of the sample\n- 'title': the title of the news article\n- 'label': the label of the article, either non-clickbait or clickbait", "#### Raw\n- 'id': id of the sample\n- 'title': the title of the news article\n- 'source': the name of the publisher/newspaper\n- 'date': date\n- 'category': the category of the article\n- 'sub-category': the sub-category of the article\n- 'content': the content of the article\n- 'url': the url of the article", "### Data Splits\n\nThe dataset contains a train set.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCreative Commons Attribution 4.0 International license", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
195d4cd9c6c209f4c4c96fc13fcf4d59d7ee8315
# Dataset Card for Large-scale Indonesian Summarization ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [IndoLEM (Indonesian Language Evaluation Montage)](https://indolem.github.io/) - **Repository:** [Liputan6: Summarization Corpus for Indonesian](https://github.com/fajri91/sum_liputan6/) - **Paper:** https://arxiv.org/abs/2011.00679 - **Leaderboard:** - **Point of Contact:** [Fajri Koto](mailto:[email protected]), [Jey Han Lau](mailto:[email protected]), [Timothy Baldwin](mailto:[email protected]), ### Dataset Summary In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com, an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual BERT-based models.
We include a thorough error analysis by examining machine-generated summaries that have low ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive summarization models. The dataset has two variants: "canonical" and "xtreme". The "xtreme" variant discards development and test document–summary pairs where the summary has fewer than 90% novel 4-grams (the training data remains the same as the canonical variant). You need to manually request the liputan6 dataset using the form in https://github.com/fajri91/sum_liputan6/ and uncompress it. The liputan6 dataset can then be loaded using the following command `datasets.load_dataset("id_liputan6", 'canonical', data_dir="<path/to/uncompressed_folder>")` or `datasets.load_dataset("id_liputan6", 'xtreme', data_dir="<path/to/uncompressed_folder>")`. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ``` { 'id': 'string', 'url': 'string', 'clean_article': 'string', 'clean_summary': 'string', 'extractive_summary': 'string' } ``` ### Data Instances An example of the dataset: ``` { 'clean_article': 'Liputan6.com, Ambon: Partai Bulan Bintang wilayah Maluku bertekad membantu pemerintah menyelesaikan konflik di provinsi tersebut. Syaratnya, penanganan penyelesaian konflik Maluku harus dimulai dari awal kerusuhan, yakni 19 Januari 1999. Demikian hasil Musyawarah Wilayah I PBB Maluku yang dimulai Sabtu pekan silam dan berakhir Senin (31/12) di Ambon. Menurut seorang fungsionaris PBB Ridwan Hasan, persoalan di Maluku bisa selesai asalkan pemerintah dan aparat keamanan serius menangani setiap persoalan di Maluku secara komprehensif dan bijaksana. Itulah sebabnya, PBB wilayah Maluku akan menjadikan penyelesaian konflik sebagai agenda utama partai. PBB Maluku juga akan mendukung penegakan hukum secara terpadu dan tanpa pandang bulu. Siapa saja yang melanggar hukum harus ditindak.
Ridwan berharap, Ketua PBB Maluku yang baru, Ali Fauzi, dapat menindak lanjuti agenda politik partai yang telah diamanatkan dan mau mendukung penegakan hukum di Maluku. (ULF/Sahlan Heluth).', 'clean_summary': 'Konflik Ambon telah berlangsung selama tiga tahun. Partai Bulan Bintang wilayah Maluku siap membantu pemerintah menyelesaikan kasus di provinsi tersebut.', 'extractive_summary': 'Liputan6.com, Ambon: Partai Bulan Bintang wilayah Maluku bertekad membantu pemerintah menyelesaikan konflik di provinsi tersebut. Siapa saja yang melanggar hukum harus ditindak.', 'id': '26408', 'url': 'https://www.liputan6.com/news/read/26408/pbb-siap-membantu-penyelesaian-konflik-ambon' } ``` ### Data Fields - `id`: id of the sample - `url`: the url to the original article - `clean_article`: the original article - `clean_summary`: the abstractive summarization - `extractive_summary`: the extractive summarization ### Data Splits The dataset is split into train, validation and test sets. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{Koto2020Liputan6AL, title={Liputan6: A Large-scale Indonesian Dataset for Text Summarization}, author={Fajri Koto and Jey Han Lau and Timothy Baldwin}, booktitle={AACL/IJCNLP}, year={2020} } ``` ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
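The "xtreme" filtering criterion described in the Dataset Summary (a dev/test document–summary pair is kept only when at least 90% of the summary's 4-grams are novel, i.e. absent from the article) can be sketched as follows. The helper names are hypothetical illustrations, not the authors' released code:

```python
def ngrams(tokens, n=4):
    """Set of word n-grams occurring in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(summary, article, n=4):
    """Fraction of the summary's n-grams that do not occur in the article."""
    summary_grams = ngrams(summary.split(), n)
    if not summary_grams:
        return 0.0
    article_grams = ngrams(article.split(), n)
    return len(summary_grams - article_grams) / len(summary_grams)

# A document-summary pair survives the "xtreme" filter when the ratio is >= 0.9.
```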
id_liputan6
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:id", "license:unknown", "extractive-summarization", "arxiv:2011.00679", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "pretty_name": "Large-scale Indonesian Summarization", "tags": ["extractive-summarization"], "dataset_info": [{"config_name": "canonical", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "clean_article", "dtype": "string"}, {"name": "clean_summary", "dtype": "string"}, {"name": "extractive_summary", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 20944658, "num_examples": 10972}, {"name": "test", "num_bytes": 20526768, "num_examples": 10972}, {"name": "train", "num_bytes": 382245586, "num_examples": 193883}], "download_size": 0, "dataset_size": 423717012}, {"config_name": "xtreme", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "clean_article", "dtype": "string"}, {"name": "clean_summary", "dtype": "string"}, {"name": "extractive_summary", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 9652946, "num_examples": 4948}, {"name": "test", "num_bytes": 7574550, "num_examples": 3862}], "download_size": 0, "dataset_size": 17227496}]}
2024-01-18T11:06:07+00:00
[ "2011.00679" ]
[ "id" ]
TAGS #task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Indonesian #license-unknown #extractive-summarization #arxiv-2011.00679 #region-us
# Dataset Card for Large-scale Indonesian Summarization ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: IndoLEM (Indonesian Language Evaluation Montage) - Repository: Liputan6: Summarization Corpus for Indonesian - Paper: URL - Leaderboard: - Point of Contact: Fajri Koto, Jey Han Lau, Timothy Baldwin, ### Dataset Summary In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com, an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual BERT-based models. We include a thorough error analysis by examining machine-generated summaries that have low ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive summarization models. The dataset has two variants: "canonical" and "xtreme". The "xtreme" variant discards development and test document–summary pairs where the summary has fewer than 90% novel 4-grams (the training data remains the same as the canonical variant). You need to manually request the liputan6 dataset using the form in URL and uncompress it. The liputan6 dataset can then be loaded using the following command 'datasets.load_dataset("id_liputan6", 'canonical', data_dir="<path/to/uncompressed_folder>")' or 'datasets.load_dataset("id_liputan6", 'xtreme', data_dir="<path/to/uncompressed_folder>")'.
### Supported Tasks and Leaderboards ### Languages Indonesian ## Dataset Structure ### Data Instances An example of the dataset: ### Data Fields - 'id': id of the sample - 'url': the url to the original article - 'clean_article': the original article - 'clean_summary': the abstractive summarization - 'extractive_summary': the extractive summarization ### Data Splits The dataset is split into train, validation and test sets. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @cahya-wirawan for adding this dataset.
[ "# Dataset Card for Large-scale Indonesian Summarization", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: IndoLEM (Indonesian Language Evaluation Montage)\n- Repository: Liputan6: Summarization Corpus for Indonesian\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Fajri Koto,\nJey Han Lau, Timothy Baldwin,", "### Dataset Summary\n\nIn this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com,\nan online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop\nbenchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual\nBERT-based models. We include a thorough error analysis by examining machine-generated summaries that have\nlow ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive\nsummarization models.\n\nThe dataset has two variants: \"canonical\" and \"xtreme\". The \"xtreme\" variant discards development and test \ndocument–summary pairs where the summary has fewer than 90% novel 4-grams (the training data remains the same \nas the canonical variant).\n\nYou need to manually request the liputan6 dataset using the form in URL\nand uncompress it. 
The liputan6 dataset can then be loaded using the following command \n'datasets.load_dataset(\"id_liputan6\", 'canonical', data_dir=\"<path/to/uncompressed_folder>\")' or\n'datasets.load_dataset(\"id_liputan6\", 'xtreme', data_dir=\"<path/to/uncompressed_folder>\")'.", "### Supported Tasks and Leaderboards", "### Languages\nIndonesian", "## Dataset Structure", "### Data Instances\n\nAn example of the dataset:", "### Data Fields\n- 'id': id of the sample\n- 'url': the url to the original article\n- 'clean_article': the original article\n- 'clean_summary': the abstractive summarization\n- 'extractive_summary': the extractive summarization", "### Data Splits\n\nThe dataset is split into train, validation and test sets.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Indonesian #license-unknown #extractive-summarization #arxiv-2011.00679 #region-us \n", "# Dataset Card for Large-scale Indonesian Summarization", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: IndoLEM (Indonesian Language Evaluation Montage)\n- Repository: Liputan6: Summarization Corpus for Indonesian\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Fajri Koto,\nJey Han Lau, Timothy Baldwin,", "### Dataset Summary\n\nIn this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com,\nan online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop\nbenchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual\nBERT-based models. We include a thorough error analysis by examining machine-generated summaries that have\nlow ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive\nsummarization models.\n\nThe dataset has two variants: \"canonical\" and \"xtreme\". 
The \"xtreme\" variant discards development and test \ndocument–summary pairs where the summary has fewer than 90% novel 4-grams (the training data remains the same \nas the canonical variant).\n\nYou need to manually request the liputan6 dataset using the form in URL\nand uncompress it. The liputan6 dataset can then be loaded using the following command \n'datasets.load_dataset(\"id_liputan6\", 'canonical', data_dir=\"<path/to/uncompressed_folder>\")' or\n'datasets.load_dataset(\"id_liputan6\", 'xtreme', data_dir=\"<path/to/uncompressed_folder>\")'.", "### Supported Tasks and Leaderboards", "### Languages\nIndonesian", "## Dataset Structure", "### Data Instances\n\nAn example of the dataset:", "### Data Fields\n- 'id': id of the sample\n- 'url': the url to the original article\n- 'clean_article': the original article\n- 'clean_summary': the abstractive summarization\n- 'extractive_summary': the extractive summarization", "### Data Splits\n\nThe dataset is split into train, validation and test sets.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
bb8f32df27dfdd27ad5b16c23c2fb7e5917a3146
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [PT Gria Inovasi Teknologi](https://grit.id/) - **Repository:** [Nergrit Corpus](https://github.com/grit-id/nergrit-corpus) - **Paper:** - **Leaderboard:** - **Point of Contact:** [Taufiqur Rohman](mailto:[email protected]) ### Dataset Summary Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis developed by [PT Gria Inovasi Teknologi (GRIT)](https://grit.id/). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure A data point consists of sentences seperated by empty line and tab-seperated tokens and tags. 
``` {'id': '0', 'tokens': ['Gubernur', 'Bank', 'Indonesia', 'menggelar', 'konferensi', 'pers'], 'ner_tags': [9, 28, 28, 38, 38, 38], } ``` ### Data Instances [More Information Needed] ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token #### Named Entity Recognition The ner_tags correspond to this list: ``` "B-CRD", "B-DAT", "B-EVT", "B-FAC", "B-GPE", "B-LAN", "B-LAW", "B-LOC", "B-MON", "B-NOR", "B-ORD", "B-ORG", "B-PER", "B-PRC", "B-PRD", "B-QTY", "B-REG", "B-TIM", "B-WOA", "I-CRD", "I-DAT", "I-EVT", "I-FAC", "I-GPE", "I-LAN", "I-LAW", "I-LOC", "I-MON", "I-NOR", "I-ORD", "I-ORG", "I-PER", "I-PRC", "I-PRD", "I-QTY", "I-REG", "I-TIM", "I-WOA", "O", ``` The ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. The dataset contains the following 19 entities: ``` 'CRD': Cardinal 'DAT': Date 'EVT': Event 'FAC': Facility 'GPE': Geopolitical Entity 'LAW': Law Entity (such as Undang-Undang) 'LOC': Location 'MON': Money 'NOR': Political Organization 'ORD': Ordinal 'ORG': Organization 'PER': Person 'PRC': Percent 'PRD': Product 'QTY': Quantity 'REG': Religion 'TIM': Time 'WOA': Work of Art 'LAN': Language ``` #### Sentiment Analysis The ner_tags correspond to this list: ``` "B-NEG", "B-NET", "B-POS", "I-NEG", "I-NET", "I-POS", "O", ``` #### Statement Extraction The ner_tags correspond to this list: ``` "B-BREL", "B-FREL", "B-STAT", "B-WHO", "I-BREL", "I-FREL", "I-STAT", "I-WHO", "O" ``` The ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. ### Data Splits The dataset is split into train, validation and test sets. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The annotators are listed in the [Nergrit Corpus repository](https://github.com/grit-id/nergrit-corpus) ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
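As a quick sanity check on the instance shown under Dataset Structure, the integer `ner_tags` can be decoded back into label strings. The sketch below hard-codes the label order from the `ner` config's tag list in this card; in practice, read the label names from the loaded dataset's features instead:

```python
# Label order as listed in this card's NER tag list (19 B- tags, 19 I- tags, then O).
NER_LABELS = [
    "B-CRD", "B-DAT", "B-EVT", "B-FAC", "B-GPE", "B-LAN", "B-LAW", "B-LOC",
    "B-MON", "B-NOR", "B-ORD", "B-ORG", "B-PER", "B-PRC", "B-PRD", "B-QTY",
    "B-REG", "B-TIM", "B-WOA", "I-CRD", "I-DAT", "I-EVT", "I-FAC", "I-GPE",
    "I-LAN", "I-LAW", "I-LOC", "I-MON", "I-NOR", "I-ORD", "I-ORG", "I-PER",
    "I-PRC", "I-PRD", "I-QTY", "I-REG", "I-TIM", "I-WOA", "O",
]

def decode_tags(ner_tags):
    """Map integer ner_tags back to their string labels."""
    return [NER_LABELS[i] for i in ner_tags]

sample = {
    "tokens": ["Gubernur", "Bank", "Indonesia", "menggelar", "konferensi", "pers"],
    "ner_tags": [9, 28, 28, 38, 38, 38],
}
# 'Bank Indonesia' decodes as a B-NOR/I-NOR span; the remaining tokens are O.
print(list(zip(sample["tokens"], decode_tags(sample["ner_tags"]))))
```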
id_nergrit_corpus
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:id", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["id"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "nergrit-corpus", "pretty_name": "Nergrit Corpus", "dataset_info": [{"config_name": "ner", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-CRD", "1": "B-DAT", "2": "B-EVT", "3": "B-FAC", "4": "B-GPE", "5": "B-LAN", "6": "B-LAW", "7": "B-LOC", "8": "B-MON", "9": "B-NOR", "10": "B-ORD", "11": "B-ORG", "12": "B-PER", "13": "B-PRC", "14": "B-PRD", "15": "B-QTY", "16": "B-REG", "17": "B-TIM", "18": "B-WOA", "19": "I-CRD", "20": "I-DAT", "21": "I-EVT", "22": "I-FAC", "23": "I-GPE", "24": "I-LAN", "25": "I-LAW", "26": "I-LOC", "27": "I-MON", "28": "I-NOR", "29": "I-ORD", "30": "I-ORG", "31": "I-PER", "32": "I-PRC", "33": "I-PRD", "34": "I-QTY", "35": "I-REG", "36": "I-TIM", "37": "I-WOA", "38": "O"}}}}], "splits": [{"name": "train", "num_bytes": 5428411, "num_examples": 12532}, {"name": "test", "num_bytes": 1135577, "num_examples": 2399}, {"name": "validation", "num_bytes": 1086437, "num_examples": 2521}], "download_size": 14988232, "dataset_size": 7650425}, {"config_name": "sentiment", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-NEG", "1": "B-NET", "2": "B-POS", "3": "I-NEG", "4": "I-NET", "5": "I-POS", "6": "O"}}}}], "splits": [{"name": "train", "num_bytes": 3167972, "num_examples": 7485}, {"name": "test", "num_bytes": 1097517, "num_examples": 2317}, {"name": "validation", "num_bytes": 337679, "num_examples": 782}], "download_size": 14988232, "dataset_size": 4603168}, {"config_name": "statement", "features": 
[{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-BREL", "1": "B-FREL", "2": "B-STAT", "3": "B-WHO", "4": "I-BREL", "5": "I-FREL", "6": "I-STAT", "7": "I-WHO", "8": "O"}}}}], "splits": [{"name": "train", "num_bytes": 1469081, "num_examples": 2405}, {"name": "test", "num_bytes": 182553, "num_examples": 335}, {"name": "validation", "num_bytes": 105119, "num_examples": 176}], "download_size": 14988232, "dataset_size": 1756753}]}
2024-01-18T11:06:08+00:00
[]
[ "id" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-other #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: PT Gria Inovasi Teknologi - Repository: Nergrit Corpus - Paper: - Leaderboard: - Point of Contact: Taufiqur Rohman ### Dataset Summary Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT). ### Supported Tasks and Leaderboards ### Languages Indonesian ## Dataset Structure A data point consists of sentences seperated by empty line and tab-seperated tokens and tags. ### Data Instances ### Data Fields - 'id': id of the sample - 'tokens': the tokens of the example text - 'ner_tags': the NER tags of each token #### Named Entity Recognition The ner_tags correspond to this list: The ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. The dataset contains 19 following entities #### Sentiment Analysis The ner_tags correspond to this list: #### Statement Extraction The ner_tags correspond to this list: The ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. ### Data Splits The dataset is splitted in to train, validation and test sets. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
The annotators are listed in the Nergrit Corpus repository ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @cahya-wirawan for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: PT Gria Inovasi Teknologi\n- Repository: Nergrit Corpus\n- Paper:\n- Leaderboard:\n- Point of Contact: Taufiqur Rohman", "### Dataset Summary\n\nNergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, \nand Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT).", "### Supported Tasks and Leaderboards", "### Languages\n\nIndonesian", "## Dataset Structure\n\nA data point consists of sentences seperated by empty line and tab-seperated tokens and tags.", "### Data Instances", "### Data Fields\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token", "#### Named Entity Recognition\nThe ner_tags correspond to this list:\n\nThe ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any \nnon-initial word. 
The dataset contains 19 following entities", "#### Sentiment Analysis\nThe ner_tags correspond to this list:", "#### Statement Extraction\nThe ner_tags correspond to this list:\n\nThe ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any \nnon-initial word.", "### Data Splits\n\nThe dataset is splitted in to train, validation and test sets.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\nThe annotators are listed in the\nNergrit Corpus repository", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-other #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: PT Gria Inovasi Teknologi\n- Repository: Nergrit Corpus\n- Paper:\n- Leaderboard:\n- Point of Contact: Taufiqur Rohman", "### Dataset Summary\n\nNergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, \nand Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT).", "### Supported Tasks and Leaderboards", "### Languages\n\nIndonesian", "## Dataset Structure\n\nA data point consists of sentences seperated by empty line and tab-seperated tokens and tags.", "### Data Instances", "### Data Fields\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token", "#### Named Entity Recognition\nThe ner_tags correspond to this list:\n\nThe ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any \nnon-initial word. 
The dataset contains 19 following entities", "#### Sentiment Analysis\nThe ner_tags correspond to this list:", "#### Statement Extraction\nThe ner_tags correspond to this list:\n\nThe ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any \nnon-initial word.", "### Data Splits\n\nThe dataset is splitted in to train, validation and test sets.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\nThe annotators are listed in the\nNergrit Corpus repository", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
b198fc34a45b0a10d0e9a498c120056bc8e32397
# Dataset Card for Indonesian Newspapers 2018 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel) - **Repository:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel) - **Paper:** - **Leaderboard:** - **Point of Contact:** [[email protected]](mailto:[email protected]), [[email protected]](mailto:[email protected]) ### Dataset Summary The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The size of the uncompressed 500K JSON files (newspapers-json.tgz) is around 2.2GB, and the cleaned version in one big text file (newspapers.txt.gz) is about 1GB. The original source in Google Drive also contains a dataset in HTML format which includes raw data (pictures, css, javascript, ...) 
from the online news website. A copy of the original dataset is available at https://cloud.uncool.ai/index.php/s/mfYEAgKQoY3ebbM ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ``` { 'id': 'string', 'url': 'string', 'date': 'string', 'title': 'string', 'content': 'string' } ``` ### Data Instances An instance from the dataset is ``` {'id': '0', 'url': 'https://www.cnnindonesia.com/olahraga/20161221234219-156-181385/lorenzo-ingin-samai-rekor-rossi-dan-stoner', 'date': '2016-12-22 07:00:00', 'title': 'Lorenzo Ingin Samai Rekor Rossi dan Stoner', 'content': 'Jakarta, CNN Indonesia -- Setelah bergabung dengan Ducati, Jorge Lorenzo berharap bisa masuk dalam jajaran pebalap yang mampu jadi juara dunia kelas utama dengan dua pabrikan berbeda. Pujian Max Biaggi untuk Valentino Rossi Jorge Lorenzo Hadir dalam Ucapan Selamat Natal Yamaha Iannone: Saya Sering Jatuh Karena Ingin yang Terbaik Sepanjang sejarah, hanya ada lima pebalap yang mampu jadi juara kelas utama (500cc/MotoGP) dengan dua pabrikan berbeda, yaitu Geoff Duke, Giacomo Agostini, Eddie Lawson, Valentino Rossi, dan Casey Stoner. Lorenzo ingin bergabung dalam jajaran legenda tersebut. “Fakta ini sangat penting bagi saya karena hanya ada lima pebalap yang mampu menang dengan dua pabrikan berbeda dalam sejarah balap motor.” “Kedatangan saya ke Ducati juga menghadirkan tantangan yang sangat menarik karena hampir tak ada yang bisa menang dengan Ducati sebelumnya, kecuali Casey Stoner. Hal itu jadi motivasi yang sangat bagus bagi saya,” tutur Lorenzo seperti dikutip dari Crash Lorenzo saat ini diliputi rasa penasaran yang besar untuk menunggang sepeda motor Desmosedici yang dipakai tim Ducati karena ia baru sekali menjajal motor tersebut pada sesi tes di Valencia, usai MotoGP musim 2016 berakhir. “Saya sangat tertarik dengan Ducati arena saya hanya memiliki kesempatan mencoba motor itu di Valencia dua hari setelah musim berakhir. 
Setelah itu saya tak boleh lagi menjajalnya hingga akhir Januari mendatang. Jadi saya menjalani penantian selama dua bulan yang panjang,” kata pebalap asal Spanyol ini. Dengan kondisi tersebut, maka Lorenzo memanfaatkan waktu yang ada untuk liburan dan melepaskan penat. “Setidaknya apa yang terjadi pada saya saat ini sangat bagus karena saya jadi memiliki waktu bebas dan sedikit liburan.” “Namun tentunya saya tak akan larut dalam liburan karena saya harus lebih bersiap, terutama dalam kondisi fisik dibandingkan sebelumnya, karena saya akan menunggangi motor yang sulit dikendarai,” ucap Lorenzo. Selama sembilan musim bersama Yamaha, Lorenzo sendiri sudah tiga kali jadi juara dunia, yaitu pada 2010, 2012, dan 2015. (kid)'} ``` ### Data Fields - `id`: id of the sample - `url`: the url to the original article - `date`: the publishing date of the article - `title`: the title of the article - `content`: the content of the article ### Data Splits The dataset contains train set of 499164 samples. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. 
The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted, and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights, please contact the repository maintainer. ### Citation Information [N/A] ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
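The card notes that articles are dated between 1 January and 20 August 2018, with a few earlier exceptions. A minimal sketch for flagging those exceptions, assuming every `date` field follows the `YYYY-MM-DD HH:MM:SS` format seen in the example instance:

```python
from datetime import datetime

# Format of the 'date' field as observed in the example instance above
# (an assumption, not a guarantee that every record follows it).
DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
RANGE_START = datetime(2018, 1, 1)
RANGE_END = datetime(2018, 8, 20, 23, 59, 59)

def in_stated_range(record):
    """True if the article date falls inside the 1 Jan - 20 Aug 2018 window."""
    article_date = datetime.strptime(record["date"], DATE_FORMAT)
    return RANGE_START <= article_date <= RANGE_END

print(in_stated_range({"date": "2018-03-15 08:30:00"}))  # True
print(in_stated_range({"date": "2016-12-22 07:00:00"}))  # False: one of the earlier exceptions
```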
id_newspapers_2018
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:id", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Indonesian Newspapers 2018", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "config_name": "id_newspapers_2018", "splits": [{"name": "train", "num_bytes": 1116031922, "num_examples": 499164}], "download_size": 446018349, "dataset_size": 1116031922}}
2024-01-18T11:06:10+00:00
[]
[ "id" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Indonesian #license-cc-by-4.0 #region-us
# Dataset Card for Indonesian Newspapers 2018 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Indonesian Newspapers - Repository: Indonesian Newspapers - Paper: - Leaderboard: - Point of Contact: feryandi.n@URL, cahya.wirawan@URL ### Dataset Summary The dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with few exceptions dated earlier). The size of uncompressed 500K json files (URL) is around 2.2GB, and the cleaned uncompressed in a big text file (URL) is about 1GB. The original source in Google Drive contains also a dataset in html format which include raw data (pictures, css, javascript, ...) from the online news website. A copy of the original dataset is available at URL ### Supported Tasks and Leaderboards ### Languages Indonesian ## Dataset Structure ### Data Instances An instance from the dataset is ### Data Fields - 'id': id of the sample - 'url': the url to the original article - 'date': the publishing date of the article - 'title': the title of the article - 'content': the content of the article ### Data Splits The dataset contains train set of 499164 samples. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer. [N/A] ### Contributions Thanks to @cahya-wirawan for adding this dataset.
[ "# Dataset Card for Indonesian Newspapers 2018", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Indonesian Newspapers\n- Repository: Indonesian Newspapers\n- Paper:\n- Leaderboard:\n- Point of Contact: feryandi.n@URL,\ncahya.wirawan@URL", "### Dataset Summary\n\nThe dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo,\nCNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018\n(with few exceptions dated earlier). The size of uncompressed 500K json files (URL) is around 2.2GB,\nand the cleaned uncompressed in a big text file (URL) is about 1GB. The original source in Google Drive\ncontains also a dataset in html format which include raw data (pictures, css, javascript, ...)\nfrom the online news website. 
A copy of the original dataset is available at\nURL", "### Supported Tasks and Leaderboards", "### Languages\nIndonesian", "## Dataset Structure", "### Data Instances\n\nAn instance from the dataset is", "### Data Fields\n- 'id': id of the sample\n- 'url': the url to the original article\n- 'date': the publishing date of the article\n- 'title': the title of the article\n- 'content': the content of the article", "### Data Splits\n\nThe dataset contains train set of 499164 samples.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer.\n\n\n\n[N/A]", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Indonesian #license-cc-by-4.0 #region-us \n", "# Dataset Card for Indonesian Newspapers 2018", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Indonesian Newspapers\n- Repository: Indonesian Newspapers\n- Paper:\n- Leaderboard:\n- Point of Contact: feryandi.n@URL,\ncahya.wirawan@URL", "### Dataset Summary\n\nThe dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo,\nCNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018\n(with few exceptions dated earlier). The size of uncompressed 500K json files (URL) is around 2.2GB,\nand the cleaned uncompressed in a big text file (URL) is about 1GB. The original source in Google Drive\ncontains also a dataset in html format which include raw data (pictures, css, javascript, ...)\nfrom the online news website. 
A copy of the original dataset is available at\nURL", "### Supported Tasks and Leaderboards", "### Languages\nIndonesian", "## Dataset Structure", "### Data Instances\n\nAn instance from the dataset is", "### Data Fields\n- 'id': id of the sample\n- 'url': the url to the original article\n- 'date': the publishing date of the article\n- 'title': the title of the article\n- 'content': the content of the article", "### Data Splits\n\nThe dataset contains train set of 499164 samples.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer.\n\n\n\n[N/A]", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
e0274ceeb1e0dceb26608f8379e0fa76eadaabd6
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [PANL BPPT](http://digilib.bppt.go.id/sampul/p92-budiono.pdf) - **Repository:** [PANL BPPT Repository](https://github.com/cahya-wirawan/indonesian-language-models/raw/master/data/BPPTIndToEngCorpusHalfM.zip) - **Paper:** [Resource Report: Building Parallel Text Corpora for Multi-Domain Translation System](http://digilib.bppt.go.id/sampul/p92-budiono.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Parallel Text Corpora for Multi-Domain Translation System created by BPPT (Indonesian Agency for the Assessment and Application of Technology) for PAN Localization Project (A Regional Initiative to Develop Local Language Computing Capacity in Asia). The dataset contains around 24K sentences divided into 4 different topics (Economic, International, Science and Technology, and Sport). 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure [More Information Needed] ### Data Instances An example of the dataset: ``` { 'id': '0', 'topic': 0, 'translation': { 'en': 'Minister of Finance Sri Mulyani Indrawati said that a sharp correction of the composite inde x by up to 4 pct in Wedenesday?s trading was a mere temporary effect of regional factors like decline in plantation commodity prices and the financial crisis in Thailand.', 'id': 'Menteri Keuangan Sri Mulyani mengatakan koreksi tajam pada Indeks Harga Saham Gabungan IHSG hingga sekitar 4 persen dalam perdagangan Rabu 10/1 hanya efek sesaat dari faktor-faktor regional seperti penurunan harga komoditi perkebunan dan krisis finansial di Thailand.' } } ``` ### Data Fields - `id`: id of the sample - `translation`: the parallel English-Indonesian sentence pair - `topic`: the topic of the sentence. It could be one of the following: - Economic - International - Science and Technology - Sport ### Data Splits The dataset is split into train, validation, and test sets. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{id_panl_bppt, author = {PAN Localization - BPPT}, title = {Parallel Text Corpora, English Indonesian}, year = {2009}, url = {http://digilib.bppt.go.id/sampul/p92-budiono.pdf}, } ``` ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
id_panl_bppt
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:id", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "id"], "license": ["unknown"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "IdPanlBppt", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "id"]}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Economy", "1": "International", "2": "Science", "3": "Sport"}}}}], "config_name": "id_panl_bppt", "splits": [{"name": "train", "num_bytes": 7455924, "num_examples": 24021}], "download_size": 2366973, "dataset_size": 7455924}}
2024-01-18T11:06:12+00:00
[]
[ "en", "id" ]
TAGS #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-English #language-Indonesian #license-unknown #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: PANL BPPT - Repository: PANL BPPT Repository - Paper: Resource Report: Building Parallel Text Corpora for Multi-Domain Translation System - Leaderboard: - Point of Contact: ### Dataset Summary Parallel Text Corpora for Multi-Domain Translation System created by BPPT (Indonesian Agency for the Assessment and Application of Technology) for the PAN Localization Project (A Regional Initiative to Develop Local Language Computing Capacity in Asia). The dataset contains around 24K sentences divided into 4 different topics (Economic, International, Science and Technology, and Sport). ### Supported Tasks and Leaderboards ### Languages Indonesian ## Dataset Structure ### Data Instances An example of the dataset: ### Data Fields - 'id': id of the sample - 'translation': the parallel English-Indonesian sentence pair - 'topic': the topic of the sentence. It could be one of the following: - Economic - International - Science and Technology - Sport ### Data Splits The dataset is split into train, validation, and test sets. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @cahya-wirawan for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: PANL BPPT\n- Repository: PANL BPPT Repository\n- Paper: Resource Report: Building Parallel Text Corpora for Multi-Domain Translation System\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\nParallel Text Corpora for Multi-Domain Translation System created by BPPT (Indonesian Agency for the Assessment and \nApplication of Technology) for PAN Localization Project (A Regional Initiative to Develop Local Language Computing \nCapacity in Asia). The dataset contains around 24K sentences divided in 4 difference topics (Economic, international,\nScience and Technology and Sport).", "### Supported Tasks and Leaderboards", "### Languages\n\nIndonesian", "## Dataset Structure", "### Data Instances\n\nAn example of the dataset:", "### Data Fields\n- 'id': id of the sample\n- 'translation': the parallel sentence english-indonesian\n- 'topic': the topic of the sentence. 
It could be one of the following:\n - Economic\n - International\n - Science and Technology\n - Sport", "### Data Splits\n\nThe dataset is splitted in to train, validation and test sets.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-English #language-Indonesian #license-unknown #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: PANL BPPT\n- Repository: PANL BPPT Repository\n- Paper: Resource Report: Building Parallel Text Corpora for Multi-Domain Translation System\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\nParallel Text Corpora for Multi-Domain Translation System created by BPPT (Indonesian Agency for the Assessment and \nApplication of Technology) for PAN Localization Project (A Regional Initiative to Develop Local Language Computing \nCapacity in Asia). The dataset contains around 24K sentences divided in 4 difference topics (Economic, international,\nScience and Technology and Sport).", "### Supported Tasks and Leaderboards", "### Languages\n\nIndonesian", "## Dataset Structure", "### Data Instances\n\nAn example of the dataset:", "### Data Fields\n- 'id': id of the sample\n- 'translation': the parallel sentence english-indonesian\n- 'topic': the topic of the sentence. 
It could be one of the following:\n - Economic\n - International\n - Science and Technology\n - Sport", "### Data Splits\n\nThe dataset is splitted in to train, validation and test sets.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @cahya-wirawan for adding this dataset." ]
c4b838aa48d1e72b838a8250cd5220915b716ec6
# Dataset Card for id_puisi ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [puisi-pantun-generator](https://github.com/ilhamfp/puisi-pantun-generator) - **Repository:** [puisi-pantun-generator](https://github.com/ilhamfp/puisi-pantun-generator) - **Paper:** [N/A] - **Leaderboard:** [N/A] - **Point of Contact:** [Ilham Firdausi Putra]([email protected]) ### Dataset Summary Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi, each with its title and author.
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ### Data Instances ``` { 'puisi_with_header': 'TEPERANGKAP Oleh Mangku Langit Jingga Mungkin kau membiarkan aku Membiarkan perasaan ini larut Memberi ruang jiwaku hampa Agar tetap terbiasa nikmati Perangkap yang kau buat Perisai yang kau banggakan Takkan jadi tameng bagimu Aku mengerti betapa hebatnya Perangkap mu hei sang dewi Ku akan terus merasa terbiasa Dengan pesona indahmu Ku masih akan nikmati hadirmu Berjalanlah pada hati yang sama Satu hati denganku Walau ku terperangkap Namunku nikmati dan jalani', 'title': 'TEPERANGKAP', 'author': 'Oleh Mangku Langit Jingga', 'puisi': 'Mungkin kau membiarkan aku Membiarkan perasaan ini larut Memberi ruang jiwaku hampa Agar tetap terbiasa nikmati Perangkap yang kau buat Perisai yang kau banggakan Takkan jadi tameng bagimu Aku mengerti betapa hebatnya Perangkap mu hei sang dewi Ku akan terus merasa terbiasa Dengan pesona indahmu Ku masih akan nikmati hadirmu Berjalanlah pada hati yang sama Satu hati denganku Walau ku terperangkap Namunku nikmati dan jalani', } ``` ### Data Fields - `puisi_with_header`: the raw text from scraping - `title`: the title extracted from the raw text using regex - `author`: the author extracted from the raw text using regex - `puisi`: the poem with the title and author extracted out using regex ### Data Splits The dataset contains only a train set. ## Dataset Creation ### Curation Rationale The dataset was initially collected as an experiment to generate an Indonesian poem using GPT-2. ### Source Data #### Initial Data Collection and Normalization The dataset was scraped using BeautifulSoup from lokerpuisi.web.id (the data no longer exists on the original blog). The title and author columns were produced using a regex match on the puisi_with_header column. #### Who are the source language producers? The poems were generated by humans. 
The users of the original blog voluntarily submit their original poems to get published on the blog. ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations The regex match used to extract the title & author from the raw text is not perfect. Some titles & texts still fail to be extracted. ## Additional Information ### Dataset Curators Ilham Firdausi Putra ### Licensing Information MIT License ### Citation Information [N/A] ### Contributions Thanks to [@ilhamfp](https://github.com/ilhamfp) for adding this dataset.
id_puisi
[ "task_categories:text2text-generation", "task_categories:text-generation", "task_categories:fill-mask", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:id", "license:mit", "poem-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "text-generation", "fill-mask"], "task_ids": [], "pretty_name": "Indonesian Puisi", "tags": ["poem-generation"], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "puisi", "dtype": "string"}, {"name": "puisi_with_header", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10613475, "num_examples": 7223}], "download_size": 10558108, "dataset_size": 10613475}}
2024-01-18T11:06:13+00:00
[]
[ "id" ]
TAGS #task_categories-text2text-generation #task_categories-text-generation #task_categories-fill-mask #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-mit #poem-generation #region-us
# Dataset Card for id_puisi ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: puisi-pantun-generator - Repository: puisi-pantun-generator - Paper: [N/A] - Leaderboard: [N/A] - Point of Contact: Ilham Firdausi Putra ### Dataset Summary Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi, each with its title and author. ### Supported Tasks and Leaderboards ### Languages Indonesian ## Dataset Structure ### Data Instances ### Data Fields - 'puisi_with_header': the raw text from scraping - 'title': the title extracted from the raw text using regex - 'author': the author extracted from the raw text using regex - 'puisi': the poem with the title and author extracted out using regex ### Data Splits The dataset contains only a train set. ## Dataset Creation ### Curation Rationale The dataset was initially collected as an experiment to generate an Indonesian poem using GPT-2. ### Source Data #### Initial Data Collection and Normalization The dataset was scraped using BeautifulSoup from URL (the data no longer exists on the original blog). The title and author columns were produced using a regex match on the puisi_with_header column. #### Who are the source language producers? The poems were generated by humans. The users of the original blog voluntarily submit their original poems to get published on the blog. ### Annotations #### Annotation process [N/A] #### Who are the annotators? 
[N/A] ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations The regex match used to extract the title & author from the raw text is not perfect. Some titles & texts still fail to be extracted. ## Additional Information ### Dataset Curators Ilham Firdausi Putra ### Licensing Information MIT License [N/A] ### Contributions Thanks to @ilhamfp for adding this dataset.
[ "# Dataset Card for id_puisi", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: puisi-pantun-generator\n- Repository: puisi-pantun-generator\n- Paper: [N/A]\n- Leaderboard: [N/A]\n- Point of Contact: Ilham Firdausi Putra", "### Dataset Summary\n\nPuisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with its title and author.", "### Supported Tasks and Leaderboards", "### Languages\n\nIndonesian", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'puisi_with_header': the raw text from scraping\n- 'title': the title extracted from the raw text using regex\n- 'author': the author extracted from the raw text using regex\n- 'puisi': the poem with title and author extracted out using regex", "### Data Splits\n\nThe dataset contains only a train set.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was initially collected as an experiment to generate an Indonesian poem using GPT-2.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset was scraped using BeautifulSoup from URL (the data no longer exist on the original blog). The title and author column was produced using regex match from puisi_with_header column.", "#### Who are the source language producers?\n\nThe poems were generated by humans. 
The users of the original blog voluntarily submit their original poems to get published on the blog.", "### Annotations", "#### Annotation process\n\n[N/A]", "#### Who are the annotators?\n\n[N/A]", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nThe regex match used to extract the title & author from the raw text is not perfect. Some title & text is still failed to get extracted.", "## Additional Information", "### Dataset Curators\n\nIlham Firdausi Putra", "### Licensing Information\n\nMIT License\n\n\n\n[N/A]", "### Contributions\n\nThanks to @ilhamfp for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-text-generation #task_categories-fill-mask #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-mit #poem-generation #region-us \n", "# Dataset Card for id_puisi", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: puisi-pantun-generator\n- Repository: puisi-pantun-generator\n- Paper: [N/A]\n- Leaderboard: [N/A]\n- Point of Contact: Ilham Firdausi Putra", "### Dataset Summary\n\nPuisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with its title and author.", "### Supported Tasks and Leaderboards", "### Languages\n\nIndonesian", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'puisi_with_header': the raw text from scraping\n- 'title': the title extracted from the raw text using regex\n- 'author': the author extracted from the raw text using regex\n- 'puisi': the poem with title and author extracted out using regex", "### Data Splits\n\nThe dataset contains only a train set.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was initially collected as an experiment to generate an Indonesian poem using GPT-2.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset was scraped using BeautifulSoup from URL (the data no longer exist on the original blog). 
The title and author column was produced using regex match from puisi_with_header column.", "#### Who are the source language producers?\n\nThe poems were generated by humans. The users of the original blog voluntarily submit their original poems to get published on the blog.", "### Annotations", "#### Annotation process\n\n[N/A]", "#### Who are the annotators?\n\n[N/A]", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nThe regex match used to extract the title & author from the raw text is not perfect. Some title & text is still failed to get extracted.", "## Additional Information", "### Dataset Curators\n\nIlham Firdausi Putra", "### Licensing Information\n\nMIT License\n\n\n\n[N/A]", "### Contributions\n\nThanks to @ilhamfp for adding this dataset." ]
11f1daffee34b4e3737836cd6f04623815ddf41b
# Dataset Card for IgboNLP Datasets ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt - **Paper:** https://arxiv.org/abs/2004.00648 - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
igbo_english_machine_translation
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:ig", "license:unknown", "arxiv:2004.00648", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "ig"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "igbonlp-datasets", "pretty_name": "IgboNLP Datasets", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["ig", "en"]}}}], "config_name": "ig-en", "splits": [{"name": "train", "num_bytes": 2367989, "num_examples": 10000}, {"name": "validation", "num_bytes": 60154, "num_examples": 200}, {"name": "test", "num_bytes": 298670, "num_examples": 552}], "download_size": 2580255, "dataset_size": 2726813}}
2024-01-18T11:06:15+00:00
[ "2004.00648" ]
[ "en", "ig" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-English #language-Igbo #license-unknown #arxiv-2004.00648 #region-us
# Dataset Card for IgboNLP Datasets ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: None - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for IgboNLP Datasets", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-English #language-Igbo #license-unknown #arxiv-2004.00648 #region-us \n", "# Dataset Card for IgboNLP Datasets", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
7479b25a44837f38adac76125e8d5ac8451a5073
# Dataset Card for Igbo Monolingual Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling - **Repository:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling - **Paper:** https://arxiv.org/abs/2004.00648 ### Dataset Summary This dataset is a collection of monolingual Igbo sentences. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Igbo (ig) ## Dataset Structure ### Data Instances Here is an example from the bb-igbo config: ``` {'content': 'Ike Ekweremmadụ\n\nIke ịda jụụ otụ nkeji banyere oke ogbugbu na-eme n\'ala Naijiria agwụla Ekweremmadụ\n\nOsote onye-isi ndị ome-iwu Naịjirịa bụ Ike Ekweremadu ekwuola na ike agwụla ndị Sịnatị iji otu nkeji darajụụ akwanyere ndị egburu n\'ime oke ọgbaghara dị na Naịjirịa oge ọ bula.\n\nEkweremadu katọrọ mwakpọ na ogbugbu ndị Naịjirịa aka ha dị ọcha nke ndị Fulani na-achị ehi mere, kwuo na ike agwụla ndị ome- iwu ịkwanyere ha ugwu n\'otu nkeji\'\n\nCheta n\'otu ịzụka gara-aga ka emere akwam ozu mmadụ ruru iri asaa egburu na Local Gọọmenti Logo na Guma nke Benue Steeti, e be ihe kariri mmadụ iri ise ka akụkọ kwuru n\'egburu na Taraba Steeti.\n\nEkweremadu gosiri iwe gbasara ogbugbu ndị mmadụ na nzukọ ndị ome-iwu n\'ụbọchị taa, kwuo na Naịjirịa ga-ebu ụzọ nwe udo na nchekwa, tupu e kwuowa okwu iwulite obodo.\n\nỌ sịrị: "Ndị ome-iwu abụghị sọ ọsọ ndị ihe a metụtara, kama ndị Naịjirịa niile.\n\n\'Ike agwụla anyị iji otu nkeji dị jụụ maka nkwanye ugwu. 
Ihe anyị chọrọ bụ udo na nchekwa tupu echewa echịchị nwuli obodo."', 'date': '2018-01-19T17:07:38Z', 'description': "N'ihi oke ogbugbu ndị mmadụ na Naịjirịa gbagburu gburu, osota onyeisi ndị ome-iwu Naịjirịa bụ Ike Ekweremadu ekwuola na ihe Naịjiria chọrọ bụ nchekwa tara ọchịchị, tupu ekwuwa okwu ihe ọzọ.", 'headline': 'Ekweremadu: Ike agwụla ndị ụlọ ome iwu', 'source': 'https://www.bbc.com/igbo/42712250', 'tags': [], 'title': 'Ekweremadu: Ike agwụla ndị ụlọ ome iwu'} ``` ### Data Fields For config 'eze_goes_to_school': - format, title, chapters For config 'bbc-igbo' : - source, title, description, date (Missing date values replaced with empty strings), headline, content, tags (Missing tags replaced with empty list) For config 'igbo-radio': - source, headline, author, date, description, content For config 'jw-ot-igbo': - format, title, chapters For config 'jw-nt-igbo': - format, title, chapters For config 'jw-books': - title, content, format, date (Missing date values replaced with empty strings) For config 'jw-teta': - title, content, format, date (Missing date values replaced with empty strings) For config 'jw-ulo_nche': - title, content, format, date (Missing date values replaced with empty strings) For config 'jw-ulo_nche_naamu': - title, content, format, date (Missing date values replaced with empty strings) ### Data Splits | bbc-igbo | eze_goes_to_school |igbo-radio| jw-books|jw-nt-igbo| jw-ot-igbo | jw-teta |jw-ulo_nche |jw-ulo_nche_naamu | ------------- |:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:| | 1297 | 1 | 440 | 48 | 27 | 39 | 37 | 55 | 88 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @misc{ezeani2020igboenglish, title={Igbo-English Machine Translation: An Evaluation Benchmark}, author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, year={2020}, eprint={2004.00648}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
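The Data Fields section above notes that missing `date` values in the bbc-igbo config are replaced with empty strings and missing `tags` with an empty list. As a small illustration (not part of the original card), a consumer should guard for those sentinel values before use; the sketch below reuses the sample record from the Data Instances section, with field values abbreviated:

```python
from datetime import datetime, timezone

# A bbc-igbo record, abbreviated from the Data Instances example above.
# Missing dates are stored as "" and missing tags as [] in this config.
record = {
    "source": "https://www.bbc.com/igbo/42712250",
    "title": "Ekweremadu: Ike agwụla ndị ụlọ ome iwu",
    "date": "2018-01-19T17:07:38Z",
    "tags": [],
}

def parse_date(raw: str):
    """Return a datetime for an ISO-8601 'Z' timestamp, or None if missing."""
    if not raw:  # empty string marks a missing date
        return None
    return datetime.strptime(raw, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

print(parse_date(record["date"]).year)  # 2018
print(parse_date(""))                   # None (missing date)
```

The same guard applies to `tags`: an empty list simply means no tags were present in the source article.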
igbo_monolingual
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:ig", "license:unknown", "arxiv:2004.00648", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ig"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Igbo Monolingual Dataset", "config_names": ["bbc-igbo", "eze_goes_to_school", "igbo-radio", "jw-books", "jw-nt-igbo", "jw-ot-igbo", "jw-teta", "jw-ulo_nche", "jw-ulo_nche_naamu"], "dataset_info": [{"config_name": "eze_goes_to_school", "features": [{"name": "format", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "chapters", "sequence": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 128309, "num_examples": 1}], "download_size": 8260947, "dataset_size": 128309}, {"config_name": "bbc-igbo", "features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3488908, "num_examples": 1297}], "download_size": 8260947, "dataset_size": 3488908}, {"config_name": "igbo-radio", "features": [{"name": "source", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1129644, "num_examples": 440}], "download_size": 8260947, "dataset_size": 1129644}, {"config_name": "jw-ot-igbo", "features": [{"name": "format", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "chapters", "sequence": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": 
"string"}]}], "splits": [{"name": "train", "num_bytes": 3489314, "num_examples": 39}], "download_size": 8260947, "dataset_size": 3489314}, {"config_name": "jw-nt-igbo", "features": [{"name": "format", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "chapters", "sequence": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1228779, "num_examples": 27}], "download_size": 8260947, "dataset_size": 1228779}, {"config_name": "jw-books", "features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "format", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9456342, "num_examples": 48}], "download_size": 8260947, "dataset_size": 9456342}, {"config_name": "jw-teta", "features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "format", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 991111, "num_examples": 37}], "download_size": 8260947, "dataset_size": 991111}, {"config_name": "jw-ulo_nche", "features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "format", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1952360, "num_examples": 55}], "download_size": 8260947, "dataset_size": 1952360}, {"config_name": "jw-ulo_nche_naamu", "features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "format", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7248017, "num_examples": 88}], "download_size": 8260947, "dataset_size": 7248017}]}
2024-01-18T11:06:21+00:00
[ "2004.00648" ]
[ "ig" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-Igbo #license-unknown #arxiv-2004.00648 #region-us
Dataset Card for Igbo Monolingual Dataset ========================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL ### Dataset Summary A dataset is a collection of Monolingual Igbo sentences. ### Supported Tasks and Leaderboards ### Languages Igbo (ig) Dataset Structure ----------------- ### Data Instances Here is an example from the bb-igbo config: ### Data Fields For config 'eze\_goes\_to\_school': * format, title, chapters For config 'bbc-igbo' : * source, title, description, date (Missing date values replaced with empty strings), headline, content, tags (Missing tags replaced with empty list) For config 'igbo-radio': * source, headline, author, date, description, content For config 'jw-ot-igbo': * format, title, chapters For config 'jw-nt-igbo': * format, title, chapters For config 'jw-books': * title, content, format, date (Missing date values replaced with empty strings) For config 'jw-teta': * title, content, format, date (Missing date values replaced with empty strings) For config 'jw-ulo\_nche': * title, content, format, date (Missing date values replaced with empty strings) For config 'jw-ulo\_nche\_naamu': * title, content, format, date (Missing date values replaced with empty strings) ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information @misc{ezeani2020igboenglish, title={Igbo-English Machine Translation: An Evaluation Benchmark}, author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, year={2020}, eprint={2004.00648}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to @purvimisal for adding this dataset.
[ "### Dataset Summary\n\n\nA dataset is a collection of Monolingual Igbo sentences.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nIgbo (ig)\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example from the bb-igbo config:", "### Data Fields\n\n\nFor config 'eze\\_goes\\_to\\_school':\n\n\n* format, title, chapters\n\n\nFor config 'bbc-igbo' :\n\n\n* source, title, description, date (Missing date values replaced with empty strings), headline, content, tags (Missing tags replaced with empty list)\n\n\nFor config 'igbo-radio':\n\n\n* source, headline, author, date, description, content\n\n\nFor config 'jw-ot-igbo':\n\n\n* format, title, chapters\n\n\nFor config 'jw-nt-igbo':\n\n\n* format, title, chapters\n\n\nFor config 'jw-books':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)\n\n\nFor config 'jw-teta':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)\n\n\nFor config 'jw-ulo\\_nche':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)\n\n\nFor config 'jw-ulo\\_nche\\_naamu':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\n@misc{ezeani2020igboenglish, \n\ntitle={Igbo-English Machine Translation: An Evaluation Benchmark}, \n\nauthor={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe 
and Chinedu Uchechukwu and Mark Hepple}, \n\nyear={2020}, \n\neprint={2004.00648}, \n\narchivePrefix={arXiv}, \n\nprimaryClass={cs.CL} \n\n}", "### Contributions\n\n\nThanks to @purvimisal for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-Igbo #license-unknown #arxiv-2004.00648 #region-us \n", "### Dataset Summary\n\n\nA dataset is a collection of Monolingual Igbo sentences.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nIgbo (ig)\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example from the bb-igbo config:", "### Data Fields\n\n\nFor config 'eze\\_goes\\_to\\_school':\n\n\n* format, title, chapters\n\n\nFor config 'bbc-igbo' :\n\n\n* source, title, description, date (Missing date values replaced with empty strings), headline, content, tags (Missing tags replaced with empty list)\n\n\nFor config 'igbo-radio':\n\n\n* source, headline, author, date, description, content\n\n\nFor config 'jw-ot-igbo':\n\n\n* format, title, chapters\n\n\nFor config 'jw-nt-igbo':\n\n\n* format, title, chapters\n\n\nFor config 'jw-books':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)\n\n\nFor config 'jw-teta':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)\n\n\nFor config 'jw-ulo\\_nche':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)\n\n\nFor config 'jw-ulo\\_nche\\_naamu':\n\n\n* title, content, format, date (Missing date values replaced with empty strings)", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social 
Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\n@misc{ezeani2020igboenglish, \n\ntitle={Igbo-English Machine Translation: An Evaluation Benchmark}, \n\nauthor={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, \n\nyear={2020}, \n\neprint={2004.00648}, \n\narchivePrefix={arXiv}, \n\nprimaryClass={cs.CL} \n\n}", "### Contributions\n\n\nThanks to @purvimisal for adding this dataset." ]
a9679ac085dad7749f02aea3c1899a0985525c9d
# Dataset Card for Igbo NER dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner - **Repository:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner - **Paper:** https://arxiv.org/abs/2004.00648 ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here is an example from the dataset: ``` {'content_n': 'content_0', 'named_entity': 'Ike Ekweremmadụ', 'sentences': ['Ike Ekweremmadụ', "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na-eme n'ala Naijiria agwụla Ekweremmadụ"]} ``` ### Data Fields - content_n : ID - named_entity : Name of the entity - sentences : List of sentences for the entity ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More 
Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @misc{ezeani2020igboenglish, title={Igbo-English Machine Translation: An Evaluation Benchmark}, author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, year={2020}, eprint={2004.00648}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
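To make the `ner_data` field layout concrete, here is a small self-contained sketch (an illustration, not an official snippet from the card): each record pairs one named entity with the list of sentences it occurs in, so a per-entity sentence count is just the length of that list. The dict literal mirrors the Data Instances example above:

```python
# The ner_data example shown above: one entity plus the sentences
# it appears in.
example = {
    "content_n": "content_0",
    "named_entity": "Ike Ekweremmadụ",
    "sentences": [
        "Ike Ekweremmadụ",
        "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na-eme n'ala Naijiria agwụla Ekweremmadụ",
    ],
}

# A per-entity sentence count is the length of the sentences list.
sentence_counts = {example["named_entity"]: len(example["sentences"])}
print(sentence_counts)  # {'Ike Ekweremmadụ': 2}
```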
igbo_ner
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ig", "license:unknown", "arxiv:2004.00648", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ig"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Igbo NER dataset", "dataset_info": [{"config_name": "ner_data", "features": [{"name": "content_n", "dtype": "string"}, {"name": "named_entity", "dtype": "string"}, {"name": "sentences", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 60315228, "num_examples": 30715}], "download_size": 3311204, "dataset_size": 60315228}, {"config_name": "free_text", "features": [{"name": "sentences", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1172152, "num_examples": 10000}], "download_size": 1132151, "dataset_size": 1172152}]}
2024-01-18T11:06:23+00:00
[ "2004.00648" ]
[ "ig" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Igbo #license-unknown #arxiv-2004.00648 #region-us
# Dataset Card for Igbo NER dataset ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances Here is an example from the dataset: ### Data Fields - content_n : ID - named_entity : Name of the entity - sentences : List of sentences for the entity ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information @misc{ezeani2020igboenglish, title={Igbo-English Machine Translation: An Evaluation Benchmark}, author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, year={2020}, eprint={2004.00648}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to @purvimisal for adding this dataset.
[ "# Dataset Card for Igbo NER dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere is an example from the dataset:", "### Data Fields\n\n- content_n : ID \n- named_entity : Name of the entity \n- sentences : List of sentences for the entity", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@misc{ezeani2020igboenglish, \n title={Igbo-English Machine Translation: An Evaluation Benchmark}, \n author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, \n year={2020}, \n eprint={2004.00648}, \n archivePrefix={arXiv}, \n primaryClass={cs.CL} \n}", "### Contributions\n\nThanks to @purvimisal for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Igbo #license-unknown #arxiv-2004.00648 #region-us \n", "# Dataset Card for Igbo NER dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere is an example from the dataset:", "### Data Fields\n\n- content_n : ID \n- named_entity : Name of the entity \n- sentences : List of sentences for the entity", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@misc{ezeani2020igboenglish, \n title={Igbo-English Machine Translation: An Evaluation Benchmark}, \n author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, \n year={2020}, \n 
eprint={2004.00648}, \n archivePrefix={arXiv}, \n primaryClass={cs.CL} \n}", "### Contributions\n\nThanks to @purvimisal for adding this dataset." ]
b611a7b8f9739d3da8a321bf49f19b60da614d85
# Dataset Card for ilist ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/kmi-linguistics/vardial2018 - **Paper:** [Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign](https://aclanthology.org/W18-3901/) - **Leaderboard:** - **Point of Contact:** [email protected] ### Dataset Summary This dataset was introduced in a task that aimed at identifying five closely related languages of the Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri and Magahi. These languages form part of a continuum starting from Western Uttar Pradesh (Hindi and Braj Bhasha) to Eastern Uttar Pradesh (Awadhi and Bhojpuri) and the neighbouring Eastern state of Bihar (Bhojpuri and Magahi). For this task, participants were provided with a dataset of approximately 15,000 sentences in each language, mainly from the domain of literature, published over the web as well as in print. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages Hindi, Braj Bhasha, Awadhi, Bhojpuri and Magahi ## Dataset Structure ### Data Instances ``` { "language_id": 4, "text": 'तभी बारिश हुई थी जिसका गीलापन इन मूर्तियों को इन तस्वीरों में एक अलग रूप देता है .' } ``` ### Data Fields - `text`: text which you want to classify - `language_id`: label for the text as an integer from 0 to 4 The language ids correspond to the following languages: "AWA", "BRA", "MAG", "BHO", "HIN". ### Data Splits | | train | valid | test | |----------------------|-------|-------|-------| | # of input sentences | 70351 | 9692 | 10329 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data The data for this task was collected from both hard printed and digital sources. Printed materials were obtained from different institutions that promote these languages. We also gathered data from libraries, as well as from local literary and cultural groups. We collected printed stories, novels and essays in books, magazines, and newspapers. #### Initial Data Collection and Normalization We scanned the printed materials, then we performed OCR, and finally we asked native speakers of the respective languages to correct the OCR output. Since there are no specific OCR models available for these languages, we used the Google OCR for Hindi, part of the Drive API. Since all the languages used the Devanagari script, we expected the OCR to work reasonably well, and overall it did. We further managed to get some blogs in Magahi and Bhojpuri. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This work is licensed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0/ ### Citation Information ``` @inproceedings{zampieri-etal-2018-language, title = "Language Identification and Morphosyntactic Tagging: The Second {V}ar{D}ial Evaluation Campaign", author = {Zampieri, Marcos and Malmasi, Shervin and Nakov, Preslav and Ali, Ahmed and Shon, Suwon and Glass, James and Scherrer, Yves and Samard{\v{z}}i{\'c}, Tanja and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and van der Lee, Chris and Grondelaers, Stefan and Oostdijk, Nelleke and Speelman, Dirk and van den Bosch, Antal and Kumar, Ritesh and Lahiri, Bornini and Jain, Mayank}, booktitle = "Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)", month = aug, year = "2018", address = "Santa Fe, New Mexico, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W18-3901", pages = "1--17", } ``` ### Contributions Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
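Because the ilist card gives the `language_id` label order only in prose ("AWA", "BRA", "MAG", "BHO", "HIN" for ids 0 through 4), a tiny sketch (an illustration, not part of the original card) shows how an integer id maps back to its label, reusing the Data Instances example:

```python
# Label order as listed in the Data Fields section: ids 0-4.
LABELS = ["AWA", "BRA", "MAG", "BHO", "HIN"]

instance = {
    "language_id": 4,
    "text": "तभी बारिश हुई थी जिसका गीलापन इन मूर्तियों को इन तस्वीरों में एक अलग रूप देता है .",
}

# Map the integer id back to its language code.
label = LABELS[instance["language_id"]]
print(label)  # HIN
```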
ilist
[ "task_categories:text-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:awa", "language:bho", "language:bra", "language:hi", "language:mag", "license:cc-by-4.0", "language-identification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["awa", "bho", "bra", "hi", "mag"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "ilist", "tags": ["language-identification"], "dataset_info": {"features": [{"name": "language_id", "dtype": {"class_label": {"names": {"0": "AWA", "1": "BRA", "2": "MAG", "3": "BHO", "4": "HIN"}}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14362998, "num_examples": 70351}, {"name": "test", "num_bytes": 2146857, "num_examples": 9692}, {"name": "validation", "num_bytes": 2407643, "num_examples": 10329}], "download_size": 18284850, "dataset_size": 18917498}}
2024-01-18T11:06:24+00:00
[]
[ "awa", "bho", "bra", "hi", "mag" ]
TAGS #task_categories-text-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Awadhi #language-Bhojpuri #language-Braj #language-Hindi #language-Magahi #license-cc-by-4.0 #language-identification #region-us
Dataset Card for ilist ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign * Leaderboard: * Point of Contact: URL@URL ### Dataset Summary This dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri and Magahi. These languages form part of a continuum starting from Western Uttar Pradesh (Hindi and Braj Bhasha) to Eastern Uttar Pradesh (Awadhi and Bhojpuri) and the neighbouring Eastern state of Bihar (Bhojpuri and Magahi). For this task, participants were provided with a dataset of approximately 15,000 sentences in each language, mainly from the domain of literature, published over the web as well as in print. ### Supported Tasks and Leaderboards ### Languages Hindi, Braj Bhasha, Awadhi, Bhojpuri and Magahi Dataset Structure ----------------- ### Data Instances ### Data Fields * 'text': text which you want to classify * 'language\_id': label for the text as an integer from 0 to 4 The language ids correspond to the following languages: "AWA", "BRA", "MAG", "BHO", "HIN". ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data The data for this task was collected from both hard printed and digital sources. 
Printed materials were obtained from different institutions that promote these languages. We also gathered data from libraries, as well as from local literary and cultural groups. We collected printed stories, novels and essays in books, magazines, and newspapers. #### Initial Data Collection and Normalization We scanned the printed materials, then we performed OCR, and finally we asked native speakers of the respective languages to correct the OCR output. Since there are no specific OCR models available for these languages, we used the Google OCR for Hindi, part of the Drive API. Since all the languages used the Devanagari script, we expected the OCR to work reasonably well, and overall it did. We further managed to get some blogs in Magahi and Bhojpuri. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information This work is licensed under a Creative Commons Attribution 4.0 International License: URL ### Contributions Thanks to @vasudevgupta7 for adding this dataset.
[ "### Dataset Summary\n\n\nThis dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri and Magahi. These languages form part of a continuum starting from Western Uttar Pradesh (Hindi and Braj Bhasha) to Eastern Uttar Pradesh (Awadhi and Bhojpuri) and the neighbouring Eastern state of Bihar (Bhojpuri and Magahi).\n\n\nFor this task, participants were provided with a dataset of approximately 15,000 sentences in each language, mainly from the domain of literature, published over the web as well as in print.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nHindi, Braj Bhasha, Awadhi, Bhojpuri and Magahi\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'text': text which you want to classify\n* 'language\\_id': label for the text as an integer from 0 to 4\nThe language ids correspond to the following languages: \"AWA\", \"BRA\", \"MAG\", \"BHO\", \"HIN\".", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe data for this task was collected from both hard printed and digital sources. Printed materials were\nobtained from different institutions that promote these languages. We also gathered data from libraries,\nas well as from local literary and cultural groups. We collected printed stories, novels and essays in\nbooks, magazines, and newspapers.", "#### Initial Data Collection and Normalization\n\n\nWe scanned the printed materials, then we performed OCR, and\nfinally we asked native speakers of the respective languages to correct the OCR output. Since there are\nno specific OCR models available for these languages, we used the Google OCR for Hindi, part of the\nDrive API. Since all the languages used the Devanagari script, we expected the OCR to work reasonably\nwell, and overall it did. 
We further managed to get some blogs in Magahi and Bhojpuri.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International License: URL", "### Contributions\n\n\nThanks to @vasudevgupta7 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Awadhi #language-Bhojpuri #language-Braj #language-Hindi #language-Magahi #license-cc-by-4.0 #language-identification #region-us \n", "### Dataset Summary\n\n\nThis dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri and Magahi. These languages form part of a continuum starting from Western Uttar Pradesh (Hindi and Braj Bhasha) to Eastern Uttar Pradesh (Awadhi and Bhojpuri) and the neighbouring Eastern state of Bihar (Bhojpuri and Magahi).\n\n\nFor this task, participants were provided with a dataset of approximately 15,000 sentences in each language, mainly from the domain of literature, published over the web as well as in print.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nHindi, Braj Bhasha, Awadhi, Bhojpuri and Magahi\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'text': text which you want to classify\n* 'language\\_id': label for the text as an integer from 0 to 4\nThe language ids correspond to the following languages: \"AWA\", \"BRA\", \"MAG\", \"BHO\", \"HIN\".", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe data for this task was collected from both hard printed and digital sources. Printed materials were\nobtained from different institutions that promote these languages. We also gathered data from libraries,\nas well as from local literary and cultural groups. 
We collected printed stories, novels and essays in\nbooks, magazines, and newspapers.", "#### Initial Data Collection and Normalization\n\n\nWe scanned the printed materials, then we performed OCR, and\nfinally we asked native speakers of the respective languages to correct the OCR output. Since there are\nno specific OCR models available for these languages, we used the Google OCR for Hindi, part of the\nDrive API. Since all the languages used the Devanagari script, we expected the OCR to work reasonably\nwell, and overall it did. We further managed to get some blogs in Magahi and Bhojpuri.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International License: URL", "### Contributions\n\n\nThanks to @vasudevgupta7 for adding this dataset." ]
e6281661ce1c48d982bc483cf8a173c1bbeb5d31
# Dataset Card for "imdb" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 84.13 MB - **Size of the generated dataset:** 133.23 MB - **Total amount of disk used:** 217.35 MB ### Dataset Summary Large Movie Review Dataset. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. 
We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 84.13 MB - **Size of the generated dataset:** 133.23 MB - **Total amount of disk used:** 217.35 MB An example of 'train' looks as follows. ``` { "label": 0, "text": "Goodbye world2\n" } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `text`: a `string` feature. - `label`: a classification label, with possible values including `neg` (0), `pos` (1). ### Data Splits | name |train|unsupervised|test | |----------|----:|-----------:|----:| |plain_text|25000| 50000|25000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{maas-EtAl:2011:ACL-HLT2011, author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher}, title = {Learning Word Vectors for Sentiment Analysis}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = {June}, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {142--150}, url = {http://www.aclweb.org/anthology/P11-1015} } ``` ### Contributions Thanks to [@ghazi-f](https://github.com/ghazi-f), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
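As a quick sanity check on the imdb split statistics quoted in the card metadata, the per-split byte and example counts sum to the reported totals (a stdlib-only sketch; the figures are copied from the metadata above and the variable names are illustrative):

```python
# Per-split statistics for the imdb "plain_text" config, as reported
# in the card metadata.
splits = {
    "train":        {"num_bytes": 33432823, "num_examples": 25000},
    "test":         {"num_bytes": 32650685, "num_examples": 25000},
    "unsupervised": {"num_bytes": 67106794, "num_examples": 50000},
}

total_bytes = sum(s["num_bytes"] for s in splits.values())
total_examples = sum(s["num_examples"] for s in splits.values())

print(total_bytes)     # 133190302, the dataset_size reported in the metadata
print(total_examples)  # 100000
```

This matches the split table in the card: 25,000 labeled training reviews, 25,000 labeled test reviews, and 50,000 unsupervised documents.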
imdb
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "paperswithcode_id": "imdb-movie-reviews", "pretty_name": "IMDB", "dataset_info": {"config_name": "plain_text", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}], "splits": [{"name": "train", "num_bytes": 33432823, "num_examples": 25000}, {"name": "test", "num_bytes": 32650685, "num_examples": 25000}, {"name": "unsupervised", "num_bytes": 67106794, "num_examples": 50000}], "download_size": 83446840, "dataset_size": 133190302}, "configs": [{"config_name": "plain_text", "data_files": [{"split": "train", "path": "plain_text/train-*"}, {"split": "test", "path": "plain_text/test-*"}, {"split": "unsupervised", "path": "plain_text/unsupervised-*"}], "default": true}], "train-eval-index": [{"config": "plain_text", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"text": "text", "label": "target"}, "metrics": [{"type": "accuracy"}, {"name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", 
"args": {"average": "weighted"}}]}]}
2024-01-04T12:09:45+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us
Dataset Card for "imdb" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 84.13 MB * Size of the generated dataset: 133.23 MB * Total amount of disk used: 217.35 MB ### Dataset Summary Large Movie Review Dataset. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### plain\_text * Size of downloaded dataset files: 84.13 MB * Size of the generated dataset: 133.23 MB * Total amount of disk used: 217.35 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'text': a 'string' feature. * 'label': a classification label, with possible values including 'neg' (0), 'pos' (1). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @ghazi-f, @patrickvonplaten, @lhoestq, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nLarge Movie Review Dataset.\nThis is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 84.13 MB\n* Size of the generated dataset: 133.23 MB\n* Total amount of disk used: 217.35 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'neg' (0), 'pos' (1).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ghazi-f, @patrickvonplaten, @lhoestq, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nLarge Movie Review Dataset.\nThis is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 84.13 MB\n* Size of the generated dataset: 133.23 MB\n* Total amount of disk used: 217.35 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'neg' (0), 'pos' (1).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ghazi-f, @patrickvonplaten, @lhoestq, @thomwolf for adding this dataset." ]
a0eb2564ec6e64cdb5762d73a1a8b68bb0c60bd8
# Dataset Card for ImDB Urdu Reviews ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/mirfan899/Urdu) - **Repository:** [Github](https://github.com/mirfan899/Urdu) - **Paper:** [Aclweb](http://www.aclweb.org/anthology/P11-1015) - **Leaderboard:** - **Point of Contact:** [Ikram Ali](https://github.com/akkefa) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: The movie review which was translated into Urdu. - sentiment: The sentiment exhibited in the review, either positive or negative. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
imdb_urdu_reviews
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ur", "license:odbl", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["machine-generated"], "language": ["ur"], "license": ["odbl"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "ImDB Urdu Reviews", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "sentiment", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}], "splits": [{"name": "train", "num_bytes": 114670811, "num_examples": 50000}], "download_size": 31510992, "dataset_size": 114670811}}
2024-01-18T11:06:26+00:00
[]
[ "ur" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Urdu #license-odbl #region-us
# Dataset Card for ImDB Urdu Reviews ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Github - Repository: Github - Paper: Aclweb - Leaderboard: - Point of Contact: Ikram Ali ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields - sentence: The movie review which was translated into Urdu. - sentiment: The sentiment exhibited in the review, either positive or negative. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @chaitnayabasava for adding this dataset.
[ "# Dataset Card for ImDB Urdu Reviews", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Aclweb\n- Leaderboard:\n- Point of Contact: Ikram Ali", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: The movie review which was translated into Urdu.\n- sentiment: The sentiment exhibited in the review, either positive or negative.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @chaitnayabasava for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Urdu #license-odbl #region-us \n", "# Dataset Card for ImDB Urdu Reviews", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Aclweb\n- Leaderboard:\n- Point of Contact: Ikram Ali", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: The movie review which was translated into Urdu.\n- sentiment: The sentiment exhibited in the review, either positive or negative.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @chaitnayabasava for adding this dataset." ]
1f25e2870d4598ee04522bdb4cd4c7a65b26d5ba
# Dataset Card for IMPPRES

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/facebookresearch/Imppres)
- **Repository:** [Github](https://github.com/facebookresearch/Imppres)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.acl-main.768)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Over 25k semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.

### Supported Tasks and Leaderboards

Natural Language Inference.

### Languages

English.

## Dataset Structure

### Data Instances

The data consists of 2 configurations: implicature and presupposition.
Each configuration consists of several different sub-datasets:

**Presupposition**
- all_n_presupposition
- change_of_state
- cleft_uniqueness
- possessed_definites_existence
- question_presupposition
- both_presupposition
- cleft_existence
- only_presupposition
- possessed_definites_uniqueness

**Implicature**
- connectives
- gradable_adjective
- gradable_verb
- modals
- numerals_10_100
- numerals_2_3
- quantifiers

Each sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence. The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure well-formedness. We semi-automatically generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b).

Here is an instance of the raw presupposition data from any sub-dataset:

```json
{
    "sentence1": "All ten guys that proved to boast might have been divorcing.",
    "sentence2": "There are exactly ten guys that proved to boast.",
    "trigger": "modal",
    "presupposition": "positive",
    "gold_label": "entailment",
    "UID": "all_n_presupposition",
    "pairID": "9e",
    "paradigmID": 0
}
```

and the raw implicature data from any sub-dataset:

```json
{
    "sentence1": "That teenager couldn't yell.",
    "sentence2": "That teenager could yell.",
    "gold_label_log": "contradiction",
    "gold_label_prag": "contradiction",
    "spec_relation": "negation",
    "item_type": "control",
    "trigger": "modal",
    "lexemes": "can - have to"
}
```

### Data Fields

**Presupposition**

There is a slight mapping between the raw data fields in the presupposition sub-datasets and the fields appearing in the HuggingFace Datasets.
When dealing with the HF Dataset, the following mapping of fields happens:

```text
"premise" -> "sentence1"
"hypothesis" -> "sentence2"
"trigger" -> "trigger" or "Not_In_Example"
"trigger1" -> "trigger1" or "Not_In_Example"
"trigger2" -> "trigger2" or "Not_In_Example"
"presupposition" -> "presupposition" or "Not_In_Example"
"gold_label" -> "gold_label"
"UID" -> "UID"
"pairID" -> "pairID"
"paradigmID" -> "paradigmID"
```

For the most part, the majority of the raw fields remain unchanged. However, when it comes to the various `trigger` fields, a new mapping was introduced. Some examples in the dataset only have the `trigger` field, while other examples have the `trigger1` and `trigger2` fields without the `trigger` or `presupposition` field. Nominally, most examples look like the example in the Data Instances section above. Occasionally, however, some examples will look like:

```python
{
    'sentence1': 'Did that committee know when Lissa walked through the cafe?',
    'sentence2': 'That committee knew when Lissa walked through the cafe.',
    'trigger1': 'interrogative',
    'trigger2': 'unembedded',
    'gold_label': 'neutral',
    'control_item': True,
    'UID': 'question_presupposition',
    'pairID': '1821n',
    'paradigmID': 95
}
```

In this example, `trigger1` and `trigger2` appear, and `presupposition` and `trigger` are removed. This maintains the length of the dictionary. To account for these examples, we have introduced the mapping above so that all examples accessed through the HF Datasets interface have the same size as well as the same fields. In the event that an example does not have a value for one of the fields, the field is maintained in the dictionary but given a value of `Not_In_Example`.
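The normalization described above can be sketched in a few lines of Python. Note that `normalize`, `FIELD_RENAMES`, and `OPTIONAL_FIELDS` below are illustrative names for this sketch, not part of the IMPPRES codebase or the actual HF loading script:

```python
# Illustrative sketch of the field normalization described above.
FIELD_RENAMES = {"sentence1": "premise", "sentence2": "hypothesis"}
OPTIONAL_FIELDS = ("trigger", "trigger1", "trigger2", "presupposition")

def normalize(raw):
    """Rename sentence1/sentence2 and fill any absent trigger/presupposition
    fields with "Not_In_Example" so every example shares one schema."""
    example = {FIELD_RENAMES.get(key, key): value for key, value in raw.items()}
    for field in OPTIONAL_FIELDS:
        example.setdefault(field, "Not_In_Example")
    return example

raw = {
    "sentence1": "All ten guys that proved to boast might have been divorcing.",
    "sentence2": "There are exactly ten guys that proved to boast.",
    "trigger": "modal",
    "presupposition": "positive",
    "gold_label": "entailment",
    "UID": "all_n_presupposition",
    "pairID": "9e",
    "paradigmID": 0,
}
example = normalize(raw)
# example now carries "premise"/"hypothesis" instead of sentence1/sentence2,
# and "trigger1"/"trigger2" are filled with "Not_In_Example".
```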
To illustrate this point, the example given in the Data Instances section above would look like the following in the HF Datasets:

```json
{
    "premise": "All ten guys that proved to boast might have been divorcing.",
    "hypothesis": "There are exactly ten guys that proved to boast.",
    "trigger": "modal",
    "trigger1": "Not_In_Example",
    "trigger2": "Not_In_Example",
    "presupposition": "positive",
    "gold_label": "entailment",
    "UID": "all_n_presupposition",
    "pairID": "9e",
    "paradigmID": 0
}
```

Below is a description of the fields:

```text
"premise": The premise.
"hypothesis": The hypothesis.
"trigger": A detailed discussion of trigger types appears in the paper.
"trigger1": A detailed discussion of trigger types appears in the paper.
"trigger2": A detailed discussion of trigger types appears in the paper.
"presupposition": positive or negative.
"gold_label": Corresponds to entailment, contradiction, or neutral.
"UID": Unique id.
"pairID": Sentence pair ID.
"paradigmID": ?
```

It is not immediately clear what the difference between `trigger`, `trigger1`, and `trigger2` is, or what the `paradigmID` refers to.

**Implicature**

The `implicature` fields only have the mapping below:

```text
"premise" -> "sentence1"
"hypothesis" -> "sentence2"
```

Here is a description of the fields:

```text
"premise": The premise.
"hypothesis": The hypothesis.
"gold_label_log": Gold label for a logical reading of the sentence pair.
"gold_label_prag": Gold label for a pragmatic reading of the sentence pair.
"spec_relation": ?
"item_type": ?
"trigger": A detailed discussion of trigger types appears in the paper.
"lexemes": ?
```

### Data Splits

As the dataset was created to test already trained models, the only split that exists is for testing.

## Dataset Creation

### Curation Rationale

IMPPRES was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.
### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

The annotations were generated semi-automatically.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

IMPPRES is available under a Creative Commons Attribution-NonCommercial 4.0 International Public License ("The License"). You may not use these files except in compliance with the License. Please see the LICENSE file for more information before you use the dataset.

### Citation Information

```bibtex
@inproceedings{jeretic-etal-2020-natural,
    title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}",
    author = "Jereti\v{c}, Paloma and Warstadt, Alex and Bhooshan, Suvrat and Williams, Adina",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.768",
    doi = "10.18653/v1/2020.acl-main.768",
    pages = "8690--8705",
    abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types.
We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.", } ``` ### Contributions Thanks to [@aclifton314](https://github.com/aclifton314) for adding this dataset.
imppres
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "paperswithcode_id": "imppres", "pretty_name": "IMPPRES", "dataset_info": [{"config_name": "implicature_connectives", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "gold_label_log", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "gold_label_prag", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "spec_relation", "dtype": "string"}, {"name": "item_type", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "lexemes", "dtype": "string"}], "splits": [{"name": "connectives", "num_bytes": 221844, "num_examples": 1200}], "download_size": 25478, "dataset_size": 221844}, {"config_name": "implicature_gradable_adjective", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "gold_label_log", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "gold_label_prag", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "spec_relation", "dtype": "string"}, {"name": "item_type", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "lexemes", "dtype": "string"}], "splits": [{"name": "gradable_adjective", "num_bytes": 153648, "num_examples": 1200}], "download_size": 17337, "dataset_size": 153648}, {"config_name": "implicature_gradable_verb", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "gold_label_log", "dtype": {"class_label": {"names": 
{"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "gold_label_prag", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "spec_relation", "dtype": "string"}, {"name": "item_type", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "lexemes", "dtype": "string"}], "splits": [{"name": "gradable_verb", "num_bytes": 180678, "num_examples": 1200}], "download_size": 21504, "dataset_size": 180678}, {"config_name": "implicature_modals", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "gold_label_log", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "gold_label_prag", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "spec_relation", "dtype": "string"}, {"name": "item_type", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "lexemes", "dtype": "string"}], "splits": [{"name": "modals", "num_bytes": 178536, "num_examples": 1200}], "download_size": 21179, "dataset_size": 178536}, {"config_name": "implicature_numerals_10_100", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "gold_label_log", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "gold_label_prag", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "spec_relation", "dtype": "string"}, {"name": "item_type", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "lexemes", "dtype": "string"}], "splits": [{"name": "numerals_10_100", "num_bytes": 208596, "num_examples": 1200}], "download_size": 22640, "dataset_size": 208596}, {"config_name": "implicature_numerals_2_3", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, 
{"name": "gold_label_log", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "gold_label_prag", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "spec_relation", "dtype": "string"}, {"name": "item_type", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "lexemes", "dtype": "string"}], "splits": [{"name": "numerals_2_3", "num_bytes": 188760, "num_examples": 1200}], "download_size": 22218, "dataset_size": 188760}, {"config_name": "implicature_quantifiers", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "gold_label_log", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "gold_label_prag", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "spec_relation", "dtype": "string"}, {"name": "item_type", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "lexemes", "dtype": "string"}], "splits": [{"name": "quantifiers", "num_bytes": 176790, "num_examples": 1200}], "download_size": 21017, "dataset_size": 176790}, {"config_name": "presupposition_all_n_presupposition", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "all_n_presupposition", "num_bytes": 458460, "num_examples": 1900}], "download_size": 43038, "dataset_size": 458460}, {"config_name": "presupposition_both_presupposition", "features": 
[{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "both_presupposition", "num_bytes": 432760, "num_examples": 1900}], "download_size": 41142, "dataset_size": 432760}, {"config_name": "presupposition_change_of_state", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "change_of_state", "num_bytes": 308595, "num_examples": 1900}], "download_size": 35814, "dataset_size": 308595}, {"config_name": "presupposition_cleft_existence", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "cleft_existence", "num_bytes": 363206, "num_examples": 1900}], "download_size": 37597, "dataset_size": 363206}, 
{"config_name": "presupposition_cleft_uniqueness", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "cleft_uniqueness", "num_bytes": 388747, "num_examples": 1900}], "download_size": 38279, "dataset_size": 388747}, {"config_name": "presupposition_only_presupposition", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "only_presupposition", "num_bytes": 348986, "num_examples": 1900}], "download_size": 38126, "dataset_size": 348986}, {"config_name": "presupposition_possessed_definites_existence", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": 
"possessed_definites_existence", "num_bytes": 362302, "num_examples": 1900}], "download_size": 38712, "dataset_size": 362302}, {"config_name": "presupposition_possessed_definites_uniqueness", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "possessed_definites_uniqueness", "num_bytes": 459371, "num_examples": 1900}], "download_size": 42068, "dataset_size": 459371}, {"config_name": "presupposition_question_presupposition", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "trigger", "dtype": "string"}, {"name": "trigger1", "dtype": "string"}, {"name": "trigger2", "dtype": "string"}, {"name": "presupposition", "dtype": "string"}, {"name": "gold_label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "UID", "dtype": "string"}, {"name": "pairID", "dtype": "string"}, {"name": "paradigmID", "dtype": "int16"}], "splits": [{"name": "question_presupposition", "num_bytes": 397195, "num_examples": 1900}], "download_size": 41247, "dataset_size": 397195}], "configs": [{"config_name": "implicature_connectives", "data_files": [{"split": "connectives", "path": "implicature_connectives/connectives-*"}]}, {"config_name": "implicature_gradable_adjective", "data_files": [{"split": "gradable_adjective", "path": "implicature_gradable_adjective/gradable_adjective-*"}]}, {"config_name": "implicature_gradable_verb", "data_files": [{"split": "gradable_verb", "path": "implicature_gradable_verb/gradable_verb-*"}]}, 
{"config_name": "implicature_modals", "data_files": [{"split": "modals", "path": "implicature_modals/modals-*"}]}, {"config_name": "implicature_numerals_10_100", "data_files": [{"split": "numerals_10_100", "path": "implicature_numerals_10_100/numerals_10_100-*"}]}, {"config_name": "implicature_numerals_2_3", "data_files": [{"split": "numerals_2_3", "path": "implicature_numerals_2_3/numerals_2_3-*"}]}, {"config_name": "implicature_quantifiers", "data_files": [{"split": "quantifiers", "path": "implicature_quantifiers/quantifiers-*"}]}, {"config_name": "presupposition_all_n_presupposition", "data_files": [{"split": "all_n_presupposition", "path": "presupposition_all_n_presupposition/all_n_presupposition-*"}]}, {"config_name": "presupposition_both_presupposition", "data_files": [{"split": "both_presupposition", "path": "presupposition_both_presupposition/both_presupposition-*"}]}, {"config_name": "presupposition_change_of_state", "data_files": [{"split": "change_of_state", "path": "presupposition_change_of_state/change_of_state-*"}]}, {"config_name": "presupposition_cleft_existence", "data_files": [{"split": "cleft_existence", "path": "presupposition_cleft_existence/cleft_existence-*"}]}, {"config_name": "presupposition_cleft_uniqueness", "data_files": [{"split": "cleft_uniqueness", "path": "presupposition_cleft_uniqueness/cleft_uniqueness-*"}]}, {"config_name": "presupposition_only_presupposition", "data_files": [{"split": "only_presupposition", "path": "presupposition_only_presupposition/only_presupposition-*"}]}, {"config_name": "presupposition_possessed_definites_existence", "data_files": [{"split": "possessed_definites_existence", "path": "presupposition_possessed_definites_existence/possessed_definites_existence-*"}]}, {"config_name": "presupposition_possessed_definites_uniqueness", "data_files": [{"split": "possessed_definites_uniqueness", "path": "presupposition_possessed_definites_uniqueness/possessed_definites_uniqueness-*"}]}, {"config_name": 
"presupposition_question_presupposition", "data_files": [{"split": "question_presupposition", "path": "presupposition_question_presupposition/question_presupposition-*"}]}]}
2024-01-08T12:36:27+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-4.0 #region-us
# Dataset Card for IMPPRES ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Github - Repository: Github - Paper: Aclweb - Leaderboard: - Point of Contact: ### Dataset Summary Over >25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures. ### Supported Tasks and Leaderboards Natural Language Inference. ### Languages English. ## Dataset Structure ### Data Instances The data consists of 2 configurations: implicature and presupposition. Each configuration consists of several different sub-datasets: Pressupposition - all_n_presupposition - change_of_state - cleft_uniqueness - possessed_definites_existence - question_presupposition - both_presupposition - cleft_existence - only_presupposition - possessed_definites_uniqueness Implicature - connectives - gradable_adjective - gradable_verb - modals - numerals_10_100 - numerals_2_3 - quantifiers Each sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence. The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure wellformedness. 
We semiautomatically generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b). Here is an instance of the raw presupposition data from any sub-dataset: and the raw implicature data from any sub-dataset: ### Data Fields Presupposition There is a slight mapping from the raw data fields in the presupposition sub-datasets and the fields appearing in the HuggingFace Datasets. When dealing with the HF Dataset, the following mapping of fields happens: For the most part, the majority of the raw fields remain unchanged. However, when it comes to the various 'trigger' fields, a new mapping was introduced. There are some examples in the dataset that only have the 'trigger' field while other examples have the 'trigger1' and 'trigger2' field without the 'trigger' or 'presupposition' field. Nominally, most examples look like the example in the Data Instances section above. Occassionally, however, some examples will look like: In this example, 'trigger1' and 'trigger2' appear and 'presupposition' and 'trigger' are removed. This maintains the length of the dictionary. To account for these examples, we have thus introduced the mapping above such that all examples accessed through the HF Datasets interface will have the same size as well as the same fields. In the event that an example does not have a value for one of the fields, the field is maintained in the dictionary but given a value of 'Not_In_Example'. To illustrate this point, the example given in the Data Instances section above would look like the following in the HF Datasets: Below is description of the fields: It is not immediately clear what the difference is between 'trigger', 'trigger1', and 'trigger2' is or what the 'paradigmID' refers to. 
Implicature The 'implicature' fields only have the mapping below: Here is a description of the fields: ### Data Splits As the dataset was created to test already trained models, the only split that exists is for testing. ## Dataset Creation ### Curation Rationale IMPPRES was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? The annotations were generated semi-automatically. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information IMPPRES is available under a Creative Commons Attribution-NonCommercial 4.0 International Public License ("The License"). You may not use these files except in compliance with the License. Please see the LICENSE file for more information before you use the dataset. ### Contributions Thanks to @aclifton314 for adding this dataset.
[ "# Dataset Card for IMPPRES", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Aclweb\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\nOver >25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.", "### Supported Tasks and Leaderboards\n\nNatural Language Inference.", "### Languages\n\nEnglish.", "## Dataset Structure", "### Data Instances\n\nThe data consists of 2 configurations: implicature and presupposition.\nEach configuration consists of several different sub-datasets:\n\nPressupposition\n- all_n_presupposition \n- change_of_state \n- cleft_uniqueness\n- possessed_definites_existence \n- question_presupposition\n- both_presupposition \n- cleft_existence \n- only_presupposition\n- possessed_definites_uniqueness\n\nImplicature\n- connectives \n- gradable_adjective \n- gradable_verb \n- modals\n- numerals_10_100 \n- numerals_2_3 \n- quantifiers\n\nEach sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence. 
The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure wellformedness. We semiautomatically generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b).\n\nHere is an instance of the raw presupposition data from any sub-dataset:\n\nand the raw implicature data from any sub-dataset:", "### Data Fields\nPresupposition\n\nThere is a slight mapping from the raw data fields in the presupposition sub-datasets and the fields appearing in the HuggingFace Datasets. \nWhen dealing with the HF Dataset, the following mapping of fields happens:\n\nFor the most part, the majority of the raw fields remain unchanged. However, when it comes to the various 'trigger' fields, a new mapping was introduced. \nThere are some examples in the dataset that only have the 'trigger' field while other examples have the 'trigger1' and 'trigger2' field without the 'trigger' or 'presupposition' field. \nNominally, most examples look like the example in the Data Instances section above. Occassionally, however, some examples will look like:\n\nIn this example, 'trigger1' and 'trigger2' appear and 'presupposition' and 'trigger' are removed. This maintains the length of the dictionary.\nTo account for these examples, we have thus introduced the mapping above such that all examples accessed through the HF Datasets interface will have the same size as well as the same fields.\nIn the event that an example does not have a value for one of the fields, the field is maintained in the dictionary but given a value of 'Not_In_Example'. 
\n\nTo illustrate this point, the example given in the Data Instances section above would look like the following in the HF Datasets:\n\n\nBelow is a description of the fields:\n\nIt is not immediately clear what the difference is between 'trigger', 'trigger1', and 'trigger2' or what the 'paradigmID' refers to.\n\nImplicature\n\nThe 'implicature' fields only have the mapping below:\n\nHere is a description of the fields:", "### Data Splits\n\nAs the dataset was created to test already trained models, the only split that exists is for testing.", "## Dataset Creation", "### Curation Rationale\n\nIMPPRES was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe annotations were generated semi-automatically.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nIMPPRES is available under a Creative Commons Attribution-NonCommercial 4.0 International Public License (\"The License\"). You may not use these files except in compliance with the License. Please see the LICENSE file for more information before you use the dataset.", "### Contributions\n\nThanks to @aclifton314 for adding this dataset." ]
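The trigger-field mapping described in this card (missing 'trigger'/'presupposition' keys filled with 'Not_In_Example' so every example has the same schema) can be sketched as follows. This is a minimal illustration, not the actual loader code; the function name and the exact key set are assumptions:

```python
# Minimal sketch of the IMPPRES field normalization described above:
# raw presupposition examples carry either a lone 'trigger' (plus
# 'presupposition') or a 'trigger1'/'trigger2' pair, and the unified
# schema fills any missing key with 'Not_In_Example'.
# The function name and exact key set are illustrative assumptions.
FILL = "Not_In_Example"
TRIGGER_KEYS = ("presupposition", "trigger", "trigger1", "trigger2")

def normalize_triggers(raw_example: dict) -> dict:
    normalized = dict(raw_example)  # leave the caller's dict untouched
    for key in TRIGGER_KEYS:
        normalized.setdefault(key, FILL)
    return normalized

raw = {"trigger1": "modal", "trigger2": "interrogative"}
print(normalize_triggers(raw)["trigger"])  # -> Not_In_Example
```

This way the dictionary length is constant across examples, matching the behaviour the card describes.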
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-4.0 #region-us \n", "# Dataset Card for IMPPRES", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Aclweb\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\nOver >25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types. 
IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.", "### Supported Tasks and Leaderboards\n\nNatural Language Inference.", "### Languages\n\nEnglish.", "## Dataset Structure", "### Data Instances\n\nThe data consists of 2 configurations: implicature and presupposition.\nEach configuration consists of several different sub-datasets:\n\nPresupposition\n- all_n_presupposition \n- change_of_state \n- cleft_uniqueness\n- possessed_definites_existence \n- question_presupposition\n- both_presupposition \n- cleft_existence \n- only_presupposition\n- possessed_definites_uniqueness\n\nImplicature\n- connectives \n- gradable_adjective \n- gradable_verb \n- modals\n- numerals_10_100 \n- numerals_2_3 \n- quantifiers\n\nEach sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence. The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure wellformedness. We semiautomatically generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b).\n\nHere is an instance of the raw presupposition data from any sub-dataset:\n\nand the raw implicature data from any sub-dataset:", "### Data Fields\nPresupposition\n\nThere is a slight mapping between the raw data fields in the presupposition sub-datasets and the fields appearing in the HuggingFace Datasets. \nWhen dealing with the HF Dataset, the following mapping of fields happens:\n\nFor the most part, the majority of the raw fields remain unchanged. However, when it comes to the various 'trigger' fields, a new mapping was introduced. 
\nThere are some examples in the dataset that only have the 'trigger' field while other examples have the 'trigger1' and 'trigger2' fields without the 'trigger' or 'presupposition' field. \nNominally, most examples look like the example in the Data Instances section above. Occasionally, however, some examples will look like:\n\nIn this example, 'trigger1' and 'trigger2' appear and 'presupposition' and 'trigger' are removed. This maintains the length of the dictionary.\nTo account for these examples, we have thus introduced the mapping above such that all examples accessed through the HF Datasets interface will have the same size as well as the same fields.\nIn the event that an example does not have a value for one of the fields, the field is maintained in the dictionary but given a value of 'Not_In_Example'. \n\nTo illustrate this point, the example given in the Data Instances section above would look like the following in the HF Datasets:\n\n\nBelow is a description of the fields:\n\nIt is not immediately clear what the difference is between 'trigger', 'trigger1', and 'trigger2' or what the 'paradigmID' refers to.\n\nImplicature\n\nThe 'implicature' fields only have the mapping below:\n\nHere is a description of the fields:", "### Data Splits\n\nAs the dataset was created to test already trained models, the only split that exists is for testing.", "## Dataset Creation", "### Curation Rationale\n\nIMPPRES was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe annotations were generated semi-automatically.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional 
Information", "### Dataset Curators", "### Licensing Information\n\nIMPPRES is available under a Creative Commons Attribution-NonCommercial 4.0 International Public License (\"The License\"). You may not use these files except in compliance with the License. Please see the LICENSE file for more information before you use the dataset.", "### Contributions\n\nThanks to @aclifton314 for adding this dataset." ]
6dcbebd49d6b12965d433bffcd07e9a786211703
# Dataset Card for "indic_glue" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ai4bharat.iitm.ac.in/indic-glue - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages](https://aclanthology.org/2020.findings-emnlp.445/) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.51 GB - **Size of the generated dataset:** 1.65 GB - **Total amount of disk used:** 5.16 GB ### Dataset Summary IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide variety of tasks and covers 11 major Indian languages - as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. 
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, we construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set examples. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. We call the converted dataset WNLI (Winograd NLI). This dataset is translated and publicly released for 3 Indian languages by AI4Bharat. 
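The pronoun-substitution conversion described in the summary can be sketched roughly as follows. The function name, the example sentence, and the candidate referents are invented for illustration; the real WNLI construction involved manual curation, not just string replacement:

```python
# Rough sketch of the Winograd-to-NLI conversion described above:
# replace the ambiguous pronoun span with each candidate referent,
# producing one (premise, hypothesis) pair per candidate. The task is
# then to predict whether the hypothesis is entailed by the premise.
# All names and the example sentence here are illustrative.
def winograd_to_nli(sentence: str, pronoun_span: str, referents: list) -> list:
    pairs = []
    for referent in referents:
        hypothesis = sentence.replace(pronoun_span, referent, 1)
        pairs.append((sentence, hypothesis))
    return pairs

pairs = winograd_to_nli(
    "The trophy didn't fit in the suitcase because it was too big.",
    "it was too big",
    ["the trophy was too big", "the suitcase was too big"],
)
print(pairs[0][1])  # -> The trophy didn't fit in the suitcase because the trophy was too big.
```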
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### actsa-sc.te - **Size of downloaded dataset files:** 0.38 MB - **Size of the generated dataset:** 1.71 MB - **Total amount of disk used:** 2.09 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "label": 0, "text": "\"ప్రయాణాల్లో ఉన్నవారికోసం బస్ స్టేషన్లు, రైల్వే స్టేషన్లలో పల్స్పోలియో బూతులను ఏర్పాటు చేసి చిన్నారులకు పోలియో చుక్కలు వేసేలా ఏర..." } ``` #### bbca.hi - **Size of downloaded dataset files:** 5.77 MB - **Size of the generated dataset:** 27.63 MB - **Total amount of disk used:** 33.40 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "label": "pakistan", "text": "\"नेटिजन यानि इंटरनेट पर सक्रिय नागरिक अब ट्विटर पर सरकार द्वारा लगाए प्रतिबंधों के समर्थन या विरोध में अपने विचार व्यक्त करते है..." } ``` #### copa.en - **Size of downloaded dataset files:** 0.75 MB - **Size of the generated dataset:** 0.12 MB - **Total amount of disk used:** 0.87 MB An example of 'validation' looks as follows. ``` { "choice1": "I swept the floor in the unoccupied room.", "choice2": "I shut off the light in the unoccupied room.", "label": 1, "premise": "I wanted to conserve energy.", "question": "effect" } ``` #### copa.gu - **Size of downloaded dataset files:** 0.75 MB - **Size of the generated dataset:** 0.23 MB - **Total amount of disk used:** 0.99 MB An example of 'train' looks as follows. 
``` This example was too long and was cropped: { "choice1": "\"સ્ત્રી જાણતી હતી કે તેનો મિત્ર મુશ્કેલ સમયમાંથી પસાર થઈ રહ્યો છે.\"...", "choice2": "\"મહિલાને લાગ્યું કે તેના મિત્રએ તેની દયાળુ લાભ લીધો છે.\"...", "label": 0, "premise": "મહિલાએ તેના મિત્રની મુશ્કેલ વર્તન સહન કરી.", "question": "cause" } ``` #### copa.hi - **Size of downloaded dataset files:** 0.75 MB - **Size of the generated dataset:** 0.23 MB - **Total amount of disk used:** 0.99 MB An example of 'validation' looks as follows. ``` { "choice1": "मैंने उसका प्रस्ताव ठुकरा दिया।", "choice2": "उन्होंने मुझे उत्पाद खरीदने के लिए राजी किया।", "label": 0, "premise": "मैंने सेल्समैन की पिच पर शक किया।", "question": "effect" } ``` ### Data Fields The data fields are the same among all splits. #### actsa-sc.te - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (0), `negative` (1). #### bbca.hi - `label`: a `string` feature. - `text`: a `string` feature. #### copa.en - `premise`: a `string` feature. - `choice1`: a `string` feature. - `choice2`: a `string` feature. - `question`: a `string` feature. - `label`: a `int32` feature. #### copa.gu - `premise`: a `string` feature. - `choice1`: a `string` feature. - `choice2`: a `string` feature. - `question`: a `string` feature. - `label`: a `int32` feature. #### copa.hi - `premise`: a `string` feature. - `choice1`: a `string` feature. - `choice2`: a `string` feature. - `question`: a `string` feature. - `label`: a `int32` feature. 
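Judging from the worked copa instances above, the integer `label` appears to be a 0-based index into the two choices (0 selects `choice1`, 1 selects `choice2`); the card does not state this explicitly, so a minimal sketch under that assumption:

```python
# Sketch: resolve a COPA-style example to its gold choice, assuming
# (from the worked examples above) that label 0 selects choice1 and
# label 1 selects choice2. This convention is an inference, not
# something the card states outright.
def gold_choice(example: dict) -> str:
    return (example["choice1"], example["choice2"])[example["label"]]

copa_en = {  # the copa.en validation instance shown above
    "choice1": "I swept the floor in the unoccupied room.",
    "choice2": "I shut off the light in the unoccupied room.",
    "label": 1,
    "premise": "I wanted to conserve energy.",
    "question": "effect",
}
print(gold_choice(copa_en))  # -> I shut off the light in the unoccupied room.
```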
### Data Splits #### actsa-sc.te | |train|validation|test| |-----------|----:|---------:|---:| |actsa-sc.te| 4328| 541| 541| #### bbca.hi | |train|test| |-------|----:|---:| |bbca.hi| 3467| 866| #### copa.en | |train|validation|test| |-------|----:|---------:|---:| |copa.en| 400| 100| 500| #### copa.gu | |train|validation|test| |-------|----:|---------:|---:| |copa.gu| 362| 88| 448| #### copa.hi | |train|validation|test| |-------|----:|---------:|---:| |copa.hi| 362| 88| 449| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{kakwani-etal-2020-indicnlpsuite, title = "{I}ndic{NLPS}uite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for {I}ndian Languages", author = "Kakwani, Divyanshu and Kunchukuttan, Anoop and Golla, Satish and N.C., Gokul and Bhattacharyya, Avik and Khapra, Mitesh M. and Kumar, Pratyush", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.findings-emnlp.445", doi = "10.18653/v1/2020.findings-emnlp.445", pages = "4948--4961", } @inproceedings{Levesque2011TheWS, title={The Winograd Schema Challenge}, author={H. Levesque and E. Davis and L. 
Morgenstern}, booktitle={KR}, year={2011} } ``` ### Contributions Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
indic_glue
[ "task_categories:text-classification", "task_categories:token-classification", "task_categories:multiple-choice", "task_ids:topic-classification", "task_ids:natural-language-inference", "task_ids:sentiment-analysis", "task_ids:semantic-similarity-scoring", "task_ids:named-entity-recognition", "task_ids:multiple-choice-qa", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended|other", "language:as", "language:bn", "language:en", "language:gu", "language:hi", "language:kn", "language:ml", "language:mr", "language:or", "language:pa", "language:ta", "language:te", "license:other", "discourse-mode-classification", "paraphrase-identification", "cross-lingual-similarity", "headline-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["as", "bn", "en", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other"], "task_categories": ["text-classification", "token-classification", "multiple-choice"], "task_ids": ["topic-classification", "natural-language-inference", "sentiment-analysis", "semantic-similarity-scoring", "named-entity-recognition", "multiple-choice-qa"], "pretty_name": "IndicGLUE", "tags": ["discourse-mode-classification", "paraphrase-identification", "cross-lingual-similarity", "headline-classification"], "dataset_info": [{"config_name": "actsa-sc.te", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}], "splits": [{"name": "train", "num_bytes": 1370907, "num_examples": 4328}, {"name": "validation", "num_bytes": 166089, "num_examples": 541}, {"name": "test", "num_bytes": 168291, "num_examples": 541}], "download_size": 727630, "dataset_size": 1705287}, {"config_name": "bbca.hi", "features": [{"name": "label", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22126205, "num_examples": 3467}, {"name": "test", "num_bytes": 5501148, "num_examples": 866}], "download_size": 10349015, "dataset_size": 27627353}, {"config_name": "copa.en", "features": [{"name": "premise", "dtype": "string"}, {"name": "choice1", "dtype": "string"}, {"name": "choice2", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "label", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 46033, "num_examples": 400}, {"name": "validation", "num_bytes": 11679, "num_examples": 100}, {"name": "test", "num_bytes": 55846, "num_examples": 500}], "download_size": 79431, "dataset_size": 113558}, {"config_name": "copa.gu", "features": [{"name": "premise", 
"dtype": "string"}, {"name": "choice1", "dtype": "string"}, {"name": "choice2", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "label", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 92097, "num_examples": 362}, {"name": "validation", "num_bytes": 23450, "num_examples": 88}, {"name": "test", "num_bytes": 109997, "num_examples": 448}], "download_size": 107668, "dataset_size": 225544}, {"config_name": "copa.hi", "features": [{"name": "premise", "dtype": "string"}, {"name": "choice1", "dtype": "string"}, {"name": "choice2", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "label", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 93376, "num_examples": 362}, {"name": "validation", "num_bytes": 23559, "num_examples": 88}, {"name": "test", "num_bytes": 112830, "num_examples": 449}], "download_size": 104233, "dataset_size": 229765}, {"config_name": "copa.mr", "features": [{"name": "premise", "dtype": "string"}, {"name": "choice1", "dtype": "string"}, {"name": "choice2", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "label", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 93441, "num_examples": 362}, {"name": "validation", "num_bytes": 23874, "num_examples": 88}, {"name": "test", "num_bytes": 112055, "num_examples": 449}], "download_size": 105962, "dataset_size": 229370}, {"config_name": "csqa.as", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 3800523, "num_examples": 2942}], "download_size": 1390423, "dataset_size": 3800523}, {"config_name": "csqa.bn", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": 
"title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 54671018, "num_examples": 38845}], "download_size": 19648180, "dataset_size": 54671018}, {"config_name": "csqa.gu", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 29131607, "num_examples": 22861}], "download_size": 6027825, "dataset_size": 29131607}, {"config_name": "csqa.hi", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 40409347, "num_examples": 35140}], "download_size": 14711258, "dataset_size": 40409347}, {"config_name": "csqa.kn", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 21199816, "num_examples": 13666}], "download_size": 7669655, "dataset_size": 21199816}, {"config_name": "csqa.ml", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 47220836, "num_examples": 26537}], "download_size": 17382215, "dataset_size": 47220836}, {"config_name": 
"csqa.mr", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 13667174, "num_examples": 11370}], "download_size": 5072738, "dataset_size": 13667174}, {"config_name": "csqa.or", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 2562365, "num_examples": 1975}], "download_size": 948046, "dataset_size": 2562365}, {"config_name": "csqa.pa", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 5806097, "num_examples": 5667}], "download_size": 2194109, "dataset_size": 5806097}, {"config_name": "csqa.ta", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 61868481, "num_examples": 38590}], "download_size": 20789467, "dataset_size": 61868481}, {"config_name": "csqa.te", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "out_of_context_options", "sequence": "string"}], 
"splits": [{"name": "test", "num_bytes": 58784997, "num_examples": 41338}], "download_size": 17447618, "dataset_size": 58784997}, {"config_name": "cvit-mkb-clsr.en-bn", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1990957, "num_examples": 5522}], "download_size": 945551, "dataset_size": 1990957}, {"config_name": "cvit-mkb-clsr.en-gu", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2303377, "num_examples": 6463}], "download_size": 1093313, "dataset_size": 2303377}, {"config_name": "cvit-mkb-clsr.en-hi", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1855989, "num_examples": 5169}], "download_size": 890609, "dataset_size": 1855989}, {"config_name": "cvit-mkb-clsr.en-ml", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1990089, "num_examples": 4886}], "download_size": 868956, "dataset_size": 1990089}, {"config_name": "cvit-mkb-clsr.en-mr", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2130601, "num_examples": 5760}], "download_size": 993961, "dataset_size": 2130601}, {"config_name": "cvit-mkb-clsr.en-or", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 274873, "num_examples": 752}], "download_size": 134334, "dataset_size": 274873}, {"config_name": "cvit-mkb-clsr.en-ta", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2565178, "num_examples": 5637}], "download_size": 1091653, "dataset_size": 2565178}, {"config_name": 
"cvit-mkb-clsr.en-te", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1771129, "num_examples": 5049}], "download_size": 840410, "dataset_size": 1771129}, {"config_name": "cvit-mkb-clsr.en-ur", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 288430, "num_examples": 1006}], "download_size": 166129, "dataset_size": 288430}, {"config_name": "iitp-mr.hi", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 6704905, "num_examples": 2480}, {"name": "validation", "num_bytes": 822218, "num_examples": 310}, {"name": "test", "num_bytes": 702373, "num_examples": 310}], "download_size": 3151762, "dataset_size": 8229496}, {"config_name": "iitp-pr.hi", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 945589, "num_examples": 4182}, {"name": "validation", "num_bytes": 120100, "num_examples": 523}, {"name": "test", "num_bytes": 121910, "num_examples": 523}], "download_size": 509822, "dataset_size": 1187599}, {"config_name": "inltkh.gu", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entertainment", "1": "business", "2": "tech", "3": "sports", "4": "state", "5": "spirituality", "6": "tamil-cinema", "7": "positive", "8": "negative", "9": "neutral"}}}}], "splits": [{"name": "train", "num_bytes": 883063, "num_examples": 5269}, {"name": "validation", "num_bytes": 111201, "num_examples": 659}, {"name": "test", "num_bytes": 110757, "num_examples": 659}], "download_size": 515094, "dataset_size": 1105021}, {"config_name": "inltkh.ml", "features": 
[{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entertainment", "1": "business", "2": "tech", "3": "sports", "4": "state", "5": "spirituality", "6": "tamil-cinema", "7": "positive", "8": "negative", "9": "neutral"}}}}], "splits": [{"name": "train", "num_bytes": 1108145, "num_examples": 5036}, {"name": "validation", "num_bytes": 140055, "num_examples": 630}, {"name": "test", "num_bytes": 138847, "num_examples": 630}], "download_size": 571019, "dataset_size": 1387047}, {"config_name": "inltkh.mr", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entertainment", "1": "business", "2": "tech", "3": "sports", "4": "state", "5": "spirituality", "6": "tamil-cinema", "7": "positive", "8": "negative", "9": "neutral"}}}}], "splits": [{"name": "train", "num_bytes": 1462614, "num_examples": 9672}, {"name": "validation", "num_bytes": 180306, "num_examples": 1210}, {"name": "test", "num_bytes": 180558, "num_examples": 1210}], "download_size": 840304, "dataset_size": 1823478}, {"config_name": "inltkh.ta", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entertainment", "1": "business", "2": "tech", "3": "sports", "4": "state", "5": "spirituality", "6": "tamil-cinema", "7": "positive", "8": "negative", "9": "neutral"}}}}], "splits": [{"name": "train", "num_bytes": 2659569, "num_examples": 5346}, {"name": "validation", "num_bytes": 316083, "num_examples": 669}, {"name": "test", "num_bytes": 320465, "num_examples": 669}], "download_size": 1271262, "dataset_size": 3296117}, {"config_name": "inltkh.te", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entertainment", "1": "business", "2": "tech", "3": "sports", "4": "state", "5": "spirituality", "6": "tamil-cinema", "7": "positive", "8": "negative", "9": "neutral"}}}}], "splits": [{"name": "train", 
"num_bytes": 1361667, "num_examples": 4328}, {"name": "validation", "num_bytes": 170471, "num_examples": 541}, {"name": "test", "num_bytes": 173149, "num_examples": 541}], "download_size": 726293, "dataset_size": 1705287}, {"config_name": "md.hi", "features": [{"name": "sentence", "dtype": "string"}, {"name": "discourse_mode", "dtype": "string"}, {"name": "story_number", "dtype": "int32"}, {"name": "id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 1672109, "num_examples": 7974}, {"name": "validation", "num_bytes": 211187, "num_examples": 997}, {"name": "test", "num_bytes": 210175, "num_examples": 997}], "download_size": 939801, "dataset_size": 2093471}, {"config_name": "sna.bn", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "kolkata", "1": "state", "2": "national", "3": "sports", "4": "entertainment", "5": "international"}}}}], "splits": [{"name": "train", "num_bytes": 46070046, "num_examples": 11284}, {"name": "validation", "num_bytes": 5648126, "num_examples": 1411}, {"name": "test", "num_bytes": 5799979, "num_examples": 1411}], "download_size": 21415940, "dataset_size": 57518151}, {"config_name": "wiki-ner.as", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 374983, "num_examples": 1021}, {"name": "validation", "num_bytes": 49312, "num_examples": 157}, {"name": "test", "num_bytes": 50456, "num_examples": 160}], "download_size": 72919, "dataset_size": 474751}, {"config_name": "wiki-ner.bn", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": 
"additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 7502824, "num_examples": 20223}, {"name": "validation", "num_bytes": 988683, "num_examples": 2985}, {"name": "test", "num_bytes": 985941, "num_examples": 2690}], "download_size": 1278219, "dataset_size": 9477448}, {"config_name": "wiki-ner.gu", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 1571588, "num_examples": 2343}, {"name": "validation", "num_bytes": 192804, "num_examples": 297}, {"name": "test", "num_bytes": 197877, "num_examples": 255}], "download_size": 329660, "dataset_size": 1962269}, {"config_name": "wiki-ner.hi", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 3762505, "num_examples": 9463}, {"name": "validation", "num_bytes": 468678, "num_examples": 1114}, {"name": "test", "num_bytes": 475253, "num_examples": 1256}], "download_size": 948132, "dataset_size": 4706436}, {"config_name": "wiki-ner.kn", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 1352027, "num_examples": 2679}, {"name": "validation", "num_bytes": 179538, "num_examples": 412}, {"name": "test", "num_bytes": 180791, "num_examples": 476}], "download_size": 421877, "dataset_size": 
1712356}, {"config_name": "wiki-ner.ml", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 7678887, "num_examples": 15620}, {"name": "validation", "num_bytes": 969947, "num_examples": 2067}, {"name": "test", "num_bytes": 991102, "num_examples": 2042}], "download_size": 2390442, "dataset_size": 9639936}, {"config_name": "wiki-ner.mr", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 5431489, "num_examples": 12151}, {"name": "validation", "num_bytes": 701637, "num_examples": 1498}, {"name": "test", "num_bytes": 655682, "num_examples": 1329}], "download_size": 1410663, "dataset_size": 6788808}, {"config_name": "wiki-ner.or", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 493758, "num_examples": 1077}, {"name": "validation", "num_bytes": 58568, "num_examples": 132}, {"name": "test", "num_bytes": 62211, "num_examples": 153}], "download_size": 102783, "dataset_size": 614537}, {"config_name": "wiki-ner.pa", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": 
"string"}}], "splits": [{"name": "train", "num_bytes": 520244, "num_examples": 1408}, {"name": "validation", "num_bytes": 61170, "num_examples": 186}, {"name": "test", "num_bytes": 61788, "num_examples": 179}], "download_size": 149727, "dataset_size": 643202}, {"config_name": "wiki-ner.ta", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 10117080, "num_examples": 20466}, {"name": "validation", "num_bytes": 1267188, "num_examples": 2586}, {"name": "test", "num_bytes": 1321626, "num_examples": 2611}], "download_size": 2819083, "dataset_size": 12705894}, {"config_name": "wiki-ner.te", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-LOC", "1": "B-ORG", "2": "B-PER", "3": "I-LOC", "4": "I-ORG", "5": "I-PER", "6": "O"}}}}, {"name": "additional_info", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 3881211, "num_examples": 7978}, {"name": "validation", "num_bytes": 458509, "num_examples": 841}, {"name": "test", "num_bytes": 507806, "num_examples": 1110}], "download_size": 1006881, "dataset_size": 4847526}, {"config_name": "wnli.en", "features": [{"name": "hypothesis", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_entailment", "1": "entailment", "2": "None"}}}}], "splits": [{"name": "train", "num_bytes": 104569, "num_examples": 635}, {"name": "validation", "num_bytes": 11878, "num_examples": 71}, {"name": "test", "num_bytes": 37297, "num_examples": 146}], "download_size": 57667, "dataset_size": 153744}, {"config_name": "wnli.gu", "features": [{"name": "hypothesis", "dtype": "string"}, {"name": "premise", "dtype": "string"}, 
{"name": "label", "dtype": {"class_label": {"names": {"0": "not_entailment", "1": "entailment", "2": "None"}}}}], "splits": [{"name": "train", "num_bytes": 251554, "num_examples": 635}, {"name": "validation", "num_bytes": 28175, "num_examples": 71}, {"name": "test", "num_bytes": 94578, "num_examples": 146}], "download_size": 98032, "dataset_size": 374307}, {"config_name": "wnli.hi", "features": [{"name": "hypothesis", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_entailment", "1": "entailment", "2": "None"}}}}], "splits": [{"name": "train", "num_bytes": 253334, "num_examples": 635}, {"name": "validation", "num_bytes": 28676, "num_examples": 71}, {"name": "test", "num_bytes": 90823, "num_examples": 146}], "download_size": 99450, "dataset_size": 372833}, {"config_name": "wnli.mr", "features": [{"name": "hypothesis", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_entailment", "1": "entailment", "2": "None"}}}}], "splits": [{"name": "train", "num_bytes": 256649, "num_examples": 635}, {"name": "validation", "num_bytes": 29218, "num_examples": 71}, {"name": "test", "num_bytes": 97128, "num_examples": 146}], "download_size": 103774, "dataset_size": 382995}, {"config_name": "wstp.as", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13581336, "num_examples": 5000}, {"name": "validation", "num_bytes": 1698968, "num_examples": 625}, {"name": "test", "num_bytes": 1697650, "num_examples": 626}], "download_size": 6959458, "dataset_size": 16977954}, {"config_name": "wstp.bn", "features": [{"name": "sectionText", "dtype": "string"}, {"name": 
"correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 143340457, "num_examples": 47580}, {"name": "validation", "num_bytes": 17759236, "num_examples": 5947}, {"name": "test", "num_bytes": 17633865, "num_examples": 5948}], "download_size": 69145372, "dataset_size": 178733558}, {"config_name": "wstp.gu", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39353464, "num_examples": 10004}, {"name": "validation", "num_bytes": 4887752, "num_examples": 1251}, {"name": "test", "num_bytes": 4699158, "num_examples": 1251}], "download_size": 19763249, "dataset_size": 48940374}, {"config_name": "wstp.hi", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 158529578, "num_examples": 44069}, {"name": "validation", "num_bytes": 19371904, "num_examples": 5509}, {"name": "test", "num_bytes": 19593001, "num_examples": 5509}], "download_size": 77868574, "dataset_size": 197494483}, {"config_name": "wstp.kn", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": 
[{"name": "train", "num_bytes": 139950313, "num_examples": 35379}, {"name": "validation", "num_bytes": 17789782, "num_examples": 4422}, {"name": "test", "num_bytes": 17897031, "num_examples": 4423}], "download_size": 67719504, "dataset_size": 175637126}, {"config_name": "wstp.ml", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 88360504, "num_examples": 27527}, {"name": "validation", "num_bytes": 11193340, "num_examples": 3441}, {"name": "test", "num_bytes": 11150914, "num_examples": 3441}], "download_size": 42336357, "dataset_size": 110704758}, {"config_name": "wstp.mr", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28302341, "num_examples": 10446}, {"name": "validation", "num_bytes": 3328798, "num_examples": 1306}, {"name": "test", "num_bytes": 3631684, "num_examples": 1306}], "download_size": 13886208, "dataset_size": 35262823}, {"config_name": "wstp.or", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10900006, "num_examples": 4015}, {"name": "validation", "num_bytes": 1264935, "num_examples": 502}, {"name": "test", "num_bytes": 1344652, "num_examples": 502}], "download_size": 5319128, "dataset_size": 
13509593}, {"config_name": "wstp.pa", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22189730, "num_examples": 8772}, {"name": "validation", "num_bytes": 2789186, "num_examples": 1097}, {"name": "test", "num_bytes": 2685767, "num_examples": 1097}], "download_size": 11201369, "dataset_size": 27664683}, {"config_name": "wstp.ta", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 151929218, "num_examples": 48940}, {"name": "validation", "num_bytes": 18817167, "num_examples": 6117}, {"name": "test", "num_bytes": 18815071, "num_examples": 6118}], "download_size": 68699092, "dataset_size": 189561456}, {"config_name": "wstp.te", "features": [{"name": "sectionText", "dtype": "string"}, {"name": "correctTitle", "dtype": "string"}, {"name": "titleA", "dtype": "string"}, {"name": "titleB", "dtype": "string"}, {"name": "titleC", "dtype": "string"}, {"name": "titleD", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 151696691, "num_examples": 80000}, {"name": "validation", "num_bytes": 19003169, "num_examples": 10000}, {"name": "test", "num_bytes": 18991913, "num_examples": 10000}], "download_size": 50158580, "dataset_size": 189691773}], "configs": [{"config_name": "actsa-sc.te", "data_files": [{"split": "train", "path": "actsa-sc.te/train-*"}, {"split": "validation", "path": "actsa-sc.te/validation-*"}, {"split": "test", "path": "actsa-sc.te/test-*"}]}, 
{"config_name": "bbca.hi", "data_files": [{"split": "train", "path": "bbca.hi/train-*"}, {"split": "test", "path": "bbca.hi/test-*"}]}, {"config_name": "copa.en", "data_files": [{"split": "train", "path": "copa.en/train-*"}, {"split": "validation", "path": "copa.en/validation-*"}, {"split": "test", "path": "copa.en/test-*"}]}, {"config_name": "copa.gu", "data_files": [{"split": "train", "path": "copa.gu/train-*"}, {"split": "validation", "path": "copa.gu/validation-*"}, {"split": "test", "path": "copa.gu/test-*"}]}, {"config_name": "copa.hi", "data_files": [{"split": "train", "path": "copa.hi/train-*"}, {"split": "validation", "path": "copa.hi/validation-*"}, {"split": "test", "path": "copa.hi/test-*"}]}, {"config_name": "copa.mr", "data_files": [{"split": "train", "path": "copa.mr/train-*"}, {"split": "validation", "path": "copa.mr/validation-*"}, {"split": "test", "path": "copa.mr/test-*"}]}, {"config_name": "csqa.as", "data_files": [{"split": "test", "path": "csqa.as/test-*"}]}, {"config_name": "csqa.bn", "data_files": [{"split": "test", "path": "csqa.bn/test-*"}]}, {"config_name": "csqa.gu", "data_files": [{"split": "test", "path": "csqa.gu/test-*"}]}, {"config_name": "csqa.hi", "data_files": [{"split": "test", "path": "csqa.hi/test-*"}]}, {"config_name": "csqa.kn", "data_files": [{"split": "test", "path": "csqa.kn/test-*"}]}, {"config_name": "csqa.ml", "data_files": [{"split": "test", "path": "csqa.ml/test-*"}]}, {"config_name": "csqa.mr", "data_files": [{"split": "test", "path": "csqa.mr/test-*"}]}, {"config_name": "csqa.or", "data_files": [{"split": "test", "path": "csqa.or/test-*"}]}, {"config_name": "csqa.pa", "data_files": [{"split": "test", "path": "csqa.pa/test-*"}]}, {"config_name": "csqa.ta", "data_files": [{"split": "test", "path": "csqa.ta/test-*"}]}, {"config_name": "csqa.te", "data_files": [{"split": "test", "path": "csqa.te/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-bn", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-bn/test-*"}]}, 
{"config_name": "cvit-mkb-clsr.en-gu", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-gu/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-hi", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-hi/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-ml", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-ml/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-mr", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-mr/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-or", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-or/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-ta", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-ta/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-te", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-te/test-*"}]}, {"config_name": "cvit-mkb-clsr.en-ur", "data_files": [{"split": "test", "path": "cvit-mkb-clsr.en-ur/test-*"}]}, {"config_name": "iitp-mr.hi", "data_files": [{"split": "train", "path": "iitp-mr.hi/train-*"}, {"split": "validation", "path": "iitp-mr.hi/validation-*"}, {"split": "test", "path": "iitp-mr.hi/test-*"}]}, {"config_name": "iitp-pr.hi", "data_files": [{"split": "train", "path": "iitp-pr.hi/train-*"}, {"split": "validation", "path": "iitp-pr.hi/validation-*"}, {"split": "test", "path": "iitp-pr.hi/test-*"}]}, {"config_name": "inltkh.gu", "data_files": [{"split": "train", "path": "inltkh.gu/train-*"}, {"split": "validation", "path": "inltkh.gu/validation-*"}, {"split": "test", "path": "inltkh.gu/test-*"}]}, {"config_name": "inltkh.ml", "data_files": [{"split": "train", "path": "inltkh.ml/train-*"}, {"split": "validation", "path": "inltkh.ml/validation-*"}, {"split": "test", "path": "inltkh.ml/test-*"}]}, {"config_name": "inltkh.mr", "data_files": [{"split": "train", "path": "inltkh.mr/train-*"}, {"split": "validation", "path": "inltkh.mr/validation-*"}, {"split": "test", "path": "inltkh.mr/test-*"}]}, {"config_name": "inltkh.ta", "data_files": [{"split": "train", "path": 
"inltkh.ta/train-*"}, {"split": "validation", "path": "inltkh.ta/validation-*"}, {"split": "test", "path": "inltkh.ta/test-*"}]}, {"config_name": "inltkh.te", "data_files": [{"split": "train", "path": "inltkh.te/train-*"}, {"split": "validation", "path": "inltkh.te/validation-*"}, {"split": "test", "path": "inltkh.te/test-*"}]}, {"config_name": "md.hi", "data_files": [{"split": "train", "path": "md.hi/train-*"}, {"split": "validation", "path": "md.hi/validation-*"}, {"split": "test", "path": "md.hi/test-*"}]}, {"config_name": "sna.bn", "data_files": [{"split": "train", "path": "sna.bn/train-*"}, {"split": "validation", "path": "sna.bn/validation-*"}, {"split": "test", "path": "sna.bn/test-*"}]}, {"config_name": "wiki-ner.as", "data_files": [{"split": "train", "path": "wiki-ner.as/train-*"}, {"split": "validation", "path": "wiki-ner.as/validation-*"}, {"split": "test", "path": "wiki-ner.as/test-*"}]}, {"config_name": "wiki-ner.bn", "data_files": [{"split": "train", "path": "wiki-ner.bn/train-*"}, {"split": "validation", "path": "wiki-ner.bn/validation-*"}, {"split": "test", "path": "wiki-ner.bn/test-*"}]}, {"config_name": "wiki-ner.gu", "data_files": [{"split": "train", "path": "wiki-ner.gu/train-*"}, {"split": "validation", "path": "wiki-ner.gu/validation-*"}, {"split": "test", "path": "wiki-ner.gu/test-*"}]}, {"config_name": "wiki-ner.hi", "data_files": [{"split": "train", "path": "wiki-ner.hi/train-*"}, {"split": "validation", "path": "wiki-ner.hi/validation-*"}, {"split": "test", "path": "wiki-ner.hi/test-*"}]}, {"config_name": "wiki-ner.kn", "data_files": [{"split": "train", "path": "wiki-ner.kn/train-*"}, {"split": "validation", "path": "wiki-ner.kn/validation-*"}, {"split": "test", "path": "wiki-ner.kn/test-*"}]}, {"config_name": "wiki-ner.ml", "data_files": [{"split": "train", "path": "wiki-ner.ml/train-*"}, {"split": "validation", "path": "wiki-ner.ml/validation-*"}, {"split": "test", "path": "wiki-ner.ml/test-*"}]}, {"config_name": "wiki-ner.mr", 
"data_files": [{"split": "train", "path": "wiki-ner.mr/train-*"}, {"split": "validation", "path": "wiki-ner.mr/validation-*"}, {"split": "test", "path": "wiki-ner.mr/test-*"}]}, {"config_name": "wiki-ner.or", "data_files": [{"split": "train", "path": "wiki-ner.or/train-*"}, {"split": "validation", "path": "wiki-ner.or/validation-*"}, {"split": "test", "path": "wiki-ner.or/test-*"}]}, {"config_name": "wiki-ner.pa", "data_files": [{"split": "train", "path": "wiki-ner.pa/train-*"}, {"split": "validation", "path": "wiki-ner.pa/validation-*"}, {"split": "test", "path": "wiki-ner.pa/test-*"}]}, {"config_name": "wiki-ner.ta", "data_files": [{"split": "train", "path": "wiki-ner.ta/train-*"}, {"split": "validation", "path": "wiki-ner.ta/validation-*"}, {"split": "test", "path": "wiki-ner.ta/test-*"}]}, {"config_name": "wiki-ner.te", "data_files": [{"split": "train", "path": "wiki-ner.te/train-*"}, {"split": "validation", "path": "wiki-ner.te/validation-*"}, {"split": "test", "path": "wiki-ner.te/test-*"}]}, {"config_name": "wnli.en", "data_files": [{"split": "train", "path": "wnli.en/train-*"}, {"split": "validation", "path": "wnli.en/validation-*"}, {"split": "test", "path": "wnli.en/test-*"}]}, {"config_name": "wnli.gu", "data_files": [{"split": "train", "path": "wnli.gu/train-*"}, {"split": "validation", "path": "wnli.gu/validation-*"}, {"split": "test", "path": "wnli.gu/test-*"}]}, {"config_name": "wnli.hi", "data_files": [{"split": "train", "path": "wnli.hi/train-*"}, {"split": "validation", "path": "wnli.hi/validation-*"}, {"split": "test", "path": "wnli.hi/test-*"}]}, {"config_name": "wnli.mr", "data_files": [{"split": "train", "path": "wnli.mr/train-*"}, {"split": "validation", "path": "wnli.mr/validation-*"}, {"split": "test", "path": "wnli.mr/test-*"}]}, {"config_name": "wstp.as", "data_files": [{"split": "train", "path": "wstp.as/train-*"}, {"split": "validation", "path": "wstp.as/validation-*"}, {"split": "test", "path": "wstp.as/test-*"}]}, {"config_name": 
"wstp.bn", "data_files": [{"split": "train", "path": "wstp.bn/train-*"}, {"split": "validation", "path": "wstp.bn/validation-*"}, {"split": "test", "path": "wstp.bn/test-*"}]}, {"config_name": "wstp.gu", "data_files": [{"split": "train", "path": "wstp.gu/train-*"}, {"split": "validation", "path": "wstp.gu/validation-*"}, {"split": "test", "path": "wstp.gu/test-*"}]}, {"config_name": "wstp.hi", "data_files": [{"split": "train", "path": "wstp.hi/train-*"}, {"split": "validation", "path": "wstp.hi/validation-*"}, {"split": "test", "path": "wstp.hi/test-*"}]}, {"config_name": "wstp.kn", "data_files": [{"split": "train", "path": "wstp.kn/train-*"}, {"split": "validation", "path": "wstp.kn/validation-*"}, {"split": "test", "path": "wstp.kn/test-*"}]}, {"config_name": "wstp.ml", "data_files": [{"split": "train", "path": "wstp.ml/train-*"}, {"split": "validation", "path": "wstp.ml/validation-*"}, {"split": "test", "path": "wstp.ml/test-*"}]}, {"config_name": "wstp.mr", "data_files": [{"split": "train", "path": "wstp.mr/train-*"}, {"split": "validation", "path": "wstp.mr/validation-*"}, {"split": "test", "path": "wstp.mr/test-*"}]}, {"config_name": "wstp.or", "data_files": [{"split": "train", "path": "wstp.or/train-*"}, {"split": "validation", "path": "wstp.or/validation-*"}, {"split": "test", "path": "wstp.or/test-*"}]}, {"config_name": "wstp.pa", "data_files": [{"split": "train", "path": "wstp.pa/train-*"}, {"split": "validation", "path": "wstp.pa/validation-*"}, {"split": "test", "path": "wstp.pa/test-*"}]}, {"config_name": "wstp.ta", "data_files": [{"split": "train", "path": "wstp.ta/train-*"}, {"split": "validation", "path": "wstp.ta/validation-*"}, {"split": "test", "path": "wstp.ta/test-*"}]}, {"config_name": "wstp.te", "data_files": [{"split": "train", "path": "wstp.te/train-*"}, {"split": "validation", "path": "wstp.te/validation-*"}, {"split": "test", "path": "wstp.te/test-*"}]}]}
2024-01-04T12:36:30+00:00
[]
[ "as", "bn", "en", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te" ]
TAGS #task_categories-text-classification #task_categories-token-classification #task_categories-multiple-choice #task_ids-topic-classification #task_ids-natural-language-inference #task_ids-sentiment-analysis #task_ids-semantic-similarity-scoring #task_ids-named-entity-recognition #task_ids-multiple-choice-qa #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-extended|other #language-Assamese #language-Bengali #language-English #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-other #discourse-mode-classification #paraphrase-identification #cross-lingual-similarity #headline-classification #region-us
Dataset Card for "indic\_glue"
==============================

Table of Contents
-----------------

* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository:
* Paper: IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages
* Point of Contact:
* Size of downloaded dataset files: 3.51 GB
* Size of the generated dataset: 1.65 GB
* Total amount of disk used: 5.16 GB

### Dataset Summary

IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide variety of tasks and covers 11 major Indian languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te.

The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence-pair classification, we construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict whether the sentence with the pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is no systematic correspondence between a model's score on this task and its score on the unconverted original task. We call the converted dataset WNLI (Winograd NLI). This dataset has been translated and publicly released for 3 Indian languages by AI4Bharat.

### Supported Tasks and Leaderboards

### Languages

Dataset Structure
-----------------

### Data Instances

#### URL

* Size of downloaded dataset files: 0.38 MB
* Size of the generated dataset: 1.71 MB
* Total amount of disk used: 2.09 MB

An example of 'validation' looks as follows.

#### URL

* Size of downloaded dataset files: 5.77 MB
* Size of the generated dataset: 27.63 MB
* Total amount of disk used: 33.40 MB

An example of 'train' looks as follows.

#### URL

* Size of downloaded dataset files: 0.75 MB
* Size of the generated dataset: 0.12 MB
* Total amount of disk used: 0.87 MB

An example of 'validation' looks as follows.

#### URL

* Size of downloaded dataset files: 0.75 MB
* Size of the generated dataset: 0.23 MB
* Total amount of disk used: 0.99 MB

An example of 'train' looks as follows.

#### URL

* Size of downloaded dataset files: 0.75 MB
* Size of the generated dataset: 0.23 MB
* Total amount of disk used: 0.99 MB

An example of 'validation' looks as follows.

### Data Fields

The data fields are the same among all splits.

#### URL

* 'text': a 'string' feature.
* 'label': a classification label, with possible values including 'positive' (0), 'negative' (1).

#### URL

* 'label': a 'string' feature.
* 'text': a 'string' feature.

#### URL

* 'premise': a 'string' feature.
* 'choice1': a 'string' feature.
* 'choice2': a 'string' feature.
* 'question': a 'string' feature.
* 'label': an 'int32' feature.

#### URL

* 'premise': a 'string' feature.
* 'choice1': a 'string' feature.
* 'choice2': a 'string' feature.
* 'question': a 'string' feature.
* 'label': an 'int32' feature.

#### URL

* 'premise': a 'string' feature.
* 'choice1': a 'string' feature.
* 'choice2': a 'string' feature.
* 'question': a 'string' feature.
* 'label': an 'int32' feature.

### Data Splits

#### URL

#### URL

#### URL

#### URL

#### URL

Dataset Creation
----------------

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

Additional Information
----------------------

### Dataset Curators

### Licensing Information

### Contributions

Thanks to @sumanthd17 for adding this dataset.
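The Winograd-to-NLI conversion described in the Dataset Summary can be sketched as follows. This is an illustrative sketch only, not the authors' actual pipeline: the helper name, example sentence, and candidate referent phrases are invented for the demonstration.

```python
# Sketch of converting a Winograd-style example into entailment pairs:
# the premise is the original sentence, and each hypothesis replaces the
# ambiguous pronoun span with one candidate referent. A model then
# predicts whether the premise entails each hypothesis.

def winograd_to_nli_pairs(sentence, ambiguous_span, candidate_spans):
    """Build one (premise, hypothesis) pair per candidate referent."""
    pairs = []
    for candidate in candidate_spans:
        # Substitute only the first occurrence of the ambiguous span.
        hypothesis = sentence.replace(ambiguous_span, candidate, 1)
        pairs.append({"premise": sentence, "hypothesis": hypothesis})
    return pairs


pairs = winograd_to_nli_pairs(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "it is too big",
    ["the trophy is too big", "the suitcase is too big"],
)
for pair in pairs:
    print(pair["hypothesis"])
```

Only the first hypothesis is entailed by the premise, so this example would yield one 'entailment' pair and one 'not_entailment' pair.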
Also, due to a data quirk, the development set is adversarial:\nhypotheses are sometimes shared between training and development examples, so if a model memorizes the\ntraining examples, they will predict the wrong label on corresponding development set\nexample. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence\nbetween a model's score on this task and its score on the unconverted original task. We\ncall converted dataset WNLI (Winograd NLI). This dataset is translated and publicly released for 3\nIndian languages by AI4Bharat.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### URL\n\n\n* Size of downloaded dataset files: 0.38 MB\n* Size of the generated dataset: 1.71 MB\n* Total amount of disk used: 2.09 MB\n\n\nAn example of 'validation' looks as follows.", "#### URL\n\n\n* Size of downloaded dataset files: 5.77 MB\n* Size of the generated dataset: 27.63 MB\n* Total amount of disk used: 33.40 MB\n\n\nAn example of 'train' looks as follows.", "#### URL\n\n\n* Size of downloaded dataset files: 0.75 MB\n* Size of the generated dataset: 0.12 MB\n* Total amount of disk used: 0.87 MB\n\n\nAn example of 'validation' looks as follows.", "#### URL\n\n\n* Size of downloaded dataset files: 0.75 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.99 MB\n\n\nAn example of 'train' looks as follows.", "#### URL\n\n\n* Size of downloaded dataset files: 0.75 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.99 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### URL\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'positive' (0), 'negative' (1).", "#### URL\n\n\n* 'label': a 'string' feature.\n* 'text': a 'string' feature.", "#### URL\n\n\n* 'premise': a 'string' feature.\n* 
'choice1': a 'string' feature.\n* 'choice2': a 'string' feature.\n* 'question': a 'string' feature.\n* 'label': a 'int32' feature.", "#### URL\n\n\n* 'premise': a 'string' feature.\n* 'choice1': a 'string' feature.\n* 'choice2': a 'string' feature.\n* 'question': a 'string' feature.\n* 'label': a 'int32' feature.", "#### URL\n\n\n* 'premise': a 'string' feature.\n* 'choice1': a 'string' feature.\n* 'choice2': a 'string' feature.\n* 'question': a 'string' feature.\n* 'label': a 'int32' feature.", "### Data Splits", "#### URL", "#### URL", "#### URL", "#### URL", "#### URL\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @sumanthd17 for adding this dataset." ]
3c976110fc13596004dc36279fc4c453ff2c18aa
# Dataset Card for IndoNLI ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [GitHub](https://github.com/ir-nlp-csui/indonli) - **Paper:** [EMNLP 2021](https://aclanthology.org/2021.emnlp-main.821/) - **Point of Contact:** [GitHub](https://github.com/ir-nlp-csui/indonli) ### Dataset Summary IndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian. IndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. ### Supported Tasks and Leaderboards - Natural Language Inference for Indonesian ### Languages Indonesian ## Dataset Structure ### Data Instances An example of `train` looks as follows. 
``` { "premise": "Keindahan alam yang terdapat di Gunung Batu Jonggol ini dapat Anda manfaatkan sebagai objek fotografi yang cantik.", "hypothesis": "Keindahan alam tidak dapat difoto.", "label": 2 } ``` ### Data Fields The data fields are: - `premise`: a `string` feature - `hypothesis`: a `string` feature - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). ### Data Splits The data is split across `train`, `valid`, `test_lay`, and `test_expert`. `test_expert` is written by expert annotators, whereas the rest are written by lay annotators. | split | # examples | |----------|-------:| |train| 10330| |valid| 2197| |test_lay| 2201| |test_expert| 2984| A small subset of `test_expert` is used as a diagnostic tool. For more info, please visit https://github.com/ir-nlp-csui/indonli ## Dataset Creation ### Curation Rationale Indonesian NLP is considered under-resourced. Up until now, there is no publicly available human-annotated NLI dataset for Indonesian. ### Source Data #### Initial Data Collection and Normalization The premises were collected from Indonesian Wikipedia and from other public Indonesian datasets: Indonesian PUD and GSD treebanks provided by the [Universal Dependencies 2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) and [IndoSum](https://github.com/kata-ai/indosum). The hypotheses were written by annotators. #### Who are the source language producers? The data was produced by humans. ### Annotations #### Annotation process We start by writing the hypothesis, given the premise and the target label. Then, we ask 2 different independent annotators to predict the label, given the premise and hypothesis. If all 3 (the original hypothesis + 2 independent annotators) agree with the label, then the annotation process ends for that sample. Otherwise, we incrementally ask additional annotators until 3 annotators agree with the label. 
If there's no majority consensus after 5 annotations, the sample is removed. #### Who are the annotators? Lay annotators were computer science students, and expert annotators were NLP scientists with 7+ years of research experience in NLP. All annotators are native speakers. Additionally, expert annotators were explicitly instructed to provide challenging examples by incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. Annotators were compensated based on an hourly rate. ### Personal and Sensitive Information There might be some personal information coming from Wikipedia and news, especially the information of famous/important people. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases INDONLI is created using premise sentences taken from Wikipedia and news. These data sources may contain some bias. ### Other Known Limitations No other known limitations ## Additional Information ### Dataset Curators This dataset is the result of the collaborative work of Indonesian researchers from the University of Indonesia, kata.ai, New York University, Fondazione Bruno Kessler, and the University of St Andrews. ### Licensing Information CC-BY-SA 4.0. Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. Please contact authors for any information on the dataset. 
### Citation Information ``` @inproceedings{mahendra-etal-2021-indonli, title = "{I}ndo{NLI}: A Natural Language Inference Dataset for {I}ndonesian", author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.821", pages = "10511--10527", } ``` ### Contributions Thanks to [@afaji](https://github.com/afaji) for adding this dataset.
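For convenience, the class mapping listed under Data Fields can be applied directly to raw instances. A minimal self-contained sketch in plain Python (no dataset-loading dependencies assumed), decoding the `train` example shown under Data Instances:

```python
# Class-label mapping from the "Data Fields" section:
# entailment (0), neutral (1), contradiction (2).
LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}

# The `train` instance shown under "Data Instances".
example = {
    "premise": (
        "Keindahan alam yang terdapat di Gunung Batu Jonggol ini dapat "
        "Anda manfaatkan sebagai objek fotografi yang cantik."
    ),
    "hypothesis": "Keindahan alam tidak dapat difoto.",
    "label": 2,
}

# Decode the integer label to its class name.
print(LABEL_NAMES[example["label"]])  # prints "contradiction"
```

The same mapping applies to all four splits (`train`, `valid`, `test_lay`, `test_expert`).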
indonli
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:id", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated", "crowdsourced"], "language_creators": ["expert-generated"], "language": ["id"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "paperswithcode_id": "indonli", "pretty_name": "IndoNLI", "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "config_name": "indonli", "splits": [{"name": "train", "num_bytes": 2265687, "num_examples": 10330}, {"name": "validation", "num_bytes": 465299, "num_examples": 2197}, {"name": "test_lay", "num_bytes": 473849, "num_examples": 2201}, {"name": "test_expert", "num_bytes": 911916, "num_examples": 2984}], "download_size": 6977877, "dataset_size": 4116751}}
2024-01-18T11:06:28+00:00
[]
[ "id" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-sa-4.0 #region-us
Dataset Card for IndoNLI ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: GitHub * Paper: EMNLP 2021 * Point of Contact: GitHub ### Dataset Summary IndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian. IndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. ### Supported Tasks and Leaderboards * Natural Language Inference for Indonesian ### Languages Indonesian Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Fields The data fields are: * 'premise': a 'string' feature * 'hypothesis': a 'string' feature * 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2). ### Data Splits The data is split across 'train', 'valid', 'test\_lay', and 'test\_expert'. 'test\_expert' is written by expert annotators, whereas the rest are written by lay annotators. A small subset of 'test\_expert' is used as a diagnostic tool. For more info, please visit URL Dataset Creation ---------------- ### Curation Rationale Indonesian NLP is considered under-resourced. 
Up until now, there is no publicly available human-annotated NLI dataset for Indonesian. ### Source Data #### Initial Data Collection and Normalization The premises were collected from Indonesian Wikipedia and from other public Indonesian datasets: Indonesian PUD and GSD treebanks provided by the Universal Dependencies 2.5 and IndoSum. The hypotheses were written by annotators. #### Who are the source language producers? The data was produced by humans. ### Annotations #### Annotation process We start by writing the hypothesis, given the premise and the target label. Then, we ask 2 different independent annotators to predict the label, given the premise and hypothesis. If all 3 (the original hypothesis + 2 independent annotators) agree with the label, then the annotation process ends for that sample. Otherwise, we incrementally ask additional annotators until 3 annotators agree with the label. If there's no majority consensus after 5 annotations, the sample is removed. #### Who are the annotators? Lay annotators were computer science students, and expert annotators were NLP scientists with 7+ years of research experience in NLP. All annotators are native speakers. Additionally, expert annotators were explicitly instructed to provide challenging examples by incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. Annotators were compensated based on an hourly rate. ### Personal and Sensitive Information There might be some personal information coming from Wikipedia and news, especially the information of famous/important people. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases INDONLI is created using premise sentences taken from Wikipedia and news. These data sources may contain some bias. 
### Other Known Limitations No other known limitations Additional Information ---------------------- ### Dataset Curators This dataset is the result of the collaborative work of Indonesian researchers from the University of Indonesia, URL, New York University, Fondazione Bruno Kessler, and the University of St Andrews. ### Licensing Information CC-BY-SA 4.0. Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. Please contact authors for any information on the dataset. ### Contributions Thanks to @afaji for adding this dataset.
[ "### Dataset Summary\n\n\nIndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian.\nIndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning.", "### Supported Tasks and Leaderboards\n\n\n* Natural Language Inference for Indonesian", "### Languages\n\n\nIndonesian\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are:\n\n\n* 'premise': a 'string' feature\n* 'hypothesis': a 'string' feature\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).", "### Data Splits\n\n\nThe data is split across 'train', 'valid', 'test\\_lay', and 'test\\_expert'.\n\n\n'test\\_expert' is written by expert annotators, whereas the rest are written by lay annotators.\n\n\n\nA small subset of 'test\\_expert' is used as a diasnostic tool. For more info, please visit URL\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nIndonesian NLP is considered under-resourced. Up until now, there is no publicly available human-annotated NLI dataset for Indonesian.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe premise were collected from Indonesian Wikipedia and from other public Indonesian dataset: Indonesian PUD and GSD treebanks provided by the Universal Dependencies 2.5 and IndoSum\n\n\nThe hypothesis were written by annotators.", "#### Who are the source language producers?\n\n\nThe data was produced by humans.", "### Annotations", "#### Annotation process\n\n\nWe start by writing the hypothesis, given the premise and the target label. 
Then, we ask 2 different independent annotators to predict the label, given the premise and hypothesis. If all 3 (the original hypothesis + 2 independent annotators) agree with the label, then the annotation process ends for that sample. Otherwise, we incrementally ask additional annotators until 3 annotators agree with the label. If there's no majority consensus after 5 annotations, the sample is removed.", "#### Who are the annotators?\n\n\nLay annotators were computer science students, and expert annotators were NLP scientists with 7+ years of research experience in NLP. All annotators are native speakers.\nAdditionally, expert annotators were explicitly instructed to provide challenging examples by incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. Annotators were compensated based on an hourly rate.", "### Personal and Sensitive Information\n\n\nThere might be some personal information coming from Wikipedia and news, especially the information of famous/important people.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nINDONLI is created using premise sentences taken from Wikipedia and news. These data sources may contain some bias.", "### Other Known Limitations\n\n\nNo other known limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset is the result of the collaborative work of Indonesian researchers from the University of Indonesia, URL, New York University, Fondazione Bruno Kessler, and the University of St Andrews.", "### Licensing Information\n\n\nCC-BY-SA 4.0.\n\n\nAttribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. 
You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\n\n\nShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.\n\n\nNo additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.\n\n\nPlease contact authors for any information on the dataset.", "### Contributions\n\n\nThanks to @afaji for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nIndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian.\nIndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning.", "### Supported Tasks and Leaderboards\n\n\n* Natural Language Inference for Indonesian", "### Languages\n\n\nIndonesian\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are:\n\n\n* 'premise': a 'string' feature\n* 'hypothesis': a 'string' feature\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).", "### Data Splits\n\n\nThe data is split across 'train', 'valid', 'test\\_lay', and 'test\\_expert'.\n\n\n'test\\_expert' is written by expert annotators, whereas the rest are written by lay annotators.\n\n\n\nA small subset of 'test\\_expert' is used as a diasnostic tool. For more info, please visit URL\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nIndonesian NLP is considered under-resourced. 
Up until now, there is no publicly available human-annotated NLI dataset for Indonesian.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe premises were collected from Indonesian Wikipedia and from other public Indonesian datasets: Indonesian PUD and GSD treebanks provided by the Universal Dependencies 2.5 and IndoSum.\n\n\nThe hypotheses were written by annotators.", "#### Who are the source language producers?\n\n\nThe data was produced by humans.", "### Annotations", "#### Annotation process\n\n\nWe start by writing the hypothesis, given the premise and the target label. Then, we ask 2 different independent annotators to predict the label, given the premise and hypothesis. If all 3 (the original hypothesis + 2 independent annotators) agree with the label, then the annotation process ends for that sample. Otherwise, we incrementally ask additional annotators until 3 annotators agree with the label. If there's no majority consensus after 5 annotations, the sample is removed.", "#### Who are the annotators?\n\n\nLay annotators were computer science students, and expert annotators were NLP scientists with 7+ years of research experience in NLP. All annotators are native speakers.\nAdditionally, expert annotators were explicitly instructed to provide challenging examples by incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. Annotators were compensated based on an hourly rate.", "### Personal and Sensitive Information\n\n\nThere might be some personal information coming from Wikipedia and news, especially the information of famous/important people.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nINDONLI is created using premise sentences taken from Wikipedia and news. 
These data sources may contain some bias.", "### Other Known Limitations\n\n\nNo other known limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset is the result of the collaborative work of Indonesian researchers from the University of Indonesia, URL, New York University, Fondazione Bruno Kessler, and the University of St Andrews.", "### Licensing Information\n\n\nCC-BY-SA 4.0.\n\n\nAttribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\n\n\nShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.\n\n\nNo additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.\n\n\nPlease contact authors for any information on the dataset.", "### Contributions\n\n\nThanks to @afaji for adding this dataset." ]
939bfb4e87cd0f4f717f4222ec19c55cdc302982
# Dataset Card for IndoNLU ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [IndoNLU Website](https://www.indobenchmark.com/) - **Repository:** [IndoNLU GitHub](https://github.com/indobenchmark/indonlu) - **Paper:** [IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding](https://www.aclweb.org/anthology/2020.aacl-main.85.pdf) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia (Indonesian language). There are 12 datasets in the IndoNLU benchmark for Indonesian natural language understanding. 1. `EmoT`: An emotion classification dataset collected from the social media platform Twitter. The dataset consists of around 4000 Indonesian colloquial language tweets, covering five different emotion labels: anger, fear, happy, love, and sadness 2. 
`SmSA`: This sentence-level sentiment analysis dataset is a collection of comments and reviews in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists to construct this dataset. There are three possible sentiments on the `SmSA` dataset: positive, negative, and neutral 3. `CASA`: An aspect-based sentiment analysis dataset consisting of around a thousand car reviews collected from multiple Indonesian online automobile platforms. The dataset covers six aspects of car quality. We define the task to be a multi-label classification task, where each label represents a sentiment for a single aspect with three possible values: positive, negative, and neutral. 4. `HoASA`: An aspect-based sentiment analysis dataset consisting of hotel reviews collected from the hotel aggregator platform, [AiryRooms](https://github.com/annisanurulazhar/absa-playground). The dataset covers ten different aspects of hotel quality. Similar to the `CASA` dataset, each review is labeled with a single sentiment label for each aspect. There are four possible sentiment classes for each sentiment label: positive, negative, neutral, and positive-negative. The positive-negative label is given to a review that contains multiple sentiments of the same aspect but for different objects (e.g., cleanliness of bed and toilet). 5. `WReTE`: The Wiki Revision Edits Textual Entailment dataset consists of 450 sentence pairs constructed from Wikipedia revision history. The dataset contains pairs of sentences and binary semantic relations between the pairs. The data are labeled as entailed when the meaning of the second sentence can be derived from the first one, and not entailed otherwise. 6. `POSP`: This Indonesian part-of-speech tagging (POS) dataset is collected from Indonesian news websites. The dataset consists of around 8000 sentences with 26 POS tags. 
The POS tag labels follow the [Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention](http://inacl.id/inacl/wp-content/uploads/2017/06/INACL-POS-Tagging-Convention-26-Mei.pdf). 7. `BaPOS`: This POS tagging dataset contains about 1000 sentences, collected from the [PAN Localization Project](http://www.panl10n.net/). In this dataset, each word is tagged by one of [23 POS tag classes](https://bahasa.cs.ui.ac.id/postag/downloads/Tagset.pdf). Data splitting used in this benchmark follows the experimental setting used by [Kurniawan and Aji (2018)](https://arxiv.org/abs/1809.03391). 8. `TermA`: This span-extraction dataset is collected from the hotel aggregator platform, [AiryRooms](https://github.com/jordhy97/final_project). The dataset consists of thousands of hotel reviews, which each contain a span label for aspect and sentiment words representing the opinion of the reviewer on the corresponding aspect. The labels use Inside-Outside-Beginning (IOB) tagging representation with two kinds of tags, aspect and sentiment. 9. `KEPS`: This keyphrase extraction dataset consists of text from Twitter discussing banking products and services and is written in the Indonesian language. A phrase containing important information is considered a keyphrase. Text may contain one or more keyphrases since important phrases can be located at different positions. The dataset follows the IOB chunking format, which represents the position of the keyphrase. 10. `NERGrit`: This NER dataset is taken from the [Grit-ID repository](https://github.com/grit-id/nergrit-corpus), and the labels are spans in IOB chunking representation. The dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and ORGANIZATION (name of organization). 11. `NERP`: This NER dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites. 
There are five labels available in this dataset, PER (name of person), LOC (name of location), IND (name of product or brand), EVT (name of the event), and FNB (name of food and beverage). Similar to the `TermA` dataset, the `NERP` dataset uses the IOB chunking format. 12. `FacQA`: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article. Each row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the corresponding short passage. There are six categories of questions: date, location, name, organization, person, and quantitative. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Indonesian ## Dataset Structure ### Data Instances 1. `EmoT` dataset A data point consists of `tweet` and `label`. An example from the train set looks as follows: ``` { 'tweet': 'Ini adalah hal yang paling membahagiakan saat biasku foto bersama ELF #ReturnOfTheLittlePrince #HappyHeeChulDay', 'label': 4, } ``` 2. `SmSA` dataset A data point consists of `text` and `label`. An example from the train set looks as follows: ``` { 'text': 'warung ini dimiliki oleh pengusaha pabrik tahu yang sudah puluhan tahun terkenal membuat tahu putih di bandung . tahu berkualitas , dipadu keahlian memasak , dipadu kretivitas , jadilah warung yang menyajikan menu utama berbahan tahu , ditambah menu umum lain seperti ayam . semuanya selera indonesia . harga cukup terjangkau . jangan lewatkan tahu bletoka nya , tidak kalah dengan yang asli dari tegal !', 'label': 0, } ``` 3. `CASA` dataset A data point consists of `sentence` and multi-label `fuel`, `machine`, `others`, `part`, `price`, and `service`. An example from the train set looks as follows: ``` { 'sentence': 'Saya memakai Honda Jazz GK5 tahun 2014 ( pertama meluncur ) .
Mobil nya bagus dan enak sesuai moto nya menyenangkan untuk dikendarai', 'fuel': 1, 'machine': 1, 'others': 2, 'part': 1, 'price': 1, 'service': 1 } ``` 4. `HoASA` dataset A data point consists of `sentence` and multi-label `ac`, `air_panas`, `bau`, `general`, `kebersihan`, `linen`, `service`, `sunrise_meal`, `tv`, and `wifi`. An example from the train set looks as follows: ``` { 'sentence': 'kebersihan kurang...', 'ac': 1, 'air_panas': 1, 'bau': 1, 'general': 1, 'kebersihan': 0, 'linen': 1, 'service': 1, 'sunrise_meal': 1, 'tv': 1, 'wifi': 1 } ``` 5. `WReTE` dataset A data point consists of `premise`, `hypothesis`, `category`, and `label`. An example from the train set looks as follows: ``` { 'premise': 'Pada awalnya bangsa Israel hanya terdiri dari satu kelompok keluarga di antara banyak kelompok keluarga yang hidup di tanah Kanan pada abad 18 SM .', 'hypothesis': 'Pada awalnya bangsa Yahudi hanya terdiri dari satu kelompok keluarga di antara banyak kelompok keluarga yang hidup di tanah Kanan pada abad 18 SM .', 'category': 'menolak perubahan teks terakhir oleh istimewa kontribusi pengguna 141 109 98 87 141 109 98 87 dan mengembalikan revisi 6958053 oleh johnthorne', 'label': 0, } ``` 6. `POSP` dataset A data point consists of `tokens` and `pos_tags`. An example from the train set looks as follows: ``` { 'tokens': ['kepala', 'dinas', 'tata', 'kota', 'manado', 'amos', 'kenda', 'menyatakan', 'tidak', 'tahu', '-', 'menahu', 'soal', 'pencabutan', 'baliho', '.', 'ia', 'enggan', 'berkomentar', 'banyak', 'karena', 'merasa', 'bukan', 'kewenangannya', '.'], 'pos_tags': [11, 6, 11, 11, 7, 7, 7, 9, 23, 4, 21, 9, 11, 11, 11, 21, 3, 2, 4, 1, 19, 9, 23, 11, 21] } ``` 7. `BaPOS` dataset A data point consists of `tokens` and `pos_tags`. An example from the train set looks as follows: ``` { 'tokens': ['Kera', 'untuk', 'amankan', 'pesta', 'olahraga'], 'pos_tags': [27, 8, 26, 27, 30] } ``` 8. `TermA` dataset A data point consists of `tokens` and `seq_label`.
An example from the train set looks as follows: ``` { 'tokens': ['kamar', 'saya', 'ada', 'kendala', 'di', 'ac', 'tidak', 'berfungsi', 'optimal', '.', 'dan', 'juga', 'wifi', 'koneksi', 'kurang', 'stabil', '.'], 'seq_label': [1, 1, 1, 1, 1, 4, 3, 0, 0, 1, 1, 1, 4, 2, 3, 0, 1] } ``` 9. `KEPS` dataset A data point consists of `tokens` and `seq_label`. An example from the train set looks as follows: ``` { 'tokens': ['Setelah', 'melalui', 'proses', 'telepon', 'yang', 'panjang', 'tutup', 'sudah', 'kartu', 'kredit', 'bca', 'Ribet'], 'seq_label': [0, 1, 1, 2, 0, 0, 1, 0, 1, 2, 2, 1] } ``` 10. `NERGrit` dataset A data point consists of `tokens` and `ner_tags`. An example from the train set looks as follows: ``` { 'tokens': ['Kontribusinya', 'terhadap', 'industri', 'musik', 'telah', 'mengumpulkan', 'banyak', 'prestasi', 'termasuk', 'lima', 'Grammy', 'Awards', ',', 'serta', 'dua', 'belas', 'nominasi', ';', 'dua', 'Guinness', 'World', 'Records', ';', 'dan', 'penjualannya', 'diperkirakan', 'sekitar', '64', 'juta', 'rekaman', '.'], 'ner_tags': [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]} ``` 11. `NERP` dataset A data point consists of `tokens` and `ner_tags`. An example from the train set looks as follows: ``` { 'tokens': ['kepala', 'dinas', 'tata', 'kota', 'manado', 'amos', 'kenda', 'menyatakan', 'tidak', 'tahu', '-', 'menahu', 'soal', 'pencabutan', 'baliho', '.', 'ia', 'enggan', 'berkomentar', 'banyak', 'karena', 'merasa', 'bukan', 'kewenangannya', '.'], 'ner_tags': [9, 9, 9, 9, 2, 7, 0, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9] } ``` 12. `FacQA` dataset A data point consists of `question`, `passage`, and `seq_label`. 
An example from the train set looks as follows: ``` { 'passage': ['Lewat', 'telepon', 'ke', 'kantor', 'berita', 'lokal', 'Current', 'News', 'Service', ',', 'Hezb-ul', 'Mujahedeen', ',', 'kelompok', 'militan', 'Kashmir', 'yang', 'terbesar', ',', 'menyatakan', 'bertanggung', 'jawab', 'atas', 'ledakan', 'di', 'Srinagar', '.'], 'question': ['Kelompok', 'apakah', 'yang', 'menyatakan', 'bertanggung', 'jawab', 'atas', 'ledakan', 'di', 'Srinagar', '?'], 'seq_label': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ``` ### Data Fields 1. `EmoT` dataset - `tweet`: a `string` feature. - `label`: an emotion label, with possible values including `sadness`, `anger`, `love`, `fear`, `happy`. 2. `SmSA` dataset - `text`: a `string` feature. - `label`: a sentiment label, with possible values including `positive`, `neutral`, `negative`. 3. `CASA` dataset - `sentence`: a `string` feature. - `fuel`: a sentiment label, with possible values including `negative`, `neutral`, `positive`. - `machine`: a sentiment label, with possible values including `negative`, `neutral`, `positive`. - `others`: a sentiment label, with possible values including `negative`, `neutral`, `positive`. - `part`: a sentiment label, with possible values including `negative`, `neutral`, `positive`. - `price`: a sentiment label, with possible values including `negative`, `neutral`, `positive`. - `service`: a sentiment label, with possible values including `negative`, `neutral`, `positive`. 4. `HoASA` dataset - `sentence`: a `string` feature. - `ac`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `air_panas`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `bau`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `general`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. 
- `kebersihan`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `linen`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `service`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `sunrise_meal`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `tv`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. - `wifi`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`. 5. `WReTE` dataset - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `category`: a `string` feature. - `label`: a classification label, with possible values including `NotEntail`, `Entail_or_Paraphrase`. 6. `POSP` dataset - `tokens`: a `list` of `string` features. - `pos_tags`: a `list` of POS tag labels, with possible values including `B-PPO`, `B-KUA`, `B-ADV`, `B-PRN`, `B-VBI`. The POS tag labels follow the [Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention](http://inacl.id/inacl/wp-content/uploads/2017/06/INACL-POS-Tagging-Convention-26-Mei.pdf). 7. `BaPOS` dataset - `tokens`: a `list` of `string` features. - `pos_tags`: a `list` of POS tag labels, with possible values including `B-PR`, `B-CD`, `I-PR`, `B-SYM`, `B-JJ`. The POS tag labels follow the [UI tagset](https://bahasa.cs.ui.ac.id/postag/downloads/Tagset.pdf). 8. `TermA` dataset - `tokens`: a `list` of `string` features. - `seq_label`: a `list` of classification labels, with possible values including `I-SENTIMENT`, `O`, `I-ASPECT`, `B-SENTIMENT`, `B-ASPECT`. 9. `KEPS` dataset - `tokens`: a `list` of `string` features. - `seq_label`: a `list` of classification labels, with possible values including `O`, `B`, `I`. The labels use Inside-Outside-Beginning (IOB) tagging. 10. `NERGrit` dataset - `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of NER tag labels, with possible values including `I-PERSON`, `B-ORGANISATION`, `I-ORGANISATION`, `B-PLACE`, `I-PLACE`. The labels use Inside-Outside-Beginning (IOB) tagging. 11. `NERP` dataset - `tokens`: a `list` of `string` features. - `ner_tags`: a `list` of NER tag labels, with possible values including `I-PPL`, `B-EVT`, `B-PLC`, `I-IND`, `B-IND`. 12. `FacQA` dataset - `question`: a `list` of `string` features. - `passage`: a `list` of `string` features. - `seq_label`: a `list` of classification labels, with possible values including `O`, `B`, `I`. ### Data Splits The data is split into training, validation, and test sets.

|    | dataset | Train | Valid | Test |
|----|---------|-------|-------|------|
| 1  | EmoT    | 3521  | 440   | 440  |
| 2  | SmSA    | 11000 | 1260  | 500  |
| 3  | CASA    | 810   | 90    | 180  |
| 4  | HoASA   | 2283  | 285   | 286  |
| 5  | WReTE   | 300   | 50    | 100  |
| 6  | POSP    | 6720  | 840   | 840  |
| 7  | BaPOS   | 8000  | 1000  | 1029 |
| 8  | TermA   | 3000  | 1000  | 1000 |
| 9  | KEPS    | 800   | 200   | 247  |
| 10 | NERGrit | 1672  | 209   | 209  |
| 11 | NERP    | 6720  | 840   | 840  |
| 12 | FacQA   | 2495  | 311   | 311  |

## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information The IndoNLU benchmark datasets are released under the MIT License.
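Several of the datasets above (`TermA`, `KEPS`, `NERGrit`, `NERP`, and `FacQA`) encode their labels with Inside-Outside-Beginning (IOB) tagging. The helper below is a minimal illustrative sketch, not part of the benchmark itself: it assumes the integer labels shown in the Data Instances section have already been decoded to `O`/`B`/`I` string tags, and groups tokens back into chunks.

```python
def iob_to_chunks(tokens, tags):
    """Group tokens into chunks according to O/B/I tags.

    Returns a list of token sublists, one per decoded chunk.
    """
    chunks, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":  # a new chunk begins, closing any open one
            if start is not None:
                chunks.append(tokens[start:i])
            start = i
        elif tag == "I":  # continue the current chunk
            if start is None:  # tolerate a stray I without a preceding B
                start = i
        else:  # "O" closes any open chunk
            if start is not None:
                chunks.append(tokens[start:i])
                start = None
    if start is not None:
        chunks.append(tokens[start:])
    return chunks


# A KEPS-style fragment: "kartu kredit bca" is one keyphrase, "Ribet" another.
tokens = ["kartu", "kredit", "bca", "Ribet"]
tags = ["B", "I", "I", "B"]
print(iob_to_chunks(tokens, tags))  # → [['kartu', 'kredit', 'bca'], ['Ribet']]
```

The same decoder applies unchanged to the aspect/sentiment tags of `TermA` or the entity tags of `NERGrit` and `NERP`, since only the tag prefixes (`B-`/`I-`/`O`) drive the chunking.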
### Citation Information IndoNLU citation ``` @inproceedings{wilie2020indonlu, title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding}, author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti}, booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing}, year={2020} } ``` `EmoT` dataset citation ``` @inproceedings{saputri2018emotion, title={Emotion Classification on Indonesian Twitter Dataset}, author={Mei Silviana Saputri, Rahmad Mahendra, and Mirna Adriani}, booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)}, pages={90--95}, year={2018}, organization={IEEE} } ``` `SmSA` dataset citation ``` @inproceedings{purwarianti2019improving, title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector}, author={Ayu Purwarianti and Ida Ayu Putu Ari Crisdayanti}, booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)}, pages={1--5}, year={2019}, organization={IEEE} } ``` `CASA` dataset citation ``` @inproceedings{ilmania2018aspect, title={Aspect Detection and Sentiment Classification Using Deep Neural Network for Indonesian Aspect-based Sentiment Analysis}, author={Arfinda Ilmania, Abdurrahman, Samuel Cahyawijaya, Ayu Purwarianti}, booktitle={Proceedings of the 2018 International Conference on Asian Language Processing(IALP)}, pages={62--67}, year={2018}, organization={IEEE} } ``` `HoASA` dataset citation ``` @inproceedings{azhar2019multi, title={Multi-label Aspect Categorization with Convolutional Neural Networks and Extreme Gradient Boosting}, author={A. N. Azhar, M. L. Khodra, and A. P. 
Sutiono}, booktitle={Proceedings of the 2019 International Conference on Electrical Engineering and Informatics (ICEEI)}, pages={35--40}, year={2019} } ``` `WReTE` dataset citation ``` @inproceedings{setya2018semi, title={Semi-supervised Textual Entailment on Indonesian Wikipedia Data}, author={Ken Nabila Setya and Rahmad Mahendra}, booktitle={Proceedings of the 2018 International Conference on Computational Linguistics and Intelligent Text Processing (CICLing)}, year={2018} } ``` `POSP` dataset citation ``` @inproceedings{hoesen2018investigating, title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger}, author={Devin Hoesen and Ayu Purwarianti}, booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)}, pages={35--38}, year={2018}, organization={IEEE} } ``` `BaPOS` dataset citation ``` @inproceedings{dinakaramani2014designing, title={Designing an Indonesian Part of Speech Tagset and Manually Tagged Indonesian Corpus}, author={Arawinda Dinakaramani, Fam Rashel, Andry Luthfi, and Ruli Manurung}, booktitle={Proceedings of the 2014 International Conference on Asian Language Processing (IALP)}, pages={66--69}, year={2014}, organization={IEEE} } @inproceedings{kurniawan2018toward, title={Toward a Standardized and More Accurate Indonesian Part-of-Speech Tagging}, author={Kemal Kurniawan and Alham Fikri Aji}, booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)}, pages={303--307}, year={2018}, organization={IEEE} } ``` `TermA` dataset citation ``` @article{winatmoko2019aspect, title={Aspect and Opinion Term Extraction for Hotel Reviews Using Transfer Learning and Auxiliary Labels}, author={Yosef Ardhito Winatmoko, Ali Akbar Septiandri, Arie Pratama Sutiono}, journal={arXiv preprint arXiv:1909.11879}, year={2019} } @article{fernando2019aspect, title={Aspect and Opinion Terms Extraction Using Double Embeddings and Attention Mechanism for Indonesian Hotel
Reviews}, author={Jordhy Fernando, Masayu Leylia Khodra, Ali Akbar Septiandri}, journal={arXiv preprint arXiv:1908.04899}, year={2019} } ``` `KEPS` dataset citation ``` @inproceedings{mahfuzh2019improving, title={Improving Joint Layer RNN based Keyphrase Extraction by Using Syntactical Features}, author={Miftahul Mahfuzh, Sidik Soleman, and Ayu Purwarianti}, booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)}, pages={1--6}, year={2019}, organization={IEEE} } ``` `NERGrit` dataset citation ``` @online{nergrit2019, title={NERGrit Corpus}, author={NERGrit Developers}, year={2019}, url={https://github.com/grit-id/nergrit-corpus} } ``` `NERP` dataset citation ``` @inproceedings{hoesen2018investigating, title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger}, author={Devin Hoesen and Ayu Purwarianti}, booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)}, pages={35--38}, year={2018}, organization={IEEE} } ``` `FacQA` dataset citation ``` @inproceedings{purwarianti2007machine, title={A Machine Learning Approach for Indonesian Question Answering System}, author={Ayu Purwarianti, Masatoshi Tsuchiya, and Seiichi Nakagawa}, booktitle={Proceedings of Artificial Intelligence and Applications }, pages={573--578}, year={2007} } ``` ### Contributions Thanks to [@yasirabd](https://github.com/yasirabd) for adding this dataset.
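The integer labels shown in the Data Instances section map back to class names via the value lists in the Data Fields section. The sketch below is illustrative only: the index order is an assumption taken from the order in which the values are listed above (when loading through the Hugging Face `datasets` library, the same conversion is provided by the feature's `ClassLabel.int2str` method).

```python
# Assumed index order, copied from the value lists in the Data Fields section.
EMOT_LABELS = ["sadness", "anger", "love", "fear", "happy"]
SMSA_LABELS = ["positive", "neutral", "negative"]


def decode(label_id, names):
    """Map an integer class label to its class name."""
    return names[label_id]


# The EmoT train example in this card carries 'label': 4,
# and the SmSA example carries 'label': 0.
print(decode(4, EMOT_LABELS))  # → happy
print(decode(0, SMSA_LABELS))  # → positive
```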
indonlp/indonlu
[ "task_categories:question-answering", "task_categories:text-classification", "task_categories:token-classification", "task_ids:closed-domain-qa", "task_ids:multi-class-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "task_ids:semantic-similarity-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:id", "license:mit", "keyphrase-extraction", "span-extraction", "aspect-based-sentiment-analysis", "arxiv:1809.03391", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["id"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-classification", "token-classification"], "task_ids": ["closed-domain-qa", "multi-class-classification", "named-entity-recognition", "part-of-speech", "semantic-similarity-classification", "sentiment-classification"], "paperswithcode_id": "indonlu-benchmark", "pretty_name": "IndoNLU", "configs": ["bapos", "casa", "emot", "facqa", "hoasa", "keps", "nergrit", "nerp", "posp", "smsa", "terma", "wrete"], "tags": ["keyphrase-extraction", "span-extraction", "aspect-based-sentiment-analysis"], "dataset_info": [{"config_name": "emot", "features": [{"name": "tweet", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "sadness", "1": "anger", "2": "love", "3": "fear", "4": "happy"}}}}], "splits": [{"name": "train", "num_bytes": 686418, "num_examples": 3521}, {"name": "validation", "num_bytes": 84082, "num_examples": 440}, {"name": "test", "num_bytes": 84856, "num_examples": 440}], "download_size": 840917, "dataset_size": 855356}, {"config_name": "smsa", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "positive", "1": "neutral", "2": "negative"}}}}], "splits": [{"name": "train", "num_bytes": 2209874, "num_examples": 11000}, {"name": "validation", "num_bytes": 249629, "num_examples": 1260}, {"name": "test", "num_bytes": 77041, "num_examples": 500}], "download_size": 2509229, "dataset_size": 2536544}, {"config_name": "casa", "features": [{"name": "sentence", "dtype": "string"}, {"name": "fuel", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "machine", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, 
{"name": "others", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "part", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "price", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "service", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 110415, "num_examples": 810}, {"name": "validation", "num_bytes": 11993, "num_examples": 90}, {"name": "test", "num_bytes": 23553, "num_examples": 180}], "download_size": 144903, "dataset_size": 145961}, {"config_name": "hoasa", "features": [{"name": "sentence", "dtype": "string"}, {"name": "ac", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "air_panas", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "bau", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "general", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "kebersihan", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "linen", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "service", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "sunrise_meal", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "tv", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}, {"name": "wifi", "dtype": {"class_label": {"names": {"0": "neg", "1": "neut", "2": "pos", "3": "neg_pos"}}}}], "splits": [{"name": "train", "num_bytes": 458177, "num_examples": 2283}, {"name": "validation", "num_bytes": 58248, 
"num_examples": 285}, {"name": "test", "num_bytes": 56399, "num_examples": 286}], "download_size": 477314, "dataset_size": 572824}, {"config_name": "wrete", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NotEntail", "1": "Entail_or_Paraphrase"}}}}], "splits": [{"name": "train", "num_bytes": 99999, "num_examples": 300}, {"name": "validation", "num_bytes": 18049, "num_examples": 50}, {"name": "test", "num_bytes": 32617, "num_examples": 100}], "download_size": 151018, "dataset_size": 150665}, {"config_name": "posp", "features": [{"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "B-PPO", "1": "B-KUA", "2": "B-ADV", "3": "B-PRN", "4": "B-VBI", "5": "B-PAR", "6": "B-VBP", "7": "B-NNP", "8": "B-UNS", "9": "B-VBT", "10": "B-VBL", "11": "B-NNO", "12": "B-ADJ", "13": "B-PRR", "14": "B-PRK", "15": "B-CCN", "16": "B-$$$", "17": "B-ADK", "18": "B-ART", "19": "B-CSN", "20": "B-NUM", "21": "B-SYM", "22": "B-INT", "23": "B-NEG", "24": "B-PRI", "25": "B-VBE"}}}}], "splits": [{"name": "train", "num_bytes": 2751348, "num_examples": 6720}, {"name": "validation", "num_bytes": 343924, "num_examples": 840}, {"name": "test", "num_bytes": 350720, "num_examples": 840}], "download_size": 2407206, "dataset_size": 3445992}, {"config_name": "bapos", "features": [{"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "B-PR", "1": "B-CD", "2": "I-PR", "3": "B-SYM", "4": "B-JJ", "5": "B-DT", "6": "I-UH", "7": "I-NND", "8": "B-SC", "9": "I-WH", "10": "I-IN", "11": "I-NNP", "12": "I-VB", "13": "B-IN", "14": "B-NND", "15": "I-CD", "16": "I-JJ", "17": "I-X", "18": "B-OD", "19": "B-RP", "20": "B-RB", "21": "B-NNP", "22": "I-RB", "23": "I-Z", "24": "B-CC", "25": "B-NEG", "26": "B-VB", "27": "B-NN", "28": "B-MD", "29": "B-UH", "30": "I-NN", "31": 
"B-PRP", "32": "I-SC", "33": "B-Z", "34": "I-PRP", "35": "I-OD", "36": "I-SYM", "37": "B-WH", "38": "B-FW", "39": "I-CC", "40": "B-X"}}}}], "splits": [{"name": "train", "num_bytes": 3772459, "num_examples": 8000}, {"name": "validation", "num_bytes": 460058, "num_examples": 1000}, {"name": "test", "num_bytes": 474368, "num_examples": 1029}], "download_size": 3084021, "dataset_size": 4706885}, {"config_name": "terma", "features": [{"name": "tokens", "sequence": "string"}, {"name": "seq_label", "sequence": {"class_label": {"names": {"0": "I-SENTIMENT", "1": "O", "2": "I-ASPECT", "3": "B-SENTIMENT", "4": "B-ASPECT"}}}}], "splits": [{"name": "train", "num_bytes": 817983, "num_examples": 3000}, {"name": "validation", "num_bytes": 276335, "num_examples": 1000}, {"name": "test", "num_bytes": 265922, "num_examples": 1000}], "download_size": 816822, "dataset_size": 1360240}, {"config_name": "keps", "features": [{"name": "tokens", "sequence": "string"}, {"name": "seq_label", "sequence": {"class_label": {"names": {"0": "O", "1": "B", "2": "I"}}}}], "splits": [{"name": "train", "num_bytes": 173961, "num_examples": 800}, {"name": "validation", "num_bytes": 42961, "num_examples": 200}, {"name": "test", "num_bytes": 66762, "num_examples": 247}], "download_size": 134042, "dataset_size": 283684}, {"config_name": "nergrit", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "I-PERSON", "1": "B-ORGANISATION", "2": "I-ORGANISATION", "3": "B-PLACE", "4": "I-PLACE", "5": "O", "6": "B-PERSON"}}}}], "splits": [{"name": "train", "num_bytes": 960710, "num_examples": 1672}, {"name": "validation", "num_bytes": 119567, "num_examples": 209}, {"name": "test", "num_bytes": 117274, "num_examples": 209}], "download_size": 641265, "dataset_size": 1197551}, {"config_name": "nerp", "features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "I-PPL", "1": "B-EVT", "2": 
"B-PLC", "3": "I-IND", "4": "B-IND", "5": "B-FNB", "6": "I-EVT", "7": "B-PPL", "8": "I-PLC", "9": "O", "10": "I-FNB"}}}}], "splits": [{"name": "train", "num_bytes": 2751348, "num_examples": 6720}, {"name": "validation", "num_bytes": 343924, "num_examples": 840}, {"name": "test", "num_bytes": 350720, "num_examples": 840}], "download_size": 1725986, "dataset_size": 3445992}, {"config_name": "facqa", "features": [{"name": "question", "sequence": "string"}, {"name": "passage", "sequence": "string"}, {"name": "seq_label", "sequence": {"class_label": {"names": {"0": "O", "1": "B", "2": "I"}}}}], "splits": [{"name": "train", "num_bytes": 2454368, "num_examples": 2495}, {"name": "validation", "num_bytes": 306249, "num_examples": 311}, {"name": "test", "num_bytes": 306831, "num_examples": 311}], "download_size": 2591968, "dataset_size": 3067448}]}
2023-02-03T05:49:02+00:00
[ "1809.03391" ]
[ "id" ]
TAGS #task_categories-question-answering #task_categories-text-classification #task_categories-token-classification #task_ids-closed-domain-qa #task_ids-multi-class-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #task_ids-semantic-similarity-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-Indonesian #license-mit #keyphrase-extraction #span-extraction #aspect-based-sentiment-analysis #arxiv-1809.03391 #region-us
Dataset Card for IndoNLU ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: IndoNLU Website * Repository: IndoNLU GitHub * Paper: IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding * Leaderboard: * Point of Contact: ### Dataset Summary The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia (Indonesian language). There are 12 datasets in IndoNLU benchmark for Indonesian natural language understanding. 1. 'EmoT': An emotion classification dataset collected from the social media platform Twitter. The dataset consists of around 4000 Indonesian colloquial language tweets, covering five different emotion labels: anger, fear, happy, love, and sadness 2. 'SmSA': This sentence-level sentiment analysis dataset is a collection of comments and reviews in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists to construct this dataset. There are three possible sentiments on the 'SmSA' dataset: positive, negative, and neutral 3. 'CASA': An aspect-based sentiment analysis dataset consisting of around a thousand car reviews collected from multiple Indonesian online automobile platforms. The dataset covers six aspects of car quality. 
We define the task to be a multi-label classification task, where each label represents a sentiment for a single aspect with three possible values: positive, negative, and neutral. 4. 'HoASA': An aspect-based sentiment analysis dataset consisting of hotel reviews collected from the hotel aggregator platform, AiryRooms. The dataset covers ten different aspects of hotel quality. Similar to the 'CASA' dataset, each review is labeled with a single sentiment label for each aspect. There are four possible sentiment classes for each sentiment label: positive, negative, neutral, and positive-negative. The positivenegative label is given to a review that contains multiple sentiments of the same aspect but for different objects (e.g., cleanliness of bed and toilet). 5. 'WReTE': The Wiki Revision Edits Textual Entailment dataset consists of 450 sentence pairs constructed from Wikipedia revision history. The dataset contains pairs of sentences and binary semantic relations between the pairs. The data are labeled as entailed when the meaning of the second sentence can be derived from the first one, and not entailed otherwise. 6. 'POSP': This Indonesian part-of-speech tagging (POS) dataset is collected from Indonesian news websites. The dataset consists of around 8000 sentences with 26 POS tags. The POS tag labels follow the Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention. 7. 'BaPOS': This POS tagging dataset contains about 1000 sentences, collected from the PAN Localization Project. In this dataset, each word is tagged by one of 23 POS tag classes. Data splitting used in this benchmark follows the experimental setting used by Kurniawan and Aji (2018). 8. 'TermA': This span-extraction dataset is collected from the hotel aggregator platform, AiryRooms. The dataset consists of thousands of hotel reviews, which each contain a span label for aspect and sentiment words representing the opinion of the reviewer on the corresponding aspect. 
The labels use Inside-Outside-Beginning (IOB) tagging representation with two kinds of tags, aspect and sentiment.
9. 'KEPS': This keyphrase extraction dataset consists of text from Twitter discussing banking products and services and is written in the Indonesian language. A phrase containing important information is considered a keyphrase. Text may contain one or more keyphrases since important phrases can be located at different positions. The dataset follows the IOB chunking format, which represents the position of the keyphrase.
10. 'NERGrit': This NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation. The dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and ORGANIZATION (name of organization).
11. 'NERP': This NER dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites. There are five labels available in this dataset, PER (name of person), LOC (name of location), IND (name of product or brand), EVT (name of the event), and FNB (name of food and beverage). Similar to the 'TermA' dataset, the 'NERP' dataset uses the IOB chunking format.
12. 'FacQA': The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article. Each row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the corresponding short passage. There are six categories of questions: date, location, name, organization, person, and quantitative.

### Supported Tasks and Leaderboards

### Languages

Indonesian

Dataset Structure
-----------------

### Data Instances

1. 'EmoT' dataset

A data point consists of 'tweet' and 'label'. An example from the train set looks as follows:

2. 'SmSA' dataset

A data point consists of 'text' and 'label'. An example from the train set looks as follows:

3.
'CASA' dataset

A data point consists of 'sentence' and multi-label 'fuel', 'machine', 'others', 'part', 'price', and 'service'. An example from the train set looks as follows:

4. 'HoASA' dataset

A data point consists of 'sentence' and multi-label 'ac', 'air\_panas', 'bau', 'general', 'kebersihan', 'linen', 'service', 'sunrise\_meal', 'tv', and 'wifi'. An example from the train set looks as follows:

5. 'WReTE' dataset

A data point consists of 'premise', 'hypothesis', 'category', and 'label'. An example from the train set looks as follows:

6. 'POSP' dataset

A data point consists of 'tokens' and 'pos\_tags'. An example from the train set looks as follows:

7. 'BaPOS' dataset

A data point consists of 'tokens' and 'pos\_tags'. An example from the train set looks as follows:

8. 'TermA' dataset

A data point consists of 'tokens' and 'seq\_label'. An example from the train set looks as follows:

9. 'KEPS' dataset

A data point consists of 'tokens' and 'seq\_label'. An example from the train set looks as follows:

10. 'NERGrit' dataset

A data point consists of 'tokens' and 'ner\_tags'. An example from the train set looks as follows:

11. 'NERP' dataset

A data point consists of 'tokens' and 'ner\_tags'. An example from the train set looks as follows:

12. 'FacQA' dataset

A data point consists of 'question', 'passage', and 'seq\_label'. An example from the train set looks as follows:

### Data Fields

1. 'EmoT' dataset

* 'tweet': a 'string' feature.
* 'label': an emotion label, with possible values including 'sadness', 'anger', 'love', 'fear', 'happy'.

2. 'SmSA' dataset

* 'text': a 'string' feature.
* 'label': a sentiment label, with possible values including 'positive', 'neutral', 'negative'.

3. 'CASA' dataset

* 'sentence': a 'string' feature.
* 'fuel': a sentiment label, with possible values including 'negative', 'neutral', 'positive'.
* 'machine': a sentiment label, with possible values including 'negative', 'neutral', 'positive'.
* 'others': a sentiment label, with possible values including 'negative', 'neutral', 'positive'.
* 'part': a sentiment label, with possible values including 'negative', 'neutral', 'positive'.
* 'price': a sentiment label, with possible values including 'negative', 'neutral', 'positive'.
* 'service': a sentiment label, with possible values including 'negative', 'neutral', 'positive'.

4. 'HoASA' dataset

* 'sentence': a 'string' feature.
* 'ac': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'air\_panas': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'bau': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'general': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'kebersihan': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'linen': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'service': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'sunrise\_meal': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'tv': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.
* 'wifi': a sentiment label, with possible values including 'neg', 'neut', 'pos', 'neg\_pos'.

5. 'WReTE' dataset

* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'category': a 'string' feature.
* 'label': a classification label, with possible values including 'NotEntail', 'Entail\_or\_Paraphrase'.

6. 'POSP' dataset

* 'tokens': a 'list' of 'string' features.
* 'pos\_tags': a 'list' of POS tag labels, with possible values including 'B-PPO', 'B-KUA', 'B-ADV', 'B-PRN', 'B-VBI'.

The POS tag labels follow the Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention.

7. 'BaPOS' dataset

* 'tokens': a 'list' of 'string' features.
* 'pos\_tags': a 'list' of POS tag labels, with possible values including 'B-PR', 'B-CD', 'I-PR', 'B-SYM', 'B-JJ'.

The POS tag labels follow the Tagset UI convention.

8. 'TermA' dataset

* 'tokens': a 'list' of 'string' features.
* 'seq\_label': a 'list' of classification labels, with possible values including 'I-SENTIMENT', 'O', 'I-ASPECT', 'B-SENTIMENT', 'B-ASPECT'.

9. 'KEPS' dataset

* 'tokens': a 'list' of 'string' features.
* 'seq\_label': a 'list' of classification labels, with possible values including 'O', 'B', 'I'.

The labels use Inside-Outside-Beginning (IOB) tagging.

10. 'NERGrit' dataset

* 'tokens': a 'list' of 'string' features.
* 'ner\_tags': a 'list' of NER tag labels, with possible values including 'I-PERSON', 'B-ORGANISATION', 'I-ORGANISATION', 'B-PLACE', 'I-PLACE'.

The labels use Inside-Outside-Beginning (IOB) tagging.

11. 'NERP' dataset

* 'tokens': a 'list' of 'string' features.
* 'ner\_tags': a 'list' of NER tag labels, with possible values including 'I-PPL', 'B-EVT', 'B-PLC', 'I-IND', 'B-IND'.

12. 'FacQA' dataset

* 'question': a 'list' of 'string' features.
* 'passage': a 'list' of 'string' features.
* 'seq\_label': a 'list' of classification labels, with possible values including 'O', 'B', 'I'.

### Data Splits

The data is split into a training, validation and test set.

Dataset Creation
----------------

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

Additional Information
----------------------

### Dataset Curators

### Licensing Information

The IndoNLU benchmark datasets are released under the MIT License.
### Citation Information

IndoNLU citation

'EmoT' dataset citation

'SmSA' dataset citation

'CASA' dataset citation

'HoASA' dataset citation

'WReTE' dataset citation

'POSP' dataset citation

'BaPOS' dataset citation

'TermA' dataset citation

'KEPS' dataset citation

'NERGrit' dataset citation

'NERP' dataset citation

'FacQA' dataset citation

### Contributions

Thanks to @yasirabd for adding this dataset.
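Several of the sequence-labeling datasets in the card above (TermA, KEPS, NERGrit, NERP) annotate tokens in the Inside-Outside-Beginning (IOB) chunking format. As an illustrative aside (not part of the original card), a minimal sketch of decoding an IOB tag sequence into labeled spans — the demo tokens and tags below are invented for illustration, not taken from the datasets:

```python
# Illustrative only: the example sequence is made up; the ASPECT/SENTIMENT
# label names follow the TermA tag list described in the card.

def iob_to_spans(tags):
    """Decode an IOB tag sequence into (label, start, end) spans, end exclusive.

    Handles both plain 'B'/'I'/'O' tags (as in KEPS) and typed tags such as
    'B-ASPECT'/'I-PERSON' (as in TermA, NERGrit, NERP).
    """
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag == "B" or tag.startswith("B-"):      # beginning of a new chunk
            if start is not None:
                spans.append((label, start, i))     # close the previous chunk
            start, label = i, (tag[2:] if "-" in tag else "")
        elif tag == "I" or tag.startswith("I-"):    # inside the current chunk
            if start is None:                       # tolerate a stray I- tag
                start, label = i, (tag[2:] if "-" in tag else "")
        else:                                       # "O": outside any chunk
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:                           # chunk running to the end
        spans.append((label, start, len(tags)))
    return spans


tokens = ["kamar", "sangat", "bersih", "dan", "nyaman"]
tags = ["B-ASPECT", "O", "B-SENTIMENT", "O", "B-SENTIMENT"]
for label, start, end in iob_to_spans(tags):
    print(label, tokens[start:end])
# ASPECT ['kamar']
# SENTIMENT ['bersih']
# SENTIMENT ['nyaman']
```

The `end`-exclusive convention makes `tokens[start:end]` slice out each chunk directly.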
11b9160bec51fb01e2f1999f0de1c399aa81567a
# Dataset Card for InquisitiveQg

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
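Although the card itself is still a template, the dataset's declared features already fix the record schema (`id`, `article_id`, `article`, `sentence_id`, `sentence`, `span`, `question`, `span_start_position`, `span_end_position`). A minimal sketch of that schema as a plain Python dataclass — the example values below are invented purely to show the shape of a row, and the unit of the span offsets (tokens vs. characters) is not documented on this card:

```python
from dataclasses import dataclass

@dataclass
class InquisitiveExample:
    """One row of the 'plain_text' config, mirroring the declared features."""
    id: int
    article_id: int
    article: str
    sentence_id: int
    sentence: str
    span: str       # the text span the question was asked about
    question: str   # the inquisitive question an annotator wrote
    span_start_position: int  # offset unit (token vs. character) not documented
    span_end_position: int

# Invented toy row, only to illustrate the record layout:
row = InquisitiveExample(
    id=1, article_id=1, article="...", sentence_id=1,
    sentence="A rocket was launched on Monday.",
    span="rocket", question="What kind of rocket was it?",
    span_start_position=1, span_end_position=2,
)
print(row.question)  # -> What kind of rocket was it?
```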
inquisitive_qg
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "question-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "inquisitive", "pretty_name": "InquisitiveQg", "tags": ["question-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "article_id", "dtype": "int32"}, {"name": "article", "dtype": "string"}, {"name": "sentence_id", "dtype": "int32"}, {"name": "sentence", "dtype": "string"}, {"name": "span", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "span_start_position", "dtype": "int32"}, {"name": "span_end_position", "dtype": "int32"}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 66099232, "num_examples": 15931}, {"name": "validation", "num_bytes": 8904329, "num_examples": 1991}, {"name": "test", "num_bytes": 7167203, "num_examples": 1894}], "download_size": 7085941, "dataset_size": 82170764}}
2024-01-18T11:06:30+00:00
[]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #question-generation #region-us
# Dataset Card for InquisitiveQg ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]() - Repository: [If the dataset is hosted on github or has a github homepage, add URL here]() - Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]() - Leaderboard: [If the dataset supports an active leaderboard, add link here]() - Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "# Dataset Card for InquisitiveQg", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #question-generation #region-us \n", "# Dataset Card for InquisitiveQg", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known 
Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
3144f131887dbf2e9a90ad67943f7a10a5fdc4f3
# Dataset Card for Interpress Turkish News Category Dataset (270K)

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Interpress](https://www.interpress.com/)
- **Point of Contact:** [Yavuz Komecoglu](mailto:[email protected])

### Dataset Summary

Turkish News Category Dataset (270K) is a Turkish news dataset consisting of 273,601 news articles in 17 categories, compiled from printed media and news websites between 2010 and 2017 by Interpress (https://www.interpress.com/), a media monitoring company.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Turkish.

## Dataset Structure

### Data Instances

A text classification dataset with 17 different news categories.
```
{'id': 301365715,
 'title': 'BİR SİHİRBAZ',
 'content': 'NİANG, TAKIM ARKADAŞI FERNANDES E ÖVGÜLER YAĞDIRDI FUTBOL OYNARKEN EĞLENİYORUM YÜZDE 701E OYNUYORUM LİDERLE ARAMIZDA SADECE 5 PUAN VAR, ŞAMPİYONLUK ŞANSIMIZ YÜKSEK 4 j Fernandes le birlikte oynamayı seviyorum, adam adeta sihirbaz gibi J Frank Ribery, futbol hayatımda oynamaktan en çok zevk aldığım isim ı Abartılacak bir ] sonuç almadık ı .BAHÇE derbisinde Kartal ın ilk golünü atan, üçüncü golün de asistini yapan Mamadou Niang, TRT Spor da Futbol Keyfi programında özel açıklamalar yaptı. Senegalli forvet şampiyonluk şanslarının yüksek olduğunu dile getirirken, Portekizli yıldız Fernandes için Onunla oynamayı seviyorum, adeta bir sihirbaz gibi ifadesini kullandı. Frank Ribery nin futbol hayatında oynamaktan en çok zevk aldığım isim olduğunu ifade eden Niang, Moussa Sow ve Burak Yılmaz ın da Süper Lig deki en iyi forvetler olduğunu, ikisinin de tarzını beğendiğini söyledi. Senegalli yıldız şampiyonluk şansları için, Çok yüksek. Çünkü liderle aramızda 5 puan fark var ve bunu kapatacak güçteyiz yorumunu yaptı. NİANG şöyle devam etti: t.f En zorlandığım stoper İbrahim Toraman dır. Neyse ki şu an onunla takım arkadaşıyım. Almeida sakatlıktan döndükten sonra nasıl bir diziliş olur bilemiyorum. Onunla beraber oynayabiliriz, Holosko ile de oynayabiliriz. Türkiye, .. O NİANG, şu anda gerçek performansının yüzde 70 i ile oynadığını söyledi. İyi bir seviyede olmadığını kabul ettiğini belirten Senegalli yıldız, Sahada savaşan oyuncularla birlikte olmayı seviyorum. Bizim takımda Olcay ve Oğuzhan gibi bu yapıda isimler var. Tabii ki şansın da önemi var diye konuştu. zor bir lig. Eskiden arkadaşlarıma Türkiye Ligi nin iyi olduğunu söylediğimde inanmazlardı. Şimdi Didier Drogba, VVesley Sneijder, Sovvgibi oyuncuların burada olması ilgiyi artırdı. Futbol oynarken eğleniyorum ve şu an emekli olmayı düşünmüyorum. Açılış törenine, yönetici Metin Albayrak ile birlikte futbolcular Necip Uysal, McGregor ve Mehmet Akyüz de katıldı. BEŞİKTAŞLI Necip Uysal, +f başkan Fikret Orman gibi F.Bahçe galibiyetinin abartılmaması gerektiğini söyledi. Pazar günü İnönü Stadı nda güzel bir zafer elde ettiklerini vurgulayan genç yıldız, 10 karşılaşmaya daha çıkacağız. Her maçımız final, ayaklarımızın yere sağlam basması gerekiyor. Maçlara tek tek bakacağız ve hepsini kazanmak için oynayacağız yorumunu yaptı. Trabzon un her zaman zor deplasman olduğunu ifade eden Necip, Kolay olmayacağını biliyoruz ama şampiyonluk şansımızın sürmesi için kesinlikle üç puanla dönmeye mecburuz dedi. sflPa',
 'category': 12,
 'categorycode': 12,
 'publishdatetime': '2013-03-07T00:00:00Z'}
```

### Data Fields

- `id`
- `title`
- `content`
- `category`
- `categorycode`
- `publishdatetime`

### Data Splits

The data is split into training and test sets, organized as follows:

|            |   train |   test |
|------------|--------:|-------:|
| data split | 218,880 | 54,721 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

Over 270,000 news articles were downloaded from printed media and news websites between 2010 and 2017 by Interpress (https://www.interpress.com/), a media monitoring company. This data collection, compiled from print media and internet news, is presented in its raw form; it is therefore best used with careful pre-processing to handle various OCR errors and typos.

#### Who are the source language producers?

Turkish printed news sources and online news sites.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

https://www.interpress.com/

### Contributions

Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset.
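The `category` feature is stored as an integer. As a convenience (this is not part of the dataset's official tooling, just a sketch), the id-to-name mapping can be copied straight from the `class_label` names declared in the dataset metadata and used to decode rows:

```python
# Id-to-name mapping for the `category` feature, copied from the
# class_label definition in the dataset metadata (17 classes).
ID2LABEL = [
    "aktuel", "bilisim", "egitim", "ekonomi", "gida", "iletisim",
    "kultursanat", "magazin", "saglik", "savunma", "seyahat",
    "siyasi", "spor", "teknoloji", "ticaret", "turizm", "yasam",
]

def decode_category(code: int) -> str:
    """Map an integer `category` value back to its Turkish label."""
    return ID2LABEL[code]

# The example instance on this card has category 12, i.e. a sports article:
print(decode_category(12))  # -> spor
```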
interpress_news_category_tr
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:tr", "license:unknown", "news-category-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["tr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Interpress Turkish News Category Dataset (270K)", "tags": ["news-category-classification"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": {"class_label": {"names": {"0": "aktuel", "1": "bilisim", "2": "egitim", "3": "ekonomi", "4": "gida", "5": "iletisim", "6": "kultursanat", "7": "magazin", "8": "saglik", "9": "savunma", "10": "seyahat", "11": "siyasi", "12": "spor", "13": "teknoloji", "14": "ticaret", "15": "turizm", "16": "yasam"}}}}, {"name": "categorycode", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "10", "11": "11", "12": "12", "13": "13", "14": "14", "15": "15", "16": "16"}}}}, {"name": "publishdatetime", "dtype": "string"}], "config_name": "270k", "splits": [{"name": "train", "num_bytes": 736098052, "num_examples": 218880}, {"name": "test", "num_bytes": 184683629, "num_examples": 54721}], "download_size": 354802486, "dataset_size": 920781681}}
2024-01-18T11:06:32+00:00
[]
[ "tr" ]
TAGS #task_categories-text-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Turkish #license-unknown #news-category-classification #region-us
Dataset Card for Interpress Turkish News Category Dataset (270K) ================================================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Interpress * Point of Contact: Yavuz Komecoglu ### Dataset Summary Turkish News Category Dataset (270K) is a Turkish news data set consisting of 273601 news in 17 categories, compiled from printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. ### Supported Tasks and Leaderboards ### Languages The dataset is based on Turkish. Dataset Structure ----------------- ### Data Instances A text classification dataset with 17 different news category. ### Data Fields * 'id' * 'title' * 'content' * 'category' * 'categorycode' * 'publishdatetime' ### Data Splits The data is split into a training and testing. The split is organized as the following Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Downloaded over 270,000 news from the printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. This data collection compiled from print media and internet news is presented in its raw form. For this reason, it is appropriate to use it with careful pre-processing steps regarding various OCR errors and typos. #### Who are the source language producers? Turkish printed news sources and online news sites. 
### Annotations The dataset does not contain any additional annotations. #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information URL ### Contributions Thanks to @basakbuluz for adding this dataset.
[ "### Dataset Summary\n\n\nTurkish News Category Dataset (270K) is a Turkish news data set consisting of 273601 news in 17 categories, compiled from printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is based on Turkish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA text classification dataset with 17 different news category.", "### Data Fields\n\n\n* 'id'\n* 'title'\n* 'content'\n* 'category'\n* 'categorycode'\n* 'publishdatetime'", "### Data Splits\n\n\nThe data is split into a training and testing. The split is organized as the following\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDownloaded over 270,000 news from the printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. This data collection compiled from print media and internet news is presented in its raw form. For this reason, it is appropriate to use it with careful pre-processing steps regarding various OCR errors and typos.", "#### Who are the source language producers?\n\n\nTurkish printed news sources and online news sites.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nURL", "### Contributions\n\n\nThanks to @basakbuluz for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Turkish #license-unknown #news-category-classification #region-us \n", "### Dataset Summary\n\n\nTurkish News Category Dataset (270K) is a Turkish news data set consisting of 273601 news in 17 categories, compiled from printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is based on Turkish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA text classification dataset with 17 different news category.", "### Data Fields\n\n\n* 'id'\n* 'title'\n* 'content'\n* 'category'\n* 'categorycode'\n* 'publishdatetime'", "### Data Splits\n\n\nThe data is split into a training and testing. The split is organized as the following\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDownloaded over 270,000 news from the printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. This data collection compiled from print media and internet news is presented in its raw form. 
For this reason, it is appropriate to use it with careful pre-processing steps regarding various OCR errors and typos.", "#### Who are the source language producers?\n\n\nTurkish printed news sources and online news sites.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nURL", "### Contributions\n\n\nThanks to @basakbuluz for adding this dataset." ]
cd960a586c2591aee73d8e36a206de67328fdf0f
# Dataset Card for Interpress Turkish News Category Dataset (270K - Lite Version)

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Interpress](https://www.interpress.com/)
- **Point of Contact:** [Yavuz Komecoglu](mailto:[email protected])

### Dataset Summary

Turkish News Category Dataset (270K - Lite Version) is a Turkish news dataset consisting of 273,601 news articles in 10 categories ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem"), compiled from printed media and news websites between 2010 and 2017 by Interpress (https://www.interpress.com/), a media monitoring company. **It has been rearranged to be easily separable, with fewer classes.**

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Turkish.

## Dataset Structure

### Data Instances

A text classification dataset with 10 different news categories.
Here is an example from the dataset: ``` { 'category': 0, 'content': 'Tarihten Sınıfta Kaldık Bugün tarihe damgasını vuran Osmanlı İmparatorluğu nun kuruluş yıldönümü. Adına dizilerin çekildiği tarihimizi ne kadar biliyoruz? Gerekçeler faklı; ama sonuç aynı çıktı. Tarihten sınıfta kaldık. Sayfa 5r 1 Bugün tarihe damgasını vuran Osmanlı İmparatorluğumun kuruluş yıldönümü. Adına dizilerin çekildiği tarihimizi ne kadar biliyoruz? Gerekçeler faklı; ama sonuç aynı çıktı. Tarihten sınıfta kaldık 7 Ocak 1299... Kıtalara dağılan ücüyle, ülkeler arasında gördüğü aygıyla tarihe damgasını vuran anlı devletin kuruluş tarihi. Peki, anlı tarihimizi ne kadar biliyoruz? on zamanlarda tarihimizi anlatan izilere ilgi nasıl? Bu dizilerde anlatanlar ne kadar sağlıklı? İşte sokaın değerlendirmesi; levlüdiye Karaman (42-Ev lamım): Bir bilgim yok. Tarihle izla ilgilenmiyorum. Eşim daha ilgilidir bu konuda. Evde anlatır, ndan duyduklarımla yetiniyorum esem yalan olmaz. Osmanlı döeminde yaşamak isterdim. Tarih izileri izlerim Muhteşem Yüzyıl izisini çok izledim; hatta hiç kaırmazdım. Ama tarihimiz bu değil. Sunuün bilincindeyim. Muhteşem üzyıl dizisi genelde haremiyle ön landaydı. Onun için tarihi diziden ğrenmeyi de doğru bulmuyorum. )kullarda verilen tarih dersleri yeisiz. Daha çok tanıtabilirler. Görel anlatım yapılsın çocuklarımız aten okumak istemiyor. En azman eğlenceli hale getirip bu şekilde ilgilendirebilirler. erdi Üstün (22-Saatçi): Bu gün Osmanlı Devleti nin kuruluş yıldönümü olduğunu bilmiyordum. O dönemde yaşamak isterdim. Tarih yazılmış neden yaşamak istemeyim ki. Tarihime yeterince hakim olduğumu düşünüyorum. Araştırmalar yapıyorum. Merak ediyorum. Okullarda verilen tarih dersleri yeterli. Tarih dizisi izlemem, televizyondan tarihimi öğrenmek bana mantıklı gelmiyor. Yeterli olabilir; ama hikayeleştiriliyor. Sonuçta olduğu gibi anlatılsa daha iyi olur. Songül Karabacak (40-Ev Hanımı): Kuruluş yıldönümü olduğunu bilmiyordum. Tarih bilgim çok azdır. 
Zaten biz yaşadığımız dönemde tarih yazıyoruz. Osmanlı Dönemi nde yaşamak istemezdim. Sebebini bilmiyorum; ama hayatımdan memnunum, dönemden de memnunum. Dizileri takip etmiyorum. Ama mutlaka dizilerde tarihimiz doğru yansıtılıyor ki insanlar sürekli takip ediyor. Benim televizyonla pek aram yoktur. Ertuğrul Şahin (47-Çalışmıyor): Kuruluş yıldönümü olduğunu bilmiyordum. Sizden öğrendim. O dönemde yaşamak isterdim. Tarih sonuçta merak ederim. Tarihle ilgili çok bilgim yok. Okumadım, zaten şartlar el vermedi. Okullarda verilen eğitim yeterli değil. Örnek vermek gerekirse; 20 yaşında oğlum var Atatürk ün doğum yılını soruyorum yüzüme bakıyor. Verilen eğitim belli. Konu belirliyorlar onun dışına çıkmıyorlar. Daha fazla bilgi verilebilir. Tabi gençlerimizde de suç var bize baksınlar tarihimizi bilmiyoruz. Onlar araştırma yapsınlar her gün internette geziyorlar faydasız bir şeye bakacaklarına ecdatlarını okusunlar. Tarih dizlerini izlerim. Ama doğru yansıtılıyor mu orasını bilmiyorum sadece izleyiciyim. Ama önceden Süleyman Şah ı duyardım. Büyüklerimiz anlatırdı bunu diziden teyit ettim mesela. Ahmet Efe (22-Muhasebeci): Kuruluş yıldönümü olduğuyla ilgili bir bilgim yok. O dönemde yaşamak isterdim. Aldığımız bilgiler sonucunda illa ki bir özenme oluyor. Tam anlamıyla tarih bilgisine sahip olduğumu düşünmüyorum. Tarihe merakım var aslında; ama çok kısıtlı araştırma yapıyorum. Okullarda verilen tarih dersi yeterli değil. Çünkü şuradan birkaç çocuğu çevirip sorsanız size yeterli bilgi vermez. Veremez onun da bilgisi yok sonuçta. Zaten kısıtlı bilgiler veriliyor. Tarih dizilerini kılıç kalkan kuşanıp izliyorum. Doğru yansıtılıyor bundan dolayı da biraz insanlar tarihini öğrenmeye başladı desek yalan olmaz. Bu ne kadar doğru derseniz de bilgiyi doğru verdikten sonra tabi diziden de tarih öğrenilebilir. Mehmet Ak (28-Satış Danışmanı): Kuruluşunun bugün olduğunu bilmiyordum. O dönemde yaşamak isterdim. Yeterli bilgim yok bence kim tarihi tam anlamıyla öğrenebilir ki zaten. 
Ama tabi tarih kitapları okuyorum, araştırıyorum. Okullarda verilen tarih derslerini yeterli bulmuyorum; ama daha fazla neler yapılabilir, tarih küçüklere nasıl anlatılır bilmiyorum tek bildiğim yeterli olmadığı. Tarih dizileri gerçeği yüzde 75 yansıtıyor. Bu konuda araştırma yaptım yüzeysel anlatılıyor; fakat yine de bilgi edinilebilecek diziler. En azından rutinleşmiş dizi konularından uzak. Aile ile rahat rahat izleyebilirsin. Hasan Çalık (65-Emekli): Kuruluş yıldönümü olduğunu biliyorum. Araştırma yaparım. O dönemde yaşamak istemezdim Cumhuriyet döneminde yaşamayı daha çok isterdim. Okullarda verilen dersler yeterli. Film ya da dizi okumak yerine kitap okumayı tercih ederim. Bir insan ancak kitap okuyarak aydınlanabilir. Bu şekilde kendini geliştirebilir. Bir ömre ne kadar kitap sığdırırsan o kadar aydın bir insan olursun. Konusu fark etmez ister tarih olsun, ister roman okumak her zaman kazanç sağlar. Bir diziden tarihi ne kadar yeterli öğrenebilirsin ki ya da ne kadar doğru anlatılabilir. Bence diziyi bırakıp kitaplara yönelsinler. Nuray Çelik' } ```

### Data Fields

- **category**: Indicates the category the news text belongs to ("kültürsanat" (0), "ekonomi" (1), "siyaset" (2), "eğitim" (3), "dünya" (4), "spor" (5), "teknoloji" (6), "magazin" (7), "sağlık" (8), "gündem" (9)).
- **content**: Contains the text of the news article.

### Data Splits

The data is split into training and test sets, organized as follows:

|            |   train |   test |
|------------|--------:|-------:|
| data split | 218,880 | 54,721 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

Over 270,000 news articles were downloaded from printed media and news websites between 2010 and 2017 by Interpress (https://www.interpress.com/), a media monitoring company. This data collection, compiled from print media and internet news, is presented in its raw form.
For this reason, it should be used with careful pre-processing to handle the various OCR errors and typos. #### Who are the source language producers? Turkish printed news sources and online news sites. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information https://www.interpress.com/ ### Contributions Thanks to [@basakbuluz](https://github.com/basakbuluz) & [@yavuzkomecoglu](https://github.com/yavuzkomecoglu) & [@serdarakyol](https://github.com/serdarakyol/) for adding this dataset.
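As a small supplement to the label scheme described in the Data Fields section above, the sketch below spells out the id-to-name mapping for the 10-class configuration. The category names and ids come from the card itself; the helper function and its name are illustrative, not part of the dataset.

```python
# Category id -> name mapping for the 10-class configuration,
# as listed in the "Data Fields" section of this card.
CATEGORY_NAMES = [
    "kültürsanat",  # 0
    "ekonomi",      # 1
    "siyaset",      # 2
    "eğitim",       # 3
    "dünya",        # 4
    "spor",         # 5
    "teknoloji",    # 6
    "magazin",      # 7
    "sağlık",       # 8
    "gündem",       # 9
]

def category_name(label_id: int) -> str:
    """Map an integer class label to its Turkish category name."""
    if not 0 <= label_id < len(CATEGORY_NAMES):
        raise ValueError(f"unknown label id: {label_id}")
    return CATEGORY_NAMES[label_id]

print(category_name(5))  # -> spor
```

This mirrors the `class_label` feature declared in the dataset metadata, so a model's integer predictions can be reported with their human-readable names.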
interpress_news_category_tr_lite
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|interpress_news_category_tr", "language:tr", "license:unknown", "news-category-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["tr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|interpress_news_category_tr"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Interpress Turkish News Category Dataset (270K - Lite Version)", "tags": ["news-category-classification"], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "category", "dtype": {"class_label": {"names": {"0": "k\u00fclt\u00fcrsanat", "1": "ekonomi", "2": "siyaset", "3": "e\u011fitim", "4": "d\u00fcnya", "5": "spor", "6": "teknoloji", "7": "magazin", "8": "sa\u011fl\u0131k", "9": "g\u00fcndem"}}}}], "config_name": "270k_10class", "splits": [{"name": "train", "num_bytes": 721110711, "num_examples": 218880}, {"name": "test", "num_bytes": 179348267, "num_examples": 54721}], "download_size": 342920336, "dataset_size": 900458978}}
2024-01-18T11:06:44+00:00
[]
[ "tr" ]
TAGS #task_categories-text-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|interpress_news_category_tr #language-Turkish #license-unknown #news-category-classification #region-us
Dataset Card for Interpress Turkish News Category Dataset (270K - Lite Version) =============================================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Interpress * Point of Contact: Yavuz Komecoglu ### Dataset Summary Turkish News Category Dataset (270K - Lite Version) is a Turkish news data set consisting of 273,601 news items in 10 categories ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem"), compiled from printed media and news websites between 2010 and 2017 by the Interpress (URL) media monitoring company. It has been rearranged to be easily separable, with fewer classes. ### Supported Tasks and Leaderboards ### Languages The dataset is based on Turkish. Dataset Structure ----------------- ### Data Instances A text classification dataset with 10 different news categories. Here is an example from the dataset: ### Data Fields * category : Indicates to which category the news text belongs. (Such as "kültürsanat" (0), "ekonomi" (1), "siyaset" (2), "eğitim" (3), "dünya" (4), "spor" (5), "teknoloji" (6), "magazin" (7), "sağlık" (8), "gündem" (9)) * content : Contains the text of the news. ### Data Splits The data is split into training and test sets.
The split is organized as follows. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Over 270,000 news items were collected from printed media and news websites between 2010 and 2017 by the Interpress (URL) media monitoring company. This collection, compiled from print media and internet news, is presented in its raw form. For this reason, it should be used with careful pre-processing to handle the various OCR errors and typos. #### Who are the source language producers? Turkish printed news sources and online news sites. ### Annotations The dataset does not contain any additional annotations. #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information URL ### Contributions Thanks to @basakbuluz & @yavuzkomecoglu & @serdarakyol for adding this dataset.
[ "### Dataset Summary\n\n\nTurkish News Category Dataset (270K - Lite Version) is a Turkish news data set consisting of 273601 news in 10 categories (\"kültürsanat\", \"ekonomi\", \"siyaset\", \"eğitim\", \"dünya\", \"spor\", \"teknoloji\", \"magazin\", \"sağlık\", \"gündem\"), compiled from printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. It has been rearranged as easily separable and with fewer classes.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is based on Turkish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA text classification dataset with 10 different news category.\n\n\nHere is an example from the dataset:", "### Data Fields\n\n\n* category : Indicates to which category the news text belongs.\n(Such as \"kültürsanat\" (0), \"ekonomi\" (1), \"siyaset\" (2), \"eğitim\" (3), \"dünya\" (4), \"spor\" (5), \"teknoloji\" (6), \"magazin\" (7), \"sağlık\" (8), \"gündem\" (9))\n* content : Contains the text of the news.", "### Data Splits\n\n\nThe data is split into a training and testing. The split is organized as the following\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDownloaded over 270,000 news from the printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. This data collection compiled from print media and internet news is presented in its raw form. 
For this reason, it is appropriate to use it with careful pre-processing steps regarding various OCR errors and typos.", "#### Who are the source language producers?\n\n\nTurkish printed news sources and online news sites.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nURL", "### Contributions\n\n\nThanks to @basakbuluz & @yavuzkomecoglu & @serdarakyol for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|interpress_news_category_tr #language-Turkish #license-unknown #news-category-classification #region-us \n", "### Dataset Summary\n\n\nTurkish News Category Dataset (270K - Lite Version) is a Turkish news data set consisting of 273601 news in 10 categories (\"kültürsanat\", \"ekonomi\", \"siyaset\", \"eğitim\", \"dünya\", \"spor\", \"teknoloji\", \"magazin\", \"sağlık\", \"gündem\"), compiled from printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. It has been rearranged as easily separable and with fewer classes.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is based on Turkish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA text classification dataset with 10 different news category.\n\n\nHere is an example from the dataset:", "### Data Fields\n\n\n* category : Indicates to which category the news text belongs.\n(Such as \"kültürsanat\" (0), \"ekonomi\" (1), \"siyaset\" (2), \"eğitim\" (3), \"dünya\" (4), \"spor\" (5), \"teknoloji\" (6), \"magazin\" (7), \"sağlık\" (8), \"gündem\" (9))\n* content : Contains the text of the news.", "### Data Splits\n\n\nThe data is split into a training and testing. The split is organized as the following\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDownloaded over 270,000 news from the printed media and news websites between 2010 and 2017 by the Interpress (URL media monitoring company. This data collection compiled from print media and internet news is presented in its raw form. 
For this reason, it is appropriate to use it with careful pre-processing steps regarding various OCR errors and typos.", "#### Who are the source language producers?\n\n\nTurkish printed news sources and online news sites.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nURL", "### Contributions\n\n\nThanks to @basakbuluz & @yavuzkomecoglu & @serdarakyol for adding this dataset." ]
a117afac97d4e30e43a96f76984b4e3e67891b10
# Dataset Card for IRC Disentanglement ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) - [Acknowledgments](#acknowledgments) ## Dataset Description - **Homepage:** https://jkk.name/irc-disentanglement/ - **Repository:** https://github.com/jkkummerfeld/irc-disentanglement/tree/master/data - **Paper:** https://aclanthology.org/P19-1374/ - **Leaderboard:** NA - **Point of Contact:** jkummerf@umich.edu ### Dataset Summary Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset contains 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. Note: the GitHub repository for the dataset also contains several useful tools for: - Conversion (e.g.
extracting conversations from graphs) - Evaluation - Preprocessing - Word embeddings trained on the full Ubuntu logs in 2018 ### Supported Tasks and Leaderboards Conversational Disentanglement ### Languages English (en) ## Dataset Structure ### Data Instances For Ubuntu: data["train"][1050] ``` { 'ascii': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)", 'connections': [1048, 1054, 1055, 1072, 1073], 'date': '2004-12-25', 'id': 1050, 'raw': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)", 'tokenized': "<s> ( also , i 'm guessing that this is n't a good place to report minor but annoying bugs ... what is ?) </s>" } ``` For Channel_two: data["train"][50] ``` { 'ascii': "[01:04] <Felicia> Chanel: i don't know off hand sorry", 'connections': [49, 53], 'id': 50, 'raw': "[01:04] <Felicia> Chanel: i don't know off hand sorry", 'tokenized': "<s> <user> : i do n't know off hand sorry </s>" } ``` ### Data Fields 'id' : The id of the message; this is the value that would be in the 'connections' of associated messages. 'raw' : The original message from the IRC log, as downloaded. 'ascii' : The raw message converted to ascii (unconvertible characters are replaced with a special word). 'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols. 'connections' : The indices of linked messages. (only ubuntu) 'date' : The date the messages are from. The labelling for each date only starts after the first 1000 messages of that date. ### Data Splits The dataset has 4 parts: | Part | Number of Annotated Messages | | ------------- | ------------------------------------------- | | Train | 67,463 | | Dev | 2,500 | | Test | 5,000 | | Channel 2 | 2,600 | ## Dataset Creation ### Curation Rationale IRC is a synchronous chat setting with a long history of use.
Several channels log all messages and make them publicly available. The Ubuntu channel is particularly heavily used and has been the subject of several academic studies. Data was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users). For full details, see the [annotation information page](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/data/READ.history.md). ### Source Data #### Initial Data Collection and Normalization Data was collected from the Ubuntu IRC channel logs, which are publicly available at [https://irclogs.ubuntu.com/](https://irclogs.ubuntu.com/). The raw files are included, as well as two other versions: - ASCII, converted using the script [make_txt.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/make-txt.py) - Tok, tokenised text with rare words replaced by UNK using the script [dstc8-tokenise.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/dstc8-tokenise.py) The raw channel two data is from prior work [(Elsner and Charniak, 2008)](https://www.aclweb.org/anthology/P08-1095.pdf). #### Who are the source language producers? The text is from a large group of internet users asking questions and providing answers related to Ubuntu. ### Annotations #### Annotation process The data is expert annotated with: - Training, one annotation per line in general, a small portion is double-annotated and adjudicated - Dev, Channel 2, double annotated and adjudicated - Test, triple annotated and adjudicated | Part | Annotators | Adjudication? | | ------------- | --------------- | ------------------------------------- | | Train | 1 or 2 per file | For files with 2 annotators (only 10) | | Dev | 2 | Yes | | Test | 3 | Yes | | Channel 2 | 2 | Yes | #### Who are the annotators? Students and a postdoc at the University of Michigan.
Everyone involved went through a training process with feedback to learn the annotation guidelines. ### Personal and Sensitive Information No content is removed or obfuscated. There is probably personal information in the dataset from users. ## Considerations for Using the Data ### Social Impact of Dataset The raw data is already available online and the annotations do not significantly provide additional information that could have a direct social impact. ### Discussion of Biases The data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort. Given that users are only identified by their self-selected usernames, it is difficult to know more about the authors. ### Other Known Limitations Being focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication. Those conventions may not apply to other channels, or beyond IRC. ## Additional Information ### Dataset Curators Jonathan K. Kummerfeld ### Licensing Information Creative Commons Attribution 4.0 ### Citation Information ``` @inproceedings{kummerfeld-etal-2019-large, title = "A Large-Scale Corpus for Conversation Disentanglement", author = "Kummerfeld, Jonathan K. and Gouravajhala, Sai R. and Peper, Joseph J. 
and Athreya, Vignesh and Gunasekara, Chulaka and Ganhotra, Jatin and Patel, Siva Sankalp and Polymenakos, Lazaros C and Lasecki, Walter", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1374", doi = "10.18653/v1/P19-1374", pages = "3846--3856", arxiv = "https://arxiv.org/abs/1810.11118", software = "https://jkk.name/irc-disentanglement", data = "https://jkk.name/irc-disentanglement", abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89{\%} of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.", } ``` ### Contributions Thanks to [@dhruvjoshi1998](https://github.com/dhruvjoshi1998) for adding this dataset. Thanks to [@jkkummerfeld](https://github.com/jkkummerfeld) for improvements to the documentation. ### Acknowledgments This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM.
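To make the 'connections' field described above concrete, here is an illustrative sketch that groups messages into conversations by treating reply links as undirected edges and extracting connected components. Only the field names ('id', 'connections') follow the card; the helper function and the toy messages are made up for illustration and are not part of the released tools.

```python
from collections import defaultdict


def group_conversations(messages):
    """Group message ids into conversations by following 'connections'
    links as undirected edges (i.e. connected components of the reply graph)."""
    adj = defaultdict(set)
    for msg in messages:
        for other in msg["connections"]:
            adj[msg["id"]].add(other)
            adj[other].add(msg["id"])
        _ = adj[msg["id"]]  # ensure isolated messages also get a node

    seen, conversations = set(), []
    for start in sorted(adj):
        if start in seen:
            continue
        stack, component = [start], []
        while stack:  # iterative depth-first search over the component
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(node)
            stack.extend(adj[node] - seen)
        conversations.append(sorted(component))
    return conversations


# Toy example mimicking the ubuntu config's fields:
msgs = [
    {"id": 1, "connections": []},
    {"id": 2, "connections": [1]},
    {"id": 3, "connections": []},
    {"id": 4, "connections": [3]},
]
print(group_conversations(msgs))  # -> [[1, 2], [3, 4]]
```

Note that the repository's own conversion tools (see the "useful tools" list in the summary) are the authoritative way to extract conversations; this sketch only shows the idea.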
irc_disentangle
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "conversation-disentanglement", "arxiv:1810.11118", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "irc-disentanglement", "pretty_name": "IRC Disentanglement", "tags": ["conversation-disentanglement"], "dataset_info": [{"config_name": "ubuntu", "features": [{"name": "id", "dtype": "int32"}, {"name": "raw", "dtype": "string"}, {"name": "ascii", "dtype": "string"}, {"name": "tokenized", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "connections", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 56012854, "num_examples": 220616}, {"name": "validation", "num_bytes": 3081479, "num_examples": 12510}, {"name": "test", "num_bytes": 3919900, "num_examples": 15010}], "download_size": 118470210, "dataset_size": 63014233}, {"config_name": "channel_two", "features": [{"name": "id", "dtype": "int32"}, {"name": "raw", "dtype": "string"}, {"name": "ascii", "dtype": "string"}, {"name": "tokenized", "dtype": "string"}, {"name": "connections", "sequence": "int32"}], "splits": [{"name": "dev", "num_bytes": 197505, "num_examples": 1001}, {"name": "pilot", "num_bytes": 92663, "num_examples": 501}, {"name": "test", "num_bytes": 186823, "num_examples": 1001}, {"name": "pilot_dev", "num_bytes": 290175, "num_examples": 1501}, {"name": "all_", "num_bytes": 496524, "num_examples": 2602}], "download_size": 118470210, "dataset_size": 1263690}]}
2024-01-18T11:06:46+00:00
[ "1810.11118" ]
[ "en" ]
TAGS #task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #conversation-disentanglement #arxiv-1810.11118 #region-us
Dataset Card for IRC Disentanglement ==================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions + Acknowledgments Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: NA * Point of Contact: jkummerf@URL ### Dataset Summary Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset contains 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. Note: the GitHub repository for the dataset also contains several useful tools for: * Conversion (e.g. extracting conversations from graphs) * Evaluation * Preprocessing * Word embeddings trained on the full Ubuntu logs in 2018 ### Supported Tasks and Leaderboards Conversational Disentanglement ### Languages English (en) Dataset Structure ----------------- ### Data Instances For Ubuntu: data["train"][1050] For Channel\_two: data["train"][50] ### Data Fields 'id' : The id of the message; this is the value that would be in the 'connections' of associated messages. 'raw' : The original message from the IRC log, as downloaded. 'ascii' : The raw message converted to ascii (unconvertible characters are replaced with a special word).
'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols. 'connections' : The indices of linked messages. (only ubuntu) 'date' : The date the messages are from. The labelling for each date only starts after the first 1000 messages of that date. ### Data Splits The dataset has 4 parts: Dataset Creation ---------------- ### Curation Rationale IRC is a synchronous chat setting with a long history of use. Several channels log all messages and make them publicly available. The Ubuntu channel is particularly heavily used and has been the subject of several academic studies. Data was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users). For full details, see the annotation information page. ### Source Data #### Initial Data Collection and Normalization Data was collected from the Ubuntu IRC channel logs, which are publicly available at URL The raw files are included, as well as two other versions: * ASCII, converted using the script make\_txt.py * Tok, tokenised text with rare words replaced by UNK using the script URL The raw channel two data is from prior work (Elsner and Charniak, 2008). #### Who are the source language producers? The text is from a large group of internet users asking questions and providing answers related to Ubuntu. ### Annotations #### Annotation process The data is expert annotated with: * Training, one annotation per line in general, a small portion is double-annotated and adjudicated * Dev, Channel 2, double annotated and adjudicated * Test, triple annotated and adjudicated Part: Train, Annotators: 1 or 2 per file, Adjudication?: For files with 2 annotators (only 10) Part: Dev, Annotators: 2, Adjudication?: Yes Part: Test, Annotators: 3, Adjudication?: Yes Part: Channel 2, Annotators: 2, Adjudication?: Yes #### Who are the annotators? Students and a postdoc at the University of Michigan.
Everyone involved went through a training process with feedback to learn the annotation guidelines. ### Personal and Sensitive Information No content is removed or obfuscated. There is probably personal information in the dataset from users. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The raw data is already available online and the annotations do not significantly provide additional information that could have a direct social impact. ### Discussion of Biases The data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort. Given that users are only identified by their self-selected usernames, it is difficult to know more about the authors. ### Other Known Limitations Being focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication. Those conventions may not apply to other channels, or beyond IRC. Additional Information ---------------------- ### Dataset Curators Jonathan K. Kummerfeld ### Licensing Information Creative Commons Attribution 4.0 ### Contributions Thanks to @dhruvjoshi1998 for adding this dataset. Thanks to @jkkummerfeld for improvements to the documentation. ### Acknowledgments This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM.
[ "### Dataset Summary\n\n\nDisentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context.\n\n\nNote, the Github repository for the dataset also contains several useful tools for:\n\n\n* Conversion (e.g. extracting conversations from graphs)\n* Evaluation\n* Preprocessing\n* Word embeddings trained on the full Ubuntu logs in 2018", "### Supported Tasks and Leaderboards\n\n\nConversational Disentanglement", "### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor Ubuntu:\n\n\ndata[\"train\"][1050]\n\n\nFor Channel\\_two:\n\n\ndata[\"train\"][50]", "### Data Fields\n\n\n'id' : The id of the message, this is the value that would be in the 'connections' of associated messages.\n\n\n'raw' : The original message from the IRC log, as downloaded.\n\n\n'ascii' : The raw message converted to ascii (unconvertable characters are replaced with a special word).\n\n\n'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols.\n\n\n'connections' : The indices of linked messages.\n\n\n(only ubuntu) 'date' : The date the messages are from. 
The labelling for each date only start after the first 1000 messages of that date.", "### Data Splits\n\n\nThe dataset has 4 parts:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nIRC is a synchronous chat setting with a long history of use.\nSeveral channels log all messages and make them publicly available.\nThe Ubuntu channel is particularly heavily used and has been the subject of several academic studies.\n\n\nData was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users).\nFor full details, see the annotation information page.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData was collected from the Ubuntu IRC channel logs, which are publicly available at URL\nThe raw files are included, as well as two other versions:\n\n\n* ASCII, converted using the script make\\_txt.py\n* Tok, tokenised text with rare words replaced by UNK using the script URL\n\n\nThe raw channel two data is from prior work (Elsner and Charniak, 2008)].", "#### Who are the source language producers?\n\n\nThe text is from a large group of internet users asking questions and providing answers related to Ubuntu.", "### Annotations", "#### Annotation process\n\n\nThe data is expert annotated with:\n\n\n* Training, one annotation per line in general, a small portion is double-annotated and adjudicated\n* Dev, Channel 2, double annotated and adjudicated\n* Test, triple annotated and adjudicated\n\n\nPart: Train, Annotators: 1 or 2 per file, Adjudication?: For files with 2 annotators (only 10)\nPart: Dev, Annotators: 2, Adjudication?: Yes\nPart: Test, Annotators: 3, Adjudication?: Yes\nPart: Channel 2, Annotators: 2, Adjudication?: Yes", "#### Who are the annotators?\n\n\nStudents and a postdoc at the University of Michigan.\nEveryone involved went through a training process with feedback to learn the annotation guidelines.", "### Personal and Sensitive 
Information\n\n\nNo content is removed or obfuscated.\nThere is probably personal information in the dataset from users.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe raw data is already available online and the annotations do not significantly provide additional information that could have a direct social impact.", "### Discussion of Biases\n\n\nThe data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort.\nGiven that users are only identified by their self-selected usernames, it is difficult to know more about the authors.", "### Other Known Limitations\n\n\nBeing focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication.\nThose conventions may not apply to other channels, or beyond IRC.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nJonathan K. Kummerfeld", "### Licensing Information\n\n\nCreative Commons Attribution 4.0", "### Contributions\n\n\nThanks to @dhruvjoshi1998 for adding this dataset.\n\n\nThanks to @jkkummerfeld for improvements to the documentation.", "### Acknowledgments\n\n\nThis material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM." ]
[ "TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #conversation-disentanglement #arxiv-1810.11118 #region-us \n", "### Dataset Summary\n\n\nDisentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context.\n\n\nNote, the Github repository for the dataset also contains several useful tools for:\n\n\n* Conversion (e.g. extracting conversations from graphs)\n* Evaluation\n* Preprocessing\n* Word embeddings trained on the full Ubuntu logs in 2018", "### Supported Tasks and Leaderboards\n\n\nConversational Disentanglement", "### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor Ubuntu:\n\n\ndata[\"train\"][1050]\n\n\nFor Channel\\_two:\n\n\ndata[\"train\"][50]", "### Data Fields\n\n\n'id' : The id of the message, this is the value that would be in the 'connections' of associated messages.\n\n\n'raw' : The original message from the IRC log, as downloaded.\n\n\n'ascii' : The raw message converted to ascii (unconvertable characters are replaced with a special word).\n\n\n'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols.\n\n\n'connections' : The indices of linked messages.\n\n\n(only ubuntu) 'date' : The date the messages are from. 
The labelling for each date only starts after the first 1000 messages of that date.", "### Data Splits\n\n\nThe dataset has 4 parts:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nIRC is a synchronous chat setting with a long history of use.\nSeveral channels log all messages and make them publicly available.\nThe Ubuntu channel is particularly heavily used and has been the subject of several academic studies.\n\n\nData was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users).\nFor full details, see the annotation information page.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData was collected from the Ubuntu IRC channel logs, which are publicly available at URL\nThe raw files are included, as well as two other versions:\n\n\n* ASCII, converted using the script make\\_txt.py\n* Tok, tokenised text with rare words replaced by UNK using the script URL\n\n\nThe raw channel two data is from prior work (Elsner and Charniak, 2008).", "#### Who are the source language producers?\n\n\nThe text is from a large group of internet users asking questions and providing answers related to Ubuntu.", "### Annotations", "#### Annotation process\n\n\nThe data is expert annotated with:\n\n\n* Training, one annotation per line in general, a small portion is double-annotated and adjudicated\n* Dev, Channel 2, double annotated and adjudicated\n* Test, triple annotated and adjudicated\n\n\nPart: Train, Annotators: 1 or 2 per file, Adjudication?: For files with 2 annotators (only 10)\nPart: Dev, Annotators: 2, Adjudication?: Yes\nPart: Test, Annotators: 3, Adjudication?: Yes\nPart: Channel 2, Annotators: 2, Adjudication?: Yes", "#### Who are the annotators?\n\n\nStudents and a postdoc at the University of Michigan.\nEveryone involved went through a training process with feedback to learn the annotation guidelines.", "### Personal and Sensitive 
Information\n\n\nNo content is removed or obfuscated.\nThere is probably personal information in the dataset from users.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe raw data is already available online and the annotations do not significantly provide additional information that could have a direct social impact.", "### Discussion of Biases\n\n\nThe data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort.\nGiven that users are only identified by their self-selected usernames, it is difficult to know more about the authors.", "### Other Known Limitations\n\n\nBeing focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication.\nThose conventions may not apply to other channels, or beyond IRC.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nJonathan K. Kummerfeld", "### Licensing Information\n\n\nCreative Commons Attribution 4.0", "### Contributions\n\n\nThanks to @dhruvjoshi1998 for adding this dataset.\n\n\nThanks to @jkkummerfeld for improvements to the documentation.", "### Acknowledgments\n\n\nThis material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM." ]
8328d695f95d926c918ea529f5b5fe636f0872d1
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [IsiXhosa Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/312) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Martin Puttkammer](mailto:[email protected]) ### Dataset Summary The isiXhosa Ner Corpus is a Xhosa dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Xhosa language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Xhosa. ## Dataset Structure ### Data Instances A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. 
{'id': '0', 'ner_tags': [7, 8, 5, 6, 0], 'tokens': ['Injongo', 'ye-website', 'yaseMzantsi', 'Afrika', 'kukuvelisa'] } ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Xhosa. [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on the South African government domain and was crawled from gov.za websites. [More Information Needed] #### Who are the source language producers? The data was produced by writers of South African government websites - gov.za [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). 
See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{isixhosa_ner_corpus, author = { K. Podile and Roald Eiselen}, title = {NCHLT isiXhosa Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/312}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
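As an illustration (not part of the original card), the integer `ner_tags` in the instance above can be decoded with the tag list given under Data Fields. The snippet below is a minimal sketch; the `NER_TAGS` name is ours, only the tag order and the sample instance come from the card:

```python
# Tag order as listed in the "Data Fields" section of the card.
NER_TAGS = ["OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG",
            "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

# The sample data point from the "Data Instances" section.
example = {
    "id": "0",
    "ner_tags": [7, 8, 5, 6, 0],
    "tokens": ["Injongo", "ye-website", "yaseMzantsi", "Afrika", "kukuvelisa"],
}

# Pair each token with its decoded string label.
labeled = list(zip(example["tokens"], (NER_TAGS[i] for i in example["ner_tags"])))
print(labeled)
# [('Injongo', 'B-MISC'), ('ye-website', 'I-MISC'), ('yaseMzantsi', 'B-LOC'),
#  ('Afrika', 'I-LOC'), ('kukuvelisa', 'OUT')]
```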
isixhosa_ner_corpus
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:xh", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["xh"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "IsixhosaNerCorpus", "license_details": "Creative Commons Attribution 2.5 South Africa License", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "OUT", "1": "B-PERS", "2": "I-PERS", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "config_name": "isixhosa_ner_corpus", "splits": [{"name": "train", "num_bytes": 2414995, "num_examples": 6284}], "download_size": 14513302, "dataset_size": 2414995}}
2024-01-18T11:06:47+00:00
[]
[ "xh" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Xhosa #license-other #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: IsiXhosa Ner Corpus Homepage - Repository: - Paper: - Leaderboard: - Point of Contact: Martin Puttkammer ### Dataset Summary The isiXhosa Ner Corpus is a Xhosa dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and crawled from URL websites. It was created to support the NER task for the Xhosa language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards ### Languages The language supported is Xhosa. ## Dataset Structure ### Data Instances A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. {'id': '0', 'ner_tags': [7, 8, 5, 6, 0], 'tokens': ['Injongo', 'ye-website', 'yaseMzantsi', 'Afrika', 'kukuvelisa'] } ### Data Fields - 'id': id of the sample - 'tokens': the tokens of the example text - 'ner_tags': the NER tags of each token The NER tags correspond to this list: The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. 
## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Xhosa. ### Source Data #### Initial Data Collection and Normalization The data is based on the South African government domain and was crawled from URL websites. #### Who are the source language producers? The data was produced by writers of South African government websites - URL ### Annotations #### Annotation process #### Who are the annotators? The data was annotated during the NCHLT text resource development project. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: more information ### Licensing Information The data is under the Creative Commons Attribution 2.5 South Africa License ### Contributions Thanks to @yvonnegitau for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: IsiXhosa Ner Corpus Homepage\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Martin Puttkammer", "### Dataset Summary\n\nThe isiXhosa Ner Corpus is a Xhosa dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and crawled from URL websites. It was created to support the NER task for the Xhosa language. The dataset uses CoNLL shared task annotation standards.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Xhosa.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences separated by an empty line, with tab-separated tokens and tags. \n{'id': '0',\n 'ner_tags': [7, 8, 5, 6, 0],\n 'tokens': ['Injongo', 'ye-website', 'yaseMzantsi', 'Afrika', 'kukuvelisa']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). 
(OUT) is used for tokens not considered part of any named entity.", "### Data Splits\n\nThe data was not split.", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to a new language, Xhosa.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data is based on the South African government domain and was crawled from URL websites.", "#### Who are the source language producers?\n\nThe data was produced by writers of South African government websites - URL", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe data was annotated during the NCHLT text resource development project.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).\n\nSee: more information", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 2.5 South Africa License", "### Contributions\n\nThanks to @yvonnegitau for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Xhosa #license-other #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: IsiXhosa Ner Corpus Homepage\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Martin Puttkammer", "### Dataset Summary\n\nThe isiXhosa Ner Corpus is a Xhosa dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and crawled from URL websites. It was created to support the NER task for the Xhosa language. The dataset uses CoNLL shared task annotation standards.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Xhosa.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences separated by an empty line, with tab-separated tokens and tags. 
\n{'id': '0',\n 'ner_tags': [7, 8, 5, 6, 0],\n 'tokens': ['Injongo', 'ye-website', 'yaseMzantsi', 'Afrika', 'kukuvelisa']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity.", "### Data Splits\n\nThe data was not split.", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to a new language, Xhosa.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data is based on the South African government domain and was crawled from URL websites.", "#### Who are the source language producers?\n\nThe data was produced by writers of South African government websites - URL", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe data was annotated during the NCHLT text resource development project.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).\n\nSee: more information", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 2.5 South Africa License", "### Contributions\n\nThanks to @yvonnegitau for adding this dataset." ]
4f9fee744c6d5ca9fcc921c621386a81ccfa2837
# Dataset Card for Isizulu Ner Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Isizulu Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/319) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Martin Puttkammer](mailto:[email protected]) ### Dataset Summary The isizulu Ner Corpus is a Zulu dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Zulu language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Zulu. ## Dataset Structure ### Data Instances A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. 
{'id': '0', 'ner_tags': [7, 8, 0, 0, 0], 'tokens': ['Lesi', 'sigaba', 'se-website', ',', 'esikhonjiswe'] } ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Zulu. [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on the South African government domain and was crawled from gov.za websites. #### Who are the source language producers? The data was produced by writers of South African government websites - gov.za ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). 
See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{isizulu_ner_corpus, author = {A.N. Manzini and Roald Eiselen}, title = {NCHLT isiZulu Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/319}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
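For illustration (not part of the original card), the raw annotation layout described under Data Instances — sentences separated by an empty line, one token per line with a tab-separated tag — can be parsed with a short helper. This is a minimal sketch under that assumption; the `parse_conll` name and the inline sample text are ours, built from the example instance in the card:

```python
def parse_conll(text):
    """Parse blank-line-separated sentences of tab-separated token/tag pairs."""
    sentences, tokens, tags = [], [], []
    for line in text.splitlines():
        if not line.strip():  # blank line ends the current sentence
            if tokens:
                sentences.append({"tokens": tokens, "ner_tags": tags})
                tokens, tags = [], []
            continue
        token, tag = line.split("\t")
        tokens.append(token)
        tags.append(tag)
    if tokens:  # flush a trailing sentence with no final blank line
        sentences.append({"tokens": tokens, "ner_tags": tags})
    return sentences

# Illustrative sample mirroring the card's example instance.
sample = "Lesi\tB-MISC\nsigaba\tI-MISC\nse-website\tOUT\n,\tOUT\nesikhonjiswe\tOUT\n"
print(parse_conll(sample))
# [{'tokens': ['Lesi', 'sigaba', 'se-website', ',', 'esikhonjiswe'],
#   'ner_tags': ['B-MISC', 'I-MISC', 'OUT', 'OUT', 'OUT']}]
```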
isizulu_ner_corpus
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:zu", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["zu"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Isizulu Ner Corpus", "license_details": "Creative Commons Attribution 2.5 South Africa", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "OUT", "1": "B-PERS", "2": "I-PERS", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "config_name": "isizulu_ner_corpus", "splits": [{"name": "train", "num_bytes": 4038876, "num_examples": 10956}], "download_size": 25097584, "dataset_size": 4038876}}
2024-01-18T11:06:49+00:00
[]
[ "zu" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Zulu #license-other #region-us
# Dataset Card for Isizulu Ner Corpus ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Isizulu Ner Corpus Homepage - Repository: - Paper: - Leaderboard: - Point of Contact: Martin Puttkammer ### Dataset Summary The isizulu Ner Corpus is a Zulu dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and crawled from URL websites. It was created to support the NER task for the Zulu language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards ### Languages The language supported is Zulu. ## Dataset Structure ### Data Instances A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. {'id': '0', 'ner_tags': [7, 8, 0, 0, 0], 'tokens': ['Lesi', 'sigaba', 'se-website', ',', 'esikhonjiswe'] } ### Data Fields - 'id': id of the sample - 'tokens': the tokens of the example text - 'ner_tags': the NER tags of each token The NER tags correspond to this list: The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. 
## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Zulu. ### Source Data #### Initial Data Collection and Normalization The data is based on the South African government domain and was crawled from URL websites. #### Who are the source language producers? The data was produced by writers of South African government websites - URL ### Annotations #### Annotation process #### Who are the annotators? The data was annotated during the NCHLT text resource development project. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: more information ### Licensing Information The data is under the Creative Commons Attribution 2.5 South Africa License ### Contributions Thanks to @yvonnegitau for adding this dataset.
[ "# Dataset Card for Isizulu Ner Corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Isizulu Ner Corpus Homepage\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Martin Puttkammer", "### Dataset Summary\n\nThe isizulu Ner Corpus is a Zulu dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and crawled from URL websites. It was created to support the NER task for the Zulu language. The dataset uses CoNLL shared task annotation standards.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Zulu.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences separated by an empty line, with tab-separated tokens and tags. \n{'id': '0',\n 'ner_tags': [7, 8, 0, 0, 0],\n 'tokens': ['Lesi', 'sigaba', 'se-website', ',', 'esikhonjiswe']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). 
(OUT) is used for tokens not considered part of any named entity.", "### Data Splits\n\nThe data was not split.", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to a new language, Zulu.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data is based on the South African government domain and was crawled from URL websites.", "#### Who are the source language producers?\n\nThe data was produced by writers of South African government websites - URL", "### Annotations", "#### Annotation process", "#### Who are the annotators?\nThe data was annotated during the NCHLT text resource development project.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).\n\nSee: more information", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 2.5 South Africa License", "### Contributions\n\nThanks to @yvonnegitau for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Zulu #license-other #region-us \n", "# Dataset Card for Isizulu Ner Corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Isizulu Ner Corpus Homepage\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Martin Puttkammer", "### Dataset Summary\n\nThe isizulu Ner Corpus is a Zulu dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African goverment domain and crawled from URL websites. It was created to support NER task for Zulu language. The dataset uses CoNLL shared task annotation standards.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Zulu.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences seperated by empty line and tab-seperated tokens and tags. 
\n{'id': '0',\n 'ner_tags': [7, 8, 0, 0, 0],\n 'tokens': ['Lesi', 'sigaba', 'se-website', ',', 'esikhonjiswe']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity.", "### Data Splits\n\nThe data was not split.", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to new language - zulu.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data is based on South African government domain and was crawled from URL websites.", "#### Who are the source language producers?\n\nThe data was produced by writers of South African government websites - URL", "### Annotations", "#### Annotation process", "#### Who are the annotators?\nThe data was annotated during the NCHLT text resource development project.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).\n\nSee: more information", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 2.5 South Africa License", "### Contributions\n\nThanks to @yvonnegitau for adding this dataset." ]
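The B/I tag scheme the card describes can be made concrete with a short decoding sketch. Note the card's actual integer-to-label list is elided ("The NER tags correspond to this list:" with no list), so the `LABELS` ordering below is purely an illustrative assumption, not the corpus's real mapping.

```python
# Decode integer NER tag ids into CoNLL-style string labels.
# CAUTION: this LABELS ordering is an assumed, illustrative mapping; the
# dataset card elides the real list, so do not rely on these indices.
LABELS = ["OUT", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_tags(ner_tags):
    """Map each integer tag id to its string label."""
    return [LABELS[t] for t in ner_tags]

# The data point shown in the card: tokens paired with their tag ids.
sample = {
    "id": "0",
    "ner_tags": [7, 8, 0, 0, 0],
    "tokens": ["Lesi", "sigaba", "se-website", ",", "esikhonjiswe"],
}

for token, label in zip(sample["tokens"], decode_tags(sample["ner_tags"])):
    print(f"{token}\t{label}")
```

Under this assumed ordering, a B-prefixed label opens a phrase and an I-prefixed label continues it, matching the CoNLL convention described above; swap in the corpus's real label list once known.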
c18a4f81a47ae6fa079fe9d32db288ddde38451d
# Dataset Card for IWSLT 2017 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://sites.google.com/site/iwsltevaluation2017/TED-tasks](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Overview of the IWSLT 2017 Evaluation Campaign](https://aclanthology.org/2017.iwslt-1.1/) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 4.24 GB - **Size of the generated dataset:** 1.14 GB - **Total amount of disk used:** 5.38 GB ### Dataset Summary The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. 
As an unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### iwslt2017-ar-en - **Size of downloaded dataset files:** 27.75 MB - **Size of the generated dataset:** 58.74 MB - **Total amount of disk used:** 86.49 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"ar\": \"لقد طرت في \\\"القوات الجوية \\\" لمدة ثمان سنوات. والآن أجد نفسي مضطرا لخلع حذائي قبل صعود الطائرة!\", \"en\": \"I flew on Air ..." } ``` #### iwslt2017-de-en - **Size of downloaded dataset files:** 16.76 MB - **Size of the generated dataset:** 44.43 MB - **Total amount of disk used:** 61.18 MB An example of 'train' looks as follows. ``` { "translation": { "de": "Es ist mir wirklich eine Ehre, zweimal auf dieser Bühne stehen zu dürfen. Tausend Dank dafür.", "en": "And it's truly a great honor to have the opportunity to come to this stage twice; I'm extremely grateful." } } ``` #### iwslt2017-en-ar - **Size of downloaded dataset files:** 29.33 MB - **Size of the generated dataset:** 58.74 MB - **Total amount of disk used:** 88.07 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "translation": "{\"ar\": \"لقد طرت في \\\"القوات الجوية \\\" لمدة ثمان سنوات. والآن أجد نفسي مضطرا لخلع حذائي قبل صعود الطائرة!\", \"en\": \"I flew on Air ..." } ``` #### iwslt2017-en-de - **Size of downloaded dataset files:** 16.76 MB - **Size of the generated dataset:** 44.43 MB - **Total amount of disk used:** 61.18 MB An example of 'validation' looks as follows.
``` { "translation": { "de": "Die nächste Folie, die ich Ihnen zeige, ist eine Zeitrafferaufnahme was in den letzten 25 Jahren passiert ist.", "en": "The next slide I show you will be a rapid fast-forward of what's happened over the last 25 years." } } ``` #### iwslt2017-en-fr - **Size of downloaded dataset files:** 27.69 MB - **Size of the generated dataset:** 51.24 MB - **Total amount of disk used:** 78.94 MB An example of 'validation' looks as follows. ``` { "translation": { "en": "But this understates the seriousness of this particular problem because it doesn't show the thickness of the ice.", "fr": "Mais ceci tend à amoindrir le problème parce qu'on ne voit pas l'épaisseur de la glace." } } ``` ### Data Fields The data fields are the same among all splits. #### iwslt2017-ar-en - `translation`: a multilingual `string` variable, with possible languages including `ar`, `en`. #### iwslt2017-de-en - `translation`: a multilingual `string` variable, with possible languages including `de`, `en`. #### iwslt2017-en-ar - `translation`: a multilingual `string` variable, with possible languages including `en`, `ar`. #### iwslt2017-en-de - `translation`: a multilingual `string` variable, with possible languages including `en`, `de`. #### iwslt2017-en-fr - `translation`: a multilingual `string` variable, with possible languages including `en`, `fr`. 
### Data Splits | name |train |validation|test| |---------------|-----:|---------:|---:| |iwslt2017-ar-en|231713| 888|8583| |iwslt2017-de-en|206112| 888|8079| |iwslt2017-en-ar|231713| 888|8583| |iwslt2017-en-de|206112| 888|8079| |iwslt2017-en-fr|232825| 890|8597| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Creative Commons BY-NC-ND See the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy). ### Citation Information ``` @inproceedings{cettolo-etal-2017-overview, title = "Overview of the {IWSLT} 2017 Evaluation Campaign", author = {Cettolo, Mauro and Federico, Marcello and Bentivogli, Luisa and Niehues, Jan and St{\"u}ker, Sebastian and Sudoh, Katsuhito and Yoshino, Koichiro and Federmann, Christian}, booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation", month = dec # " 14-15", year = "2017", address = "Tokyo, Japan", publisher = "International Workshop on Spoken Language Translation", url = "https://aclanthology.org/2017.iwslt-1.1", pages = "2--14", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@Narsil](https://github.com/Narsil) for adding this dataset.
iwslt2017
[ "task_categories:translation", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:ar", "language:de", "language:en", "language:fr", "language:it", "language:ja", "language:ko", "language:nl", "language:ro", "language:zh", "license:cc-by-nc-nd-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["ar", "de", "en", "fr", "it", "ja", "ko", "nl", "ro", "zh"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "iwslt-2017", "pretty_name": "IWSLT 2017", "dataset_info": [{"config_name": "iwslt2017-en-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "it"]}}}], "splits": [{"name": "train", "num_bytes": 46647925, "num_examples": 231619}, {"name": "test", "num_bytes": 305246, "num_examples": 1566}, {"name": "validation", "num_bytes": 200023, "num_examples": 929}], "download_size": 329391132, "dataset_size": 47153194}, {"config_name": "iwslt2017-en-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 42843933, "num_examples": 237240}, {"name": "test", "num_bytes": 311646, "num_examples": 1777}, {"name": "validation", "num_bytes": 197814, "num_examples": 1003}], "download_size": 329391132, "dataset_size": 43353393}, {"config_name": "iwslt2017-en-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 44129950, "num_examples": 220538}, {"name": "test", "num_bytes": 316790, "num_examples": 1678}, {"name": "validation", "num_bytes": 205028, "num_examples": 914}], "download_size": 329391132, "dataset_size": 44651768}, {"config_name": "iwslt2017-it-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "en"]}}}], "splits": [{"name": "train", "num_bytes": 46647925, "num_examples": 231619}, {"name": "test", "num_bytes": 305246, "num_examples": 1566}, {"name": "validation", "num_bytes": 200023, "num_examples": 929}], "download_size": 329391132, "dataset_size": 47153194}, {"config_name": 
"iwslt2017-it-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 43033168, "num_examples": 233415}, {"name": "test", "num_bytes": 309725, "num_examples": 1669}, {"name": "validation", "num_bytes": 197774, "num_examples": 1001}], "download_size": 329391132, "dataset_size": 43540667}, {"config_name": "iwslt2017-it-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["it", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 44485169, "num_examples": 217551}, {"name": "test", "num_bytes": 314974, "num_examples": 1643}, {"name": "validation", "num_bytes": 204989, "num_examples": 914}], "download_size": 329391132, "dataset_size": 45005132}, {"config_name": "iwslt2017-nl-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "en"]}}}], "splits": [{"name": "train", "num_bytes": 42843933, "num_examples": 237240}, {"name": "test", "num_bytes": 311646, "num_examples": 1777}, {"name": "validation", "num_bytes": 197814, "num_examples": 1003}], "download_size": 329391132, "dataset_size": 43353393}, {"config_name": "iwslt2017-nl-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "it"]}}}], "splits": [{"name": "train", "num_bytes": 43033168, "num_examples": 233415}, {"name": "test", "num_bytes": 309725, "num_examples": 1669}, {"name": "validation", "num_bytes": 197774, "num_examples": 1001}], "download_size": 329391132, "dataset_size": 43540667}, {"config_name": "iwslt2017-nl-ro", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["nl", "ro"]}}}], "splits": [{"name": "train", "num_bytes": 41338738, "num_examples": 206920}, {"name": "test", "num_bytes": 320952, "num_examples": 1680}, {"name": "validation", "num_bytes": 202380, "num_examples": 913}], "download_size": 329391132, "dataset_size": 41862070}, {"config_name": "iwslt2017-ro-en", "features": [{"name": 
"translation", "dtype": {"translation": {"languages": ["ro", "en"]}}}], "splits": [{"name": "train", "num_bytes": 44129950, "num_examples": 220538}, {"name": "test", "num_bytes": 316790, "num_examples": 1678}, {"name": "validation", "num_bytes": 205028, "num_examples": 914}], "download_size": 329391132, "dataset_size": 44651768}, {"config_name": "iwslt2017-ro-it", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ro", "it"]}}}], "splits": [{"name": "train", "num_bytes": 44485169, "num_examples": 217551}, {"name": "test", "num_bytes": 314974, "num_examples": 1643}, {"name": "validation", "num_bytes": 204989, "num_examples": 914}], "download_size": 329391132, "dataset_size": 45005132}, {"config_name": "iwslt2017-ro-nl", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ro", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 41338738, "num_examples": 206920}, {"name": "test", "num_bytes": 320952, "num_examples": 1680}, {"name": "validation", "num_bytes": 202380, "num_examples": 913}], "download_size": 329391132, "dataset_size": 41862070}, {"config_name": "iwslt2017-ar-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ar", "en"]}}}], "splits": [{"name": "train", "num_bytes": 56481059, "num_examples": 231713}, {"name": "test", "num_bytes": 2014296, "num_examples": 8583}, {"name": "validation", "num_bytes": 241206, "num_examples": 888}], "download_size": 27748780, "dataset_size": 58736561}, {"config_name": "iwslt2017-de-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "en"]}}}], "splits": [{"name": "train", "num_bytes": 42608380, "num_examples": 206112}, {"name": "test", "num_bytes": 1608474, "num_examples": 8079}, {"name": "validation", "num_bytes": 210975, "num_examples": 888}], "download_size": 16758320, "dataset_size": 44427829}, {"config_name": "iwslt2017-en-ar", "features": [{"name": "translation", "dtype": {"translation": {"languages": 
["en", "ar"]}}}], "splits": [{"name": "train", "num_bytes": 56481059, "num_examples": 231713}, {"name": "test", "num_bytes": 2014296, "num_examples": 8583}, {"name": "validation", "num_bytes": 241206, "num_examples": 888}], "download_size": 29333173, "dataset_size": 58736561}, {"config_name": "iwslt2017-en-de", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "de"]}}}], "splits": [{"name": "train", "num_bytes": 42608380, "num_examples": 206112}, {"name": "test", "num_bytes": 1608474, "num_examples": 8079}, {"name": "validation", "num_bytes": 210975, "num_examples": 888}], "download_size": 16758334, "dataset_size": 44427829}, {"config_name": "iwslt2017-en-fr", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 49273286, "num_examples": 232825}, {"name": "test", "num_bytes": 1767465, "num_examples": 8597}, {"name": "validation", "num_bytes": 207579, "num_examples": 890}], "download_size": 27699724, "dataset_size": 51248330}, {"config_name": "iwslt2017-en-ja", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ja"]}}}], "splits": [{"name": "train", "num_bytes": 48204987, "num_examples": 223108}, {"name": "test", "num_bytes": 1809007, "num_examples": 8469}, {"name": "validation", "num_bytes": 208124, "num_examples": 871}], "download_size": 26983602, "dataset_size": 50222118}, {"config_name": "iwslt2017-en-ko", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ko"]}}}], "splits": [{"name": "train", "num_bytes": 51678043, "num_examples": 230240}, {"name": "test", "num_bytes": 1869793, "num_examples": 8514}, {"name": "validation", "num_bytes": 219295, "num_examples": 879}], "download_size": 19364776, "dataset_size": 53767131}, {"config_name": "iwslt2017-en-zh", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "zh"]}}}], "splits": [{"name": "train", 
"num_bytes": 44271004, "num_examples": 231266}, {"name": "test", "num_bytes": 1605527, "num_examples": 8549}, {"name": "validation", "num_bytes": 202537, "num_examples": 879}], "download_size": 27597071, "dataset_size": 46079068}, {"config_name": "iwslt2017-fr-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["fr", "en"]}}}], "splits": [{"name": "train", "num_bytes": 49273286, "num_examples": 232825}, {"name": "test", "num_bytes": 1767465, "num_examples": 8597}, {"name": "validation", "num_bytes": 207579, "num_examples": 890}], "download_size": 26880731, "dataset_size": 51248330}, {"config_name": "iwslt2017-ja-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ja", "en"]}}}], "splits": [{"name": "train", "num_bytes": 48204987, "num_examples": 223108}, {"name": "test", "num_bytes": 1809007, "num_examples": 8469}, {"name": "validation", "num_bytes": 208124, "num_examples": 871}], "download_size": 26190859, "dataset_size": 50222118}, {"config_name": "iwslt2017-ko-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["ko", "en"]}}}], "splits": [{"name": "train", "num_bytes": 51678043, "num_examples": 230240}, {"name": "test", "num_bytes": 1869793, "num_examples": 8514}, {"name": "validation", "num_bytes": 219295, "num_examples": 879}], "download_size": 19364733, "dataset_size": 53767131}, {"config_name": "iwslt2017-zh-en", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["zh", "en"]}}}], "splits": [{"name": "train", "num_bytes": 44271004, "num_examples": 231266}, {"name": "test", "num_bytes": 1605527, "num_examples": 8549}, {"name": "validation", "num_bytes": 202537, "num_examples": 879}], "download_size": 26849290, "dataset_size": 46079068}]}
2023-04-05T09:07:51+00:00
[]
[ "ar", "de", "en", "fr", "it", "ja", "ko", "nl", "ro", "zh" ]
TAGS #task_categories-translation #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Arabic #language-German #language-English #language-French #language-Italian #language-Japanese #language-Korean #language-Dutch #language-Romanian #language-Chinese #license-cc-by-nc-nd-4.0 #region-us
Dataset Card for IWSLT 2017 =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: Overview of the IWSLT 2017 Evaluation Campaign * Point of Contact: * Size of downloaded dataset files: 4.24 GB * Size of the generated dataset: 1.14 GB * Total amount of disk used: 5.38 GB ### Dataset Summary The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As an unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### iwslt2017-ar-en * Size of downloaded dataset files: 27.75 MB * Size of the generated dataset: 58.74 MB * Total amount of disk used: 86.49 MB An example of 'train' looks as follows. #### iwslt2017-de-en * Size of downloaded dataset files: 16.76 MB * Size of the generated dataset: 44.43 MB * Total amount of disk used: 61.18 MB An example of 'train' looks as follows. #### iwslt2017-en-ar * Size of downloaded dataset files: 29.33 MB * Size of the generated dataset: 58.74 MB * Total amount of disk used: 88.07 MB An example of 'train' looks as follows.
#### iwslt2017-en-de * Size of downloaded dataset files: 16.76 MB * Size of the generated dataset: 44.43 MB * Total amount of disk used: 61.18 MB An example of 'validation' looks as follows. #### iwslt2017-en-fr * Size of downloaded dataset files: 27.69 MB * Size of the generated dataset: 51.24 MB * Total amount of disk used: 78.94 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### iwslt2017-ar-en * 'translation': a multilingual 'string' variable, with possible languages including 'ar', 'en'. #### iwslt2017-de-en * 'translation': a multilingual 'string' variable, with possible languages including 'de', 'en'. #### iwslt2017-en-ar * 'translation': a multilingual 'string' variable, with possible languages including 'en', 'ar'. #### iwslt2017-en-de * 'translation': a multilingual 'string' variable, with possible languages including 'en', 'de'. #### iwslt2017-en-fr * 'translation': a multilingual 'string' variable, with possible languages including 'en', 'fr'. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Creative Commons BY-NC-ND See the [TED Talks Usage Policy](URL). ### Contributions Thanks to @thomwolf, @Narsil for adding this dataset.
[ "### Dataset Summary\n\n\nThe IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system\nacross all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional\nbilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### iwslt2017-ar-en\n\n\n* Size of downloaded dataset files: 27.75 MB\n* Size of the generated dataset: 58.74 MB\n* Total amount of disk used: 86.49 MB\n\n\nAn example of 'train' looks as follows.", "#### iwslt2017-de-en\n\n\n* Size of downloaded dataset files: 16.76 MB\n* Size of the generated dataset: 44.43 MB\n* Total amount of disk used: 61.18 MB\n\n\nAn example of 'train' looks as follows.", "#### iwslt2017-en-ar\n\n\n* Size of downloaded dataset files: 29.33 MB\n* Size of the generated dataset: 58.74 MB\n* Total amount of disk used: 88.07 MB\n\n\nAn example of 'train' looks as follows.", "#### iwslt2017-en-de\n\n\n* Size of downloaded dataset files: 16.76 MB\n* Size of the generated dataset: 44.43 MB\n* Total amount of disk used: 61.18 MB\n\n\nAn example of 'validation' looks as follows.", "#### iwslt2017-en-fr\n\n\n* Size of downloaded dataset files: 27.69 MB\n* Size of the generated dataset: 51.24 MB\n* Total amount of disk used: 78.94 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### iwslt2017-ar-en\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'ar', 'en'.", "#### iwslt2017-de-en\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'de', 'en'.", "#### iwslt2017-en-ar\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'en', 'ar'.", "#### iwslt2017-en-de\n\n\n* 
'translation': a multilingual 'string' variable, with possible languages including 'en', 'de'.", "#### iwslt2017-en-fr\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'en', 'fr'.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons BY-NC-ND\n\n\nSee the (TED Talks Usage Policy)[URL", "### Contributions\n\n\nThanks to @thomwolf, @Narsil for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Arabic #language-German #language-English #language-French #language-Italian #language-Japanese #language-Korean #language-Dutch #language-Romanian #language-Chinese #license-cc-by-nc-nd-4.0 #region-us \n", "### Dataset Summary\n\n\nThe IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system\nacross all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional\nbilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### iwslt2017-ar-en\n\n\n* Size of downloaded dataset files: 27.75 MB\n* Size of the generated dataset: 58.74 MB\n* Total amount of disk used: 86.49 MB\n\n\nAn example of 'train' looks as follows.", "#### iwslt2017-de-en\n\n\n* Size of downloaded dataset files: 16.76 MB\n* Size of the generated dataset: 44.43 MB\n* Total amount of disk used: 61.18 MB\n\n\nAn example of 'train' looks as follows.", "#### iwslt2017-en-ar\n\n\n* Size of downloaded dataset files: 29.33 MB\n* Size of the generated dataset: 58.74 MB\n* Total amount of disk used: 88.07 MB\n\n\nAn example of 'train' looks as follows.", "#### iwslt2017-en-de\n\n\n* Size of downloaded dataset files: 16.76 MB\n* Size of the generated dataset: 44.43 MB\n* Total amount of disk used: 61.18 MB\n\n\nAn example of 'validation' looks as follows.", "#### iwslt2017-en-fr\n\n\n* Size of downloaded dataset files: 27.69 MB\n* Size of the generated dataset: 51.24 MB\n* Total amount of disk used: 78.94 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### 
iwslt2017-ar-en\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'ar', 'en'.", "#### iwslt2017-de-en\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'de', 'en'.", "#### iwslt2017-en-ar\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'en', 'ar'.", "#### iwslt2017-en-de\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'en', 'de'.", "#### iwslt2017-en-fr\n\n\n* 'translation': a multilingual 'string' variable, with possible languages including 'en', 'fr'.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons BY-NC-ND\n\n\nSee the (TED Talks Usage Policy)[URL", "### Contributions\n\n\nThanks to @thomwolf, @Narsil for adding this dataset." ]