language:
- en
- zh
license: cc-by-nc-sa-4.0
task_categories:
- multiple-choice
pretty_name: LogiQA2.0
data_splits:
- train
- validation
- test
dataset_info:
config_name: logiqa2_zh
features:
- name: answer
dtype: int32
- name: text
dtype: string
- name: question
dtype: string
- name: options
sequence: string
splits:
- name: train
num_bytes: 8820627
num_examples: 12751
- name: test
num_bytes: 1087414
num_examples: 1594
- name: validation
num_bytes: 1107666
num_examples: 1593
download_size: 7563394
dataset_size: 11015707
configs:
- config_name: logiqa2_zh
data_files:
- split: train
path: logiqa2_zh/train-*
- split: test
path: logiqa2_zh/test-*
- split: validation
path: logiqa2_zh/validation-*
# Dataset Card for LogiQA2.0
## Dataset Description
- Homepage: https://github.com/csitfun/LogiQA2.0
- Repository: https://github.com/csitfun/LogiQA2.0 (dataset), https://github.com/csitfun/LogiEval (evaluation benchmark)
- Paper: https://ieeexplore.ieee.org/abstract/document/10174688
### Dataset Summary

LogiQA 2.0 is a dataset for logical reasoning in machine reading comprehension (MRC) and natural language inference (NLI) tasks. It is accompanied by LogiEval, a benchmark suite for testing the logical reasoning abilities of instruction-prompted large language models. This card covers the `logiqa2_zh` configuration, the Chinese-language multiple-choice MRC portion of the dataset.
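Each record in the `logiqa2_zh` configuration carries the features declared in the metadata above: a `text` passage, a `question`, a `sequence` of `options`, and an integer `answer` index. The sketch below shows one plausible way to render such a record as a multiple-choice prompt; the record itself is an invented English placeholder for illustration, not an actual dataset item.

```python
# Sketch: format a LogiQA2.0-style record (features: text, question,
# options, answer) into a multiple-choice prompt string.
# NOTE: this record is a made-up placeholder, not real dataset content.
record = {
    "text": "All birds can fly. Penguins are birds.",
    "question": "Which conclusion follows from the passage?",
    "options": [
        "Penguins can fly.",
        "Penguins cannot fly.",
        "Some birds cannot fly.",
        "None of the above.",
    ],
    "answer": 0,  # int32 index into `options`
}

def to_prompt(rec: dict) -> str:
    """Join passage, question, and lettered options into one prompt."""
    letters = "ABCD"
    opts = "\n".join(
        f"{letters[i]}. {opt}" for i, opt in enumerate(rec["options"])
    )
    return f"{rec['text']}\n{rec['question']}\n{opts}"

print(to_prompt(record))
print("Gold answer:", "ABCD"[record["answer"]])
```

The same formatting applies unchanged to real examples loaded from the `train`, `validation`, or `test` splits, since all three share the feature schema above.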
### Licensing Information

This dataset is distributed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information

```bibtex
@article{10174688,
  author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},
  year={2023},
  pages={1-16},
  doi={10.1109/TASLP.2023.3293046}
}
```

```bibtex
@misc{liu2023evaluating,
  title={Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4},
  author={Hanmeng Liu and Ruoxi Ning and Zhiyang Teng and Jian Liu and Qiji Zhou and Yue Zhang},
  year={2023},
  eprint={2304.03439},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```