---
language:
- en
- zh
license: cc-by-nc-sa-4.0
task_categories:
- multiple-choice
pretty_name: LogiQA2.0
data_splits:
- train
- validation
- test
dataset_info:
  config_name: logiqa2
  features:
  - name: id
    dtype: int32
  - name: answer
    dtype: int32
  - name: text
    dtype: string
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  splits:
  - name: train
    num_bytes: 13374480
    num_examples: 12567
  - name: test
    num_bytes: 1656534
    num_examples: 1572
  - name: validation
    num_bytes: 1687080
    num_examples: 1569
  download_size: 8538515
  dataset_size: 16718094
configs:
- config_name: logiqa2
  data_files:
  - split: train
    path: logiqa2/train-*
  - split: test
    path: logiqa2/test-*
  - split: validation
    path: logiqa2/validation-*
  default: true
---
# Dataset Card for LogiQA2.0

## Dataset Description
- **Homepage:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval
- **Repository:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval
- **Paper:** https://ieeexplore.ieee.org/abstract/document/10174688
### Dataset Summary

LogiQA2.0 is a dataset for logical reasoning in natural language understanding, covering both machine reading comprehension (MRC) and natural language inference (NLI) tasks.

LogiEval is the companion benchmark suite for testing the logical reasoning abilities of instruction-prompted large language models.
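Each example in the `logiqa2` config follows the feature schema declared in the metadata above (`id`, `answer`, `text`, `type`, `question`, `options`). A minimal sketch of how those fields fit together, using an invented record in that shape (the real data would be fetched with the Hugging Face `datasets` library):

```python
# Hypothetical record mirroring the card's feature schema;
# the field values here are invented for illustration only.
record = {
    "id": 0,
    "answer": 1,  # int32 index into `options`
    "text": "All swans observed so far are white.",
    "type": "categorical reasoning",
    "question": "Which conclusion follows necessarily?",
    "options": [
        "All swans are white.",
        "Every observed swan is white.",
        "No swan is black.",
        "Swans are usually white.",
    ],
}

# `answer` selects the correct choice from the `options` sequence.
correct = record["options"][record["answer"]]
print(correct)  # prints: Every observed swan is white.
```

With the real dataset, the same indexing applies to each example of the `train`, `validation`, and `test` splits after loading this repository with `datasets.load_dataset`.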
## Licensing Information

The dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
## Citation Information

```bibtex
@article{10174688,
  author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},
  year={2023},
  volume={},
  number={},
  pages={1-16},
  doi={10.1109/TASLP.2023.3293046}
}
```

```bibtex
@misc{liu2023evaluating,
  title={Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4},
  author={Hanmeng Liu and Ruoxi Ning and Zhiyang Teng and Jian Liu and Qiji Zhou and Yue Zhang},
  year={2023},
  eprint={2304.03439},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```