vund committed 44e95a5 (1 parent: c0fe166)

Update README.md

Files changed (1): README.md (+22, -0)
README.md CHANGED
@@ -29,3 +29,25 @@ configs:
   - split: test
     path: data/test-*
 ---
+ # ViGEText_17to23 dataset
+ Evaluating the Symbol Binding Ability of Large Language Models for Multiple-Choice Questions in Vietnamese General Education: https://github.com/uitnlp/vigetext_17to23
+
+ ```
+ @inproceedings{10.1145/3628797.3628837,
+   author = {Nguyen, Duc-Vu and Nguyen, Quoc-Nam},
+   title = {Evaluating the Symbol Binding Ability of Large Language Models for Multiple-Choice Questions in Vietnamese General Education},
+   year = {2023},
+   isbn = {9798400708916},
+   publisher = {Association for Computing Machinery},
+   address = {New York, NY, USA},
+   url = {https://doi.org/10.1145/3628797.3628837},
+   doi = {10.1145/3628797.3628837},
+   abstract = {In this paper, we evaluate the ability of large language models (LLMs) to perform multiple choice symbol binding (MCSB) for multiple choice question answering (MCQA) tasks in zero-shot, one-shot, and few-shot settings. We focus on Vietnamese, with fewer challenging MCQA datasets than in English. The two existing datasets, ViMMRC 1.0 and ViMMRC 2.0, focus on literature. Recent research in Vietnamese natural language processing (NLP) has focused on the Vietnamese National High School Graduation Examination (VNHSGE) from 2019 to 2023 to evaluate ChatGPT. However, these studies have mainly focused on how ChatGPT solves the VNHSGE step by step. We aim to create a novel and high-quality dataset by providing structured guidelines for typing LaTeX formulas for mathematics, physics, chemistry, and biology. This dataset can be used to evaluate the MCSB ability of LLMs and smaller language models (LMs) because it is typed in a strict LaTeX style. We determine the most probable character answer (A, B, C, or D) based on context, instead of finding the answer step by step as in previous Vietnamese works. This reduces computational costs and accelerates the evaluation of LLMs. Our evaluation of six well-known LLMs, namely BLOOMZ-7.1B-MT, LLaMA-2-7B, LLaMA-2-70B, GPT-3, GPT-3.5, and GPT-4.0, on the ViMMRC 1.0 and ViMMRC 2.0 benchmarks and our proposed dataset shows promising results on the MCSB ability of LLMs for Vietnamese. The dataset is available for research purposes only.},
+   booktitle = {Proceedings of the 12th International Symposium on Information and Communication Technology},
+   pages = {379–386},
+   numpages = {8},
+   keywords = {Analysis of Language Models, Multiple Choice Symbol Binding, Multiple Choice Question Answering, Language Modeling},
+   location = {Ho Chi Minh, Vietnam},
+   series = {SOICT '23}
+ }
+ ```
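The abstract describes the evaluation protocol: rather than generating a step-by-step solution, the model's probability is compared across the single answer symbols A, B, C, and D, and the most probable one is taken as the prediction. A minimal sketch of that selection step is below; the log-probability values are illustrative stand-ins, not real model output.

```python
# Sketch of multiple-choice symbol binding (MCSB) answer selection:
# given a log-probability the model assigns to each answer symbol
# in context, pick the symbol with the highest score.

def mcsb_answer(symbol_logprobs: dict[str, float]) -> str:
    """Return the answer symbol (e.g. 'A'-'D') with the highest log-probability."""
    return max(symbol_logprobs, key=symbol_logprobs.get)

# Illustrative scores; in practice these come from a language model's
# next-token distribution over the four option letters.
example = {"A": -2.1, "B": -0.4, "C": -3.0, "D": -1.7}
print(mcsb_answer(example))  # "B"
```

Scoring only four single-token candidates is what makes this cheaper than generating a full worked solution, as the paper notes.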