---
license: apache-2.0
language:
  - en
pretty_name: Social
size_categories:
  - 10K<n<100K
tags:
  - social
  - benchmark
---

# Social Dataset

πŸ“ƒ [Paper] β€’ πŸ’» [Github] β€’ πŸ€— [Dataset] β€’ πŸ“½ [Slides] β€’ πŸ“‹ [Poster]

This dataset is proposed in the NAACL 2024 paper: Measuring Social Norms of Large Language Models.

We present a new challenge to examine whether large language models understand social norms. In contrast to existing datasets, ours requires a fundamental understanding of social norms to solve. Our dataset features the largest set of social norm skills, consisting of 402 skills and 12,383 questions covering a wide range of social norms, from opinions and arguments to culture and laws. We design our dataset according to the K-12 curriculum, which enables a direct comparison of the social understanding of large language models to that of humans, more specifically, elementary students. While prior models score close to random accuracy on our benchmark, recent large language models such as GPT-3.5-Turbo and LLaMA2-Chat improve performance significantly, falling only slightly below human performance. We then propose a multi-agent framework based on large language models to improve the models' ability to understand social norms; this method further brings large language models on par with humans. Given the increasing adoption of large language models in real-world applications, our findings are particularly important and present a unique direction for future improvements.

## Authors

Ye Yuan, Kexin Tang, Jianhao Shen, Ming Zhang*, Chenguang Wang*

## Resources

## Dataset Structure

The basic statistics of the dataset are as follows:

| Subject | #Skills | #Questions | Avg. #Answer Choices |
| --- | --- | --- | --- |
| Social Studies | 170 | 2,315 | 3.4 |
| Language Arts | 232 | 10,068 | 2.4 |
| Total | 402 | 12,383 | 2.6 |

The dataset is in the following format:

```
DatasetDict({
    test: Dataset({
        features: ['subject', 'grade', 'skill', 'question', 'choices', 'answer_idx'],
        num_rows: 12383
    })
})
```
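
For a quick start, the data can be loaded with the 🤗 `datasets` library. The repository id below is assumed from this dataset card and may need to be adjusted:

```python
from datasets import load_dataset

# Assumed repository id for this dataset card; adjust if the dataset lives elsewhere.
dataset = load_dataset("socialnormdataset/social")

print(dataset)             # should match the DatasetDict structure above
print(dataset["test"][0])  # inspect a single example
```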

The detailed descriptions of the features are as follows, with a small usage sketch after the list:

- `subject`: str
  - The subject of the question, one of social studies or language arts.
- `grade`: str
  - The grade level of the question.
- `skill`: str
  - The skill associated with the question.
- `question`: str
  - The question text.
- `choices`: Optional[List[str]]
  - The answer choices for the question.
- `answer_idx`: int
  - The index of the correct answer in `choices`.
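
As a rough illustration of how these fields fit together, the sketch below turns one row into a multiple-choice prompt; the prompt template is illustrative only, not the one used in the paper:

```python
def format_example(example: dict) -> tuple[str, int]:
    """Build a prompt string from one row and return it with the gold answer index.

    The layout is a minimal sketch, not the paper's prompt template.
    """
    lines = [
        f"Subject: {example['subject']} (grade: {example['grade']}, skill: {example['skill']})",
        f"Question: {example['question']}",
    ]
    # `choices` is Optional: guard against rows without answer options.
    if example["choices"]:
        for i, choice in enumerate(example["choices"]):
            lines.append(f"{chr(ord('A') + i)}. {choice}")
        lines.append("Answer with the letter of the correct choice.")
    return "\n".join(lines), example["answer_idx"]
```

For example, `format_example(dataset["test"][0])` returns the prompt for the first question together with its gold answer index.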

## How to Use

Please refer to our code for details on how to run evaluation on the dataset.
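
As a minimal sketch (not the paper's evaluation pipeline), accuracy on the multiple-choice questions could be computed along these lines, with the hypothetical `predict_answer_idx` standing in for the model under evaluation and the repository id assumed as above:

```python
from datasets import load_dataset

def predict_answer_idx(question: str, choices: list[str]) -> int:
    # Placeholder predictor: always picks the first choice.
    # Replace with a call to the model being evaluated.
    return 0

test = load_dataset("socialnormdataset/social", split="test")  # assumed repository id

correct, scored = 0, 0
for example in test:
    if not example["choices"]:  # skip questions without answer options
        continue
    pred = predict_answer_idx(example["question"], example["choices"])
    correct += int(pred == example["answer_idx"])
    scored += 1

print(f"Accuracy on multiple-choice questions: {correct / scored:.3f}")
```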

## Citation

```bibtex
@inproceedings{yuan2024measuring,
    title={Measuring Social Norms of Large Language Models},
    author={Ye Yuan and Kexin Tang and Jianhao Shen and Ming Zhang and Chenguang Wang},
    year={2024},
    booktitle={NAACL},
}
```

## Dataset Card Contact

[email protected]