---
language:
- ru

tags:
- toxic comments classification

---

## General concept of the dataset

Sensitive topics are topics that have a high chance of initiating a toxic conversation: homophobia, politics, racism, etc. This dataset covers 18 such topics.

More details can be found [in this article](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the Workshop on Balto-Slavic Natural Language Processing at the EACL-2021 conference.
That paper describes the first version of this dataset; the version hosted here is the latest one, which is significantly larger and properly filtered.
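The data can be loaded with the 🤗 `datasets` library. Below is a minimal sketch; the repository id is a placeholder, so substitute this dataset's actual id on the Hub:

```python
from datasets import load_dataset

# "username/sensitive-topics-ru" is a hypothetical placeholder id;
# replace it with this repository's actual id on the Hub.
dataset = load_dataset("username/sensitive-topics-ru")

print(dataset)               # available splits and column names
print(dataset["train"][0])   # first labelled example, assuming a "train" split
```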

## Licensing Information

[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png

## Citation

If you find this repository helpful, feel free to cite our publication:

```
@inproceedings{babakov-etal-2021-detecting,
    title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
    author = "Babakov, Nikolay and
      Logacheva, Varvara and
      Kozlova, Olga and
      Semenov, Nikita and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
    pages = "26--36",
    abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
```