nicholasKluge committed
Commit d40cd71
1 Parent(s): 055c256

Update README.md

Files changed (1): README.md +3 -7
README.md CHANGED
@@ -1,11 +1,7 @@
 ---
 license: apache-2.0
 datasets:
-- nicholasKluge/toxic-aira-dataset
-- Anthropic/hh-rlhf
-- allenai/prosocial-dialog
-- allenai/real-toxicity-prompts
-- dirtycomputer/Toxic_Comment_Classification_Challenge
+- nicholasKluge/toxic-text
 language:
 - en
 metrics:
@@ -31,12 +27,12 @@ co2_eq_emissions:
 
 The ToxicityModel is a fine-tuned version of [RoBERTa](https://huggingface.co/roberta-base) that can be used to score the toxicity of a sentence.
 
-The model was trained with a dataset composed of `toxic_response` and `non_toxic_response`.
+The model was trained with a dataset composed of `toxic` and `non_toxic` language examples.
 
 ## Details
 
 - **Size:** 124,646,401 parameters
-- **Dataset:** [Toxic-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/toxic-aira-dataset)
+- **Dataset:** [Toxic-Text Dataset](https://huggingface.co/datasets/nicholasKluge/toxic-text)
 - **Language:** English
 - **Number of Training Steps:** 1000
 - **Batch size:** 32
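
As context for the model-card text changed above, which states that the ToxicityModel "can be used to score the toxicity of a sentence", here is a minimal sketch of how such a scorer might be called with the `transformers` library. The checkpoint id `nicholasKluge/ToxicityModel` and the single-logit sequence-classification head are assumptions not confirmed by this diff, and the sign and scale of the returned score depend on how the head was trained.

```python
# Minimal usage sketch. Assumptions (not stated in this commit): the model is
# published as "nicholasKluge/ToxicityModel" and exposes a one-logit
# sequence-classification head whose output is interpreted as a toxicity score.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "nicholasKluge/ToxicityModel"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

sentence = "You are a terrible person."

# Tokenize the sentence and read the raw logit as the score.
inputs = tokenizer(sentence, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits[0].item()

print(f"Score for the sentence: {score:.3f}")
```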