asahi417 committed on
Commit 5f994e4
1 Parent(s): 5f926c8

model update

Files changed (1)
README.md +17 -17
README.md CHANGED
@@ -14,7 +14,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.9297619047619048
+      value: 0.8450793650793651
   - task:
       name: Analogy Questions (SAT full)
       type: multiple-choice-qa
@@ -91,10 +91,10 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: None
+      value: 0.9199939731806539
     - name: F1 (macro)
       type: f1_macro
-      value: None
+      value: 0.9158483158560947
   - task:
       name: Lexical Relation Classification (CogALexV)
       type: classification
@@ -105,10 +105,10 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: None
+      value: 0.8457746478873239
     - name: F1 (macro)
       type: f1_macro
-      value: None
+      value: 0.6760195209742395
   - task:
       name: Lexical Relation Classification (EVALution)
       type: classification
@@ -119,10 +119,10 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: None
+      value: 0.6684723726977249
     - name: F1 (macro)
       type: f1_macro
-      value: None
+      value: 0.65910797043685
   - task:
       name: Lexical Relation Classification (K&H+N)
       type: classification
@@ -133,10 +133,10 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: None
+      value: 0.959379564582319
     - name: F1 (macro)
       type: f1_macro
-      value: None
+      value: 0.8779321856206035
   - task:
       name: Lexical Relation Classification (ROOT09)
       type: classification
@@ -147,10 +147,10 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: None
+      value: 0.9031651519899718
     - name: F1 (macro)
       type: f1_macro
-      value: None
+      value: 0.9015700872047177
 
 ---
 # relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce
@@ -167,13 +167,13 @@ It achieves the following results on the relation understanding tasks:
 - Accuracy on U4: 0.6203703703703703
 - Accuracy on Google: 0.886
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce/raw/main/classification.json)):
-    - Micro F1 score on BLESS: None
-    - Micro F1 score on CogALexV: None
-    - Micro F1 score on EVALution: None
-    - Micro F1 score on K&H+N: None
-    - Micro F1 score on ROOT09: None
+    - Micro F1 score on BLESS: 0.9199939731806539
+    - Micro F1 score on CogALexV: 0.8457746478873239
+    - Micro F1 score on EVALution: 0.6684723726977249
+    - Micro F1 score on K&H+N: 0.959379564582319
+    - Micro F1 score on ROOT09: 0.9031651519899718
 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce/raw/main/relation_mapping.json)):
-    - Accuracy on Relation Mapping: 0.9297619047619048
+    - Accuracy on Relation Mapping: 0.8450793650793651
 
 
 ### Usage
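
The Usage section itself is truncated in this diff context. As a minimal sketch of how a checkpoint like this one is typically loaded, assuming the `relbert` Python package and its `RelBERT.get_embedding` interface (neither is shown in this commit, so treat both as assumptions and defer to the full Usage section of the model card):

```python
# Hypothetical sketch: load the checkpoint referenced in this commit and embed a word pair.
# Assumes the `relbert` package (pip install relbert) exposes a RelBERT class with a
# get_embedding method, as on other RelBERT model cards; adjust to the actual Usage section.
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce")

# get_embedding maps a [head, tail] word pair to a single relation vector.
vector = model.get_embedding(["Tokyo", "Japan"])
print(len(vector))  # expected to be 1024-dimensional for a roberta-large backbone
```

If this interface holds, the returned vector is a fixed-size relation embedding for the word pair, which is the representation the analogy, lexical relation classification, and relation mapping scores updated in this commit are computed from.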