model update
README.md CHANGED
@@ -14,7 +14,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value:
+      value: 0.9
   - task:
       name: Analogy Questions (SAT full)
       type: multiple-choice-qa
@@ -173,7 +173,7 @@ It achieves the following results on the relation understanding tasks:
 - Micro F1 score on K&H+N: 0.9649440077902205
 - Micro F1 score on ROOT09: 0.9172673143215293
 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-mask-prompt-d-nce/raw/main/relation_mapping.json)):
-  - Accuracy on Relation Mapping:
+  - Accuracy on Relation Mapping: 0.9


 ### Usage
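The diff context ends at the README's "### Usage" heading, whose body is not expanded in this view. For orientation, here is a minimal sketch of how a RelBERT checkpoint like this one is typically loaded and queried; it assumes the `relbert` Python package and its `RelBERT` class with a `get_embedding` method, which are not shown in this diff.

```python
# pip install relbert
from relbert import RelBERT

# Load the checkpoint whose metadata is updated in this commit
# (class and method names assumed from the relbert package, not from this diff).
model = RelBERT("relbert/relbert-roberta-large-semeval2012-mask-prompt-d-nce")

# Embed a word pair; the returned vector represents the relation between the two words.
vector = model.get_embedding(["Tokyo", "Japan"])
print(len(vector))  # embedding dimensionality (1024 for a roberta-large backbone)
```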