IProject-10 committed "Update README.md"
Commit c4ba527 • 1 Parent(s): 5612b84

README.md CHANGED
@@ -9,6 +9,11 @@ model-index:
 - name: bert-base-uncased-finetuned-squad2
   results: []
 pipeline_tag: question-answering
+metrics:
+- exact_match
+- f1
+language:
+- en
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
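The front-matter additions above declare exact_match and f1, the standard SQuAD 2.0 metrics. As a hedged illustration only (the example id and answers below are invented, and the `evaluate` library is an assumption, not something the commit mentions), these scores are typically computed like this:

```python
# Illustrative sketch: computing the exact_match and f1 metrics declared in the
# front matter with the `evaluate` library. The id, prediction, and reference
# below are made up; they are not results from this model.
import evaluate

squad_v2 = evaluate.load("squad_v2")

predictions = [{
    "id": "example-0",
    "prediction_text": "13",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["13"], "answer_start": [112]},
}]

scores = squad_v2.compute(predictions=predictions, references=references)
print(scores["exact"], scores["f1"])  # exact match and F1, both on a 0-100 scale
```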
@@ -22,17 +27,34 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+BERT-base fine-tuned on SQuAD 2.0: an encoder-based Transformer language model, pretrained with Masked Language Modeling and Next Sentence Prediction.
+Suitable for question-answering tasks; it predicts answer spans within the provided context.
+
+Training data: SQuAD 2.0 train set
+Evaluation data: SQuAD 2.0 validation set
+Hardware accelerator used: Tesla T4 GPU
 
 ## Intended uses & limitations
 
-More information needed
+For question answering:
+
+from transformers import pipeline
+
+question = "How many programming languages does BLOOM support?"
+context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."
+
+question_answerer = pipeline("question-answering", model="IProject-10/bert-base-uncased-finetuned-squad2")
+question_answerer(question=question, context=context)
+
+{{ direct_use | default("[question-answering]", true)}}
+{{ downstream_use | default("[question-answering]", true)}}
+
+## Results
+
+Evaluation on the SQuAD 2.0 validation dataset:
 
-## Training and evaluation data
 
-More information needed
 
-## Training procedure
 
 ### Training hyperparameters
 
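To make the new Model description's claim that the model "predicts answer spans within the provided context" concrete, here is a hedged sketch of the same inference without pipeline(), using the lower-level transformers API. The argmax-based span selection is a simplification for illustration, not code from the card; only the checkpoint name is taken from the commit.

```python
# Illustrative sketch (not from the card): extracting an answer span with the
# low-level transformers API instead of pipeline(). Span selection is reduced
# to a plain argmax over the start/end logits.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "IProject-10/bert-base-uncased-finetuned-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "How many programming languages does BLOOM support?"
context = ("BLOOM has 176 billion parameters and can generate text in "
           "46 natural languages and 13 programming languages.")

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions, then decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```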
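One limitation worth spelling out under Intended uses & limitations: SQuAD 2.0 contains unanswerable questions, so a model fine-tuned on it may legitimately predict an empty answer. The sketch below is an assumption about typical usage, not something the card documents; handle_impossible_answer is a generic QuestionAnsweringPipeline option, and the question/context pair is invented.

```python
# Illustrative sketch: allowing the pipeline to return "no answer", which is
# relevant for a SQuAD 2.0 model. The question here is deliberately not
# answerable from the given context.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="IProject-10/bert-base-uncased-finetuned-squad2",
)

result = qa(
    question="What is the capital of France?",
    context="BLOOM has 176 billion parameters and can generate text in "
            "46 natural languages and 13 programming languages.",
    handle_impossible_answer=True,
)
print(result)  # dict with 'score', 'start', 'end', 'answer'; 'answer' may be '' when no span is predicted
```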