IProject-10 committed
Commit b571a71
1 Parent(s): c4ba527

Update README.md

Files changed (1)
  1. README.md +19 -13
README.md CHANGED
@@ -27,32 +27,38 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-BERTbase fine-tuned on SQuAD 2.0 : Encoder-based Transformer Language model, pretrained with Masked Language Modeling and Next Sentence Prediction.
-Suitable for Question-Answering tasks, predicts answer spans within the context provided.
+BERTbase fine-tuned on SQuAD 2.0 : Encoder-based Transformer Language model, pretrained with Masked Language Modeling and Next Sentence Prediction.<br>
+Suitable for Question-Answering tasks, predicts answer spans within the context provided.<br>
 
-Training data: Train-set SQuAD2.0
-Evaluation data: Validation-set SQuAD2.0
+Training data: Train-set SQuAD2.0<br>
+Evaluation data: Validation-set SQuAD2.0<br>
 Hardware Accelerator used: GPU Tesla T4
 
 ## Intended uses & limitations
 
-For Question-Answering -
-
-question = "How many programming languages does BLOOM support?"
-context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
+For Question-Answering -
 
+```python
 from transformers import pipeline
 
-question_answerer = pipeline("question-answering", model="IProject-10/bert-base-uncased-finetuned-squad2")
-question_answerer(question=question, context=context)
-
-{{ direct_use | default("[question-answering]", true)}}
-{{ downstream_use | default("[question-answering]", true)}}
+# Replace this with your own checkpoint
+model_checkpoint = "IProject-10/bert-base-uncased-finetuned-squad2"
+question_answerer = pipeline("question-answering", model=model_checkpoint)
 
+context = """
+🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
+between them. It's straightforward to train your models with one before loading them for inference with the other.
+"""
+question = "Which deep learning libraries back 🤗 Transformers?"
+question_answerer(question=question, context=context)
+```
 ## Results
 
 Evaluation on SQuAD 2.0 validation dataset:
 
+```
+
+```
 
 
 
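For readers of the updated README: the `question-answering` pipeline returns a dict of the shape `{'score', 'start', 'end', 'answer'}`, where `start` and `end` are character offsets of the predicted span within the context. A minimal sketch of how such a span maps back to the answer text, using plain Python and hand-made offsets standing in for a model prediction (no checkpoint download needed):

```python
# Sketch: how a QA pipeline's predicted character span recovers the answer.
# The dict below mimics the shape returned by the transformers
# question-answering pipeline; the score and offsets are illustrative,
# not an actual model output.
context = ("BLOOM has 176 billion parameters and can generate text in "
           "46 natural languages and 13 programming languages.")

# Hypothetical prediction covering the token "13"
start = context.index("13")
end = start + len("13")
prediction = {"score": 0.98, "start": start, "end": end}

# The answer text is simply the context sliced at the predicted offsets
answer = context[prediction["start"]:prediction["end"]]
print(answer)  # prints "13"
```

This is why span-extraction models like this one can only answer with text that literally appears in the provided context.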