In particular, for training the model we used a batch size of 256, Adam optimizer…
# Usage

### Use a pipeline as a high-level helper

```
from transformers import pipeline

pipe = pipeline("fill-mask", model="citiusLTL/DisorBERT")
```
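For example, the pipeline can be called directly on a sentence containing the tokenizer's mask token; a minimal sketch (the input sentence is only an illustration):

```
from transformers import pipeline

pipe = pipeline("fill-mask", model="citiusLTL/DisorBERT")

# Each prediction is a dict with the candidate token and its score
for pred in pipe(f"I have been feeling very {pipe.tokenizer.mask_token} lately."):
    print(pred["token_str"], pred["score"])
```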
### Load model directly

```
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("citiusLTL/DisorBERT")
model = AutoModelForMaskedLM.from_pretrained("citiusLTL/DisorBERT")
```
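With the tokenizer and model loaded, masked-token predictions can also be computed by hand; a minimal sketch assuming PyTorch, again with an illustrative input sentence:

```
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("citiusLTL/DisorBERT")
model = AutoModelForMaskedLM.from_pretrained("citiusLTL/DisorBERT")

# Tokenize a sentence containing the mask token
inputs = tokenizer(
    f"I have been feeling very {tokenizer.mask_token} lately.",
    return_tensors="pt",
)

# Logits have shape (batch, sequence_length, vocab_size)
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and take the highest-scoring token
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

This is essentially what the fill-mask pipeline does internally, minus its top-k ranking of candidates.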