TheBossLevel123 committed 394c40c (1 parent: a7499c6): Update README.md

README.md CHANGED
@@ -19,7 +19,24 @@ This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-ste
 
 ## Model description
 
-
+```py
+import torch
+from transformers import pipeline, AutoTokenizer, TextStreamer
+import re
+tokenizer = AutoTokenizer.from_pretrained("TheBossLevel123/TinyAITA")
+pipe = pipeline("text-generation", model="TheBossLevel123/TinyAITA", torch_dtype=torch.bfloat16, device_map="auto")
+
+streamer = TextStreamer(tokenizer)
+```
+```py
+prompt = 'AITA for XYZ?'
+outputs = pipe(prompt, max_new_tokens=1024, do_sample=True, temperature=0.9, streamer=streamer, eos_token_id=tokenizer.encode("<|im_end|>"))
+if outputs and "generated_text" in outputs[0]:
+    text = outputs[0]["generated_text"]
+    print(f"Prompt: {prompt}")
+    print("")
+    print(text)
+```
 
 ## Intended uses & limitations
 
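The added snippet imports `re` but never calls it, which suggests some regex post-processing was intended. One plausible (hypothetical, not from the commit) use is stripping the echoed prompt and the trailing `<|im_end|>` ChatML marker from `generated_text` before display; `strip_chatml` below is an illustrative helper name, not part of the model card:

```python
import re


def strip_chatml(prompt: str, generated: str) -> str:
    """Return only the model's completion: drop the echoed prompt
    and any trailing <|im_end|> end-of-turn marker."""
    # text-generation pipelines echo the prompt at the start of generated_text
    completion = generated[len(prompt):] if generated.startswith(prompt) else generated
    # remove a trailing <|im_end|> marker (and surrounding whitespace), if present
    return re.sub(r"<\|im_end\|>\s*$", "", completion).strip()


example = "AITA for XYZ?\nNo, you are not the a-hole.<|im_end|>"
print(strip_chatml("AITA for XYZ?", example))
# → No, you are not the a-hole.
```

This keeps the streaming output untouched while giving a clean string for logging or further processing.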