reciprocate committed on
Commit
b5acbe8
1 Parent(s): 5558dfa

shorten instruction format example

Files changed (1)
  1. README.md +3 -10
README.md CHANGED
@@ -28,18 +28,11 @@ extra_gated_fields:
  `StableLM Zephyr 3B` uses the following instruction format:
  ```
  <|user|>
- List 10 synonyms for the word "tiny"<|endoftext|>
+ List 3 synonyms for the word "tiny"<|endoftext|>
  <|assistant|>
  1. Dwarf
  2. Little
- 3. Petite
- 4. Miniature
- 5. Small
- 6. Compact
- 7. Cramped
- 8. Wee
- 9. Nibble
- 10. Crumble<|endoftext|>
+ 3. Petite<|endoftext|>
  ```

  This format is also available through the tokenizer's `apply_chat_template` method:
@@ -54,7 +47,7 @@ model = AutoModelForCausalLM.from_pretrained(
  device_map="auto"
  )

- prompt = [{'role': 'user', 'content': 'List 10 synonyms for the word "tiny"'}]
+ prompt = [{'role': 'user', 'content': 'List 3 synonyms for the word "tiny"'}]
  inputs = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors='pt')

  tokens = model.generate(
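
Taken together, the revised README example runs end to end roughly as follows. This is a minimal sketch, assuming the model is hosted at `stabilityai/stablelm-zephyr-3b` and using illustrative generation settings; neither detail is part of this diff.

```python
# Minimal sketch of the updated README example.
# Assumptions (not stated in this diff): the repo id
# "stabilityai/stablelm-zephyr-3b" and greedy decoding with 64 new tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-zephyr-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-zephyr-3b",
    device_map="auto",
)

# The shortened prompt introduced by this commit.
prompt = [{"role": "user", "content": 'List 3 synonyms for the word "tiny"'}]
inputs = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors="pt")

# Generate a completion and decode only the newly generated tokens.
tokens = model.generate(inputs.to(model.device), max_new_tokens=64, do_sample=False)
print(tokenizer.decode(tokens[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With the shorter prompt, the expected output is the three-item list shown in the instruction-format example above.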