rmdhirr committed on
Commit bbd0acb
1 Parent(s): a5b2c6c

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -145,8 +145,8 @@ Pulsar_7B was fine-tuned on the following datasets:
 ## Quantizations
 Thanks to mradermacher, static GGUF quants are available [here](https://huggingface.co/mradermacher/Pulsar_7B-GGUF).
 
-## Formatting
-Pulsar_7B works well with Alpaca, it's not a picky model when it comes to formatting. Mistral should be compatible too. The custom chat template from [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) also performs well:
+## Formatting/Preset
+Pulsar_7B works well with Alpaca, it's not a picky model when it comes to formatting/preset. Mistral should be compatible too. The custom chat template from [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) also performs well:
 ```
 {% for message in messages %}{% if message['role'] == 'user' %}{{ '### Instruction:\n' + message['content'] + '\n### Response:\n' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% elif message['role'] == 'system' %}{{ '### System:\n' + message['content'] + '\n' }}{% endif %}{% endfor %}
 ```
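
For anyone who wants to try the template quoted above, here is a minimal sketch of registering it on a `transformers` tokenizer and rendering a prompt. The repository id `rmdhirr/Pulsar_7B` and the example messages are assumptions made for illustration and are not stated in this commit; substitute the actual model id.

```python
# Minimal sketch: render a prompt with the chat template quoted in the diff above.
from transformers import AutoTokenizer

MODEL_ID = "rmdhirr/Pulsar_7B"  # assumed repository id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Override whatever template ships with the tokenizer with the one from the README.
# Double backslashes keep the literal \n sequences that Jinja expands at render time.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "{{ '### Instruction:\\n' + message['content'] + '\\n### Response:\\n' }}"
    "{% elif message['role'] == 'assistant' %}"
    "{{ message['content'] + eos_token }}"
    "{% elif message['role'] == 'system' %}"
    "{{ '### System:\\n' + message['content'] + '\\n' }}"
    "{% endif %}"
    "{% endfor %}"
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain what a GGUF quant is in one sentence."},
]

# tokenize=False returns the rendered prompt string; the template already ends the
# user turn with '### Response:\n', so no extra generation prompt is needed.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```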