MistralTokenizer
#33
by
Esj-DL
- opened
Hi @MaziyarPanahi ,
Thank you for your work sharing the quantized models. I am relatively new to this; my question is whether the quantized model already comes prepared with MistralTokenizer, or does this have to be specified separately?
Hi @Esj-DL
You are very welcome, glad it's useful. Llama.cpp packages the tokenizer inside the GGUF, so there is nothing to do. You just need to follow the model's prompt template for better results. Since this model has new features like tool calling, here is a nice example of how the prompt should look:
https://github.com/mistralai/mistral-common/blob/main/examples/tokenizer.ipynb
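As a rough illustration of what "following the template" means, here is a minimal sketch that formats a conversation with the Mistral instruct-style special tokens. The exact token strings (`<s>`, `[INST]`, `[/INST]`, `</s>`) are assumptions for illustration; when you run the GGUF with llama.cpp's chat mode, the template stored in the GGUF metadata is applied for you, and for tool calling you should follow the notebook linked above.

```python
# Sketch of a Mistral-style instruct prompt builder (assumed token layout,
# not an official implementation). llama.cpp reads the real template from
# the GGUF metadata, so in practice you rarely build this string by hand.

def build_prompt(turns):
    """turns: list of (user, assistant) pairs; the final assistant
    entry may be None when you want the model to generate the reply."""
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            # Completed assistant turns are closed with the EOS token.
            prompt += f" {assistant}</s>"
    return prompt

print(build_prompt([("Hello", "Hi there!"), ("What is GGUF?", None)]))
```

This only shows the general shape of an instruct template; for the authoritative formatting, including tool-call messages, rely on the `mistral-common` tokenizer from the linked notebook or on the template embedded in the GGUF.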
Thanks for your reply.