which tokenizer.model should be used?
#4
by aotsukiqx - opened
I don't see a tokenizer.model in the repo. Should I use LLaMA's tokenizer.model to run it?
This is a GGML model, so the tokenizer is embedded in the GGML file itself.
You can't load it with plain transformers; if you want to load it from Python, use llama-cpp-python or ctransformers instead.
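A minimal sketch of loading a GGML file with llama-cpp-python, as suggested above. The path `model.ggml.bin` is a placeholder, not a file from this repo; no separate tokenizer file is passed because the tokenizer ships inside the GGML file.

```python
# Sketch: load a GGML model with llama-cpp-python (pip install llama-cpp-python).
# The tokenizer is read from the GGML file itself, so only the model path is needed.

def load_ggml(model_path: str):
    """Load a GGML model; the import is deferred so the helper is cheap to define."""
    from llama_cpp import Llama
    return Llama(model_path=model_path)

if __name__ == "__main__":
    # "model.ggml.bin" is a hypothetical filename for illustration.
    llm = load_ggml("model.ggml.bin")
    out = llm("Q: What does a tokenizer do? A:", max_tokens=32)
    print(out["choices"][0]["text"])
```

ctransformers works similarly: it also takes just the model file path and handles tokenization internally.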
Thank you very much for the response. I will try it out. I'm a beginner with LLMs and just starting to learn. :)
aotsukiqx changed discussion status to closed