This doesn't work for Llama3 models
Tried to convert a Llama-3 model that I merged to GGUF, but it didn't work.
https://huggingface.co/birgermoell/Llama-3-dare_ties
Error: Error converting to fp16:
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert.py", line 1548, in <module>
    main()
  File "/home/user/app/llama.cpp/convert.py", line 1515, in main
    vocab, special_vocab = vocab_factory.load_vocab(vocab_types, model_parent_path)
  File "/home/user/app/llama.cpp/convert.py", line 1417, in load_vocab
    vocab = self._create_vocab_by_path(vocab_types)
  File "/home/user/app/llama.cpp/convert.py", line 1407, in _create_vocab_by_path
    raise FileNotFoundError(f"Could not find a tokenizer matching any of {vocab_types}")
FileNotFoundError: Could not find a tokenizer matching any of ['spm', 'hfft']
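For context, this is the fp16 conversion step the Space runs. A minimal sketch of the equivalent local command, assuming the merged model was downloaded to ./Llama-3-dare_ties (the directory name and the f16 outtype are assumptions; the script path comes from the traceback):

    python llama.cpp/convert.py ./Llama-3-dare_ties --outtype f16

As the traceback shows, convert.py only tries the SentencePiece ('spm') and Hugging Face ('hfft') vocab loaders by default, and neither matches Llama 3's tokenizer, hence the FileNotFoundError.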
I get the same error; it would be cool if the developers could fix it.
Llama 3 is not supported atm:
https://github.com/ggerganov/llama.cpp/pull/6745
I still don't understand: is this temporary, or is it not supported at all?
https://github.com/ggerganov/llama.cpp/pull/6745
This PR has been merged, so Llama 3 is now supported.
Please update this app.
Just rebuild it. This will work.
Just restarted the app to pull the latest llama.cpp; running some quick tests on it.
Alright, made a small patch for Llama models to go through the HF convert script, and it works now: https://huggingface.co/reach-vb/llama-3-8b-Q8_0-GGUF 🤗
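For anyone who wants to reproduce this locally, the steps are roughly as follows; a sketch, assuming a checkout of llama.cpp that includes PR #6745 and a model downloaded to ./llama-3-8b (all paths and output filenames here are placeholders):

    python llama.cpp/convert-hf-to-gguf.py ./llama-3-8b --outtype f16 --outfile llama-3-8b-f16.gguf
    ./llama.cpp/quantize llama-3-8b-f16.gguf llama-3-8b-Q8_0.gguf Q8_0

The first command writes an f16 GGUF via the HF convert script mentioned above; the second uses llama.cpp's quantize tool to produce the Q8_0 file.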
https://github.com/ggerganov/llama.cpp/blob/master/docs/HOWTO-add-model.md
Convert the model to GGUF
This step is done in Python with a convert script using the gguf library. Depending on the model architecture, you can use either convert.py or convert-hf-to-gguf.py.
Looks like convert-hf-to-gguf.py can convert any supported model architecture.
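As a side note, both convert scripts write the output file through the gguf Python package. A minimal sketch of that writer API, adapted from the package's own writer example (the metadata values and the tensor here are illustrative only, not taken from the convert scripts):

    import numpy as np
    from gguf import GGUFWriter

    # Open a writer for a file declaring the "llama" architecture.
    writer = GGUFWriter("example.gguf", "llama")

    # Key/value metadata goes into the header.
    writer.add_block_count(12)
    writer.add_uint32("answer", 42)

    # Tensors are registered by name; data is a plain numpy array.
    tensor = np.ones((32,), dtype=np.float32)
    writer.add_tensor("tensor1", tensor)

    # The file is written in three passes: header, KV data, then tensors.
    writer.write_header_to_file()
    writer.write_kv_data_to_file()
    writer.write_tensors_to_file()
    writer.close()

This is essentially what convert-hf-to-gguf.py does at scale: map each Hugging Face tensor to a GGUF tensor name and emit the architecture metadata the llama.cpp loader expects.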
Cool, so I think this can now be closed!
Please confirm @birgermoell.
It worked as intended for me.
(closing since it is fixed)