Pulled the llama.cpp main branch today and converted with convert.py to fp16.
llama_model_load: error loading model: create_tensor: tensor 'blk.0.ffn_gate.weight' not found
Shrug, sounds like a bug in llama.cpp.
Use convert-hf-to-gguf.py instead of convert.py, with something like the command below.
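A minimal sketch of the invocation, assuming a recent llama.cpp checkout; the model directory and output filename are placeholders, and flag names can differ between versions, so check `python convert-hf-to-gguf.py --help` first:

```sh
# /path/to/model-dir is the local Hugging Face model directory (placeholder).
# --outtype f16 writes an fp16 GGUF; --outfile names the output (flags may vary by version).
python convert-hf-to-gguf.py /path/to/model-dir \
    --outtype f16 \
    --outfile model-f16.gguf
```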