Error report.
#1 opened by John6666
The environment is the same as before. Unless this is a new LLM that llama.cpp does not support yet, the error I got simply means something is wrong with the model itself:
llama_model_load: error loading model: vocab size mismatch
I can confirm this error occurs on the latest master of llama.cpp:
root@StormPeak:~/llama.cpp# ./llama-cli -m LoomAI.Q4_K_M.gguf -p "I believe the meaning of life is" -n 256 -c 700
Log start
main: build = 3590 (4b9afbbe)
main: built with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu
main: seed = 1723893539
llama_model_loader: loaded meta data with 36 key-value pairs and 291 tensors from LoomAI.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = LoomAI
llama_model_loader: - kv 3: general.organization str = Shrujan142
llama_model_loader: - kv 4: general.size_label str = 6.7B
llama_model_loader: - kv 5: llama.block_count u32 = 32
llama_model_loader: - kv 6: llama.context_length u32 = 4096
llama_model_loader: - kv 7: llama.embedding_length u32 = 4096
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 9: llama.attention.head_count u32 = 32
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: llama.vocab_size u32 = 32000
llama_model_loader: - kv 15: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 16: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 17: tokenizer.ggml.model str = llama
llama_model_loader: - kv 18: tokenizer.ggml.pre str = default
llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,32001] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 20: tokenizer.ggml.scores arr[f32,32001] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,32001] = [3, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 24: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - kv 29: general.url str = https://huggingface.co/mradermacher/L...
llama_model_loader: - kv 30: mradermacher.quantize_version str = 2
llama_model_loader: - kv 31: mradermacher.quantized_by str = mradermacher
llama_model_loader: - kv 32: mradermacher.quantized_at str = 2024-08-17T11:08:26+02:00
llama_model_loader: - kv 33: mradermacher.quantized_on str = leia
llama_model_loader: - kv 34: general.source.url str = https://huggingface.co/shrujan142/LoomAI
llama_model_loader: - kv 35: mradermacher.convert_type str = hf
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: special tokens cache size = 4
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.80 GiB (4.84 BPW)
llm_load_print_meta: general.name = LoomAI
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: max token length = 48
llama_model_load: error loading model: vocab size mismatch
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'LoomAI.Q4_K_M.gguf'
main: error: unable to load model
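The cause is already visible in the metadata dump: kv 14 says llama.vocab_size = 32000, while the tokenizer arrays at kv 19-21 each hold 32001 entries, so the loader bails out. A minimal sketch to verify this outside llama.cpp, assuming the gguf Python package from the llama.cpp repo (pip install gguf); the parts/data indexing is my reading of its GGUFReader field layout:

```python
# Minimal sketch, assuming the gguf package that ships with llama.cpp.
from gguf import GGUFReader

reader = GGUFReader("LoomAI.Q4_K_M.gguf")

# llama.vocab_size is a scalar u32: its value is the single element of
# the part that field.data points at.
vs_field = reader.fields["llama.vocab_size"]
vocab_size = int(vs_field.parts[vs_field.data[0]][0])

# tokenizer.ggml.tokens is a string array: field.data holds one part
# index per token, so its length is the token count.
tok_field = reader.fields["tokenizer.ggml.tokens"]
n_tokens = len(tok_field.data)

print(f"llama.vocab_size      = {vocab_size}")  # 32000 in the dump above
print(f"tokenizer.ggml.tokens = {n_tokens}")    # 32001 in the dump above
if vocab_size != n_tokens:
    print("mismatch -> llama.cpp aborts with 'vocab size mismatch'")
```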
Surprising that it works with transformers. I'll delete this repo then.
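That it loads in transformers is plausible: transformers builds the tokenizer from the tokenizer files and the model from config.json independently, and presumably never cross-checks the two counts at load time. A hedged sketch of that cross-check; the repo id is taken from general.source.url (kv 34) above, and its continued availability is an assumption:

```python
# Hedged sketch of the consistency check transformers does not appear to
# perform at load time; repo id from general.source.url in the log.
from transformers import AutoConfig, AutoTokenizer

repo = "shrujan142/LoomAI"  # assumption: the repo is still available
config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# If these disagree (likely 32000 vs 32001 here), any generated token id
# >= config.vocab_size would index past the embedding matrix.
print(config.vocab_size, len(tokenizer))
```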