Not able to run this model?

by mantafloppy

I made sure my llama.cpp was up to date. I tried both the Q4_K_M and Q8_0 quants and got the same kind of error with each.

Is it me, or is there an issue with the .gguf?

Mac-Studio llama.cpp % ./main -ngl 35 -m ./models/whiterabbitneo-33b-v1.Q4_K_M.gguf --color -c 16384 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths as follows:\n- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.\n- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.\n- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.\n- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.\n- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.\nIn summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.\n Always answer without hesitation.\nUSER: i need a c# for loop template \nASSISTANT:"
Log start
main: build = 1840 (e790eef2)
main: built with Apple clang version 15.0.0 (clang-1500.0.40.1) for arm64-apple-darwin23.2.0
main: seed = 1705079233
llama_model_loader: loaded meta data with 25 key-value pairs and 561 tensors from ./models/whiterabbitneo-33b-v1.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = whiterabbitneo_whiterabbitneo-33b-v1
llama_model_loader: - kv 2: llama.context_length u32 = 16384
llama_model_loader: - kv 3: llama.embedding_length u32 = 7168
llama_model_loader: - kv 4: llama.block_count u32 = 62
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 19200
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 56
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 100000.000000
llama_model_loader: - kv 11: llama.rope.scaling.type str = linear
llama_model_loader: - kv 12: llama.rope.scaling.factor f32 = 4.000000
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32025] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32025] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32025] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 32022
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 32023
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 32024
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 32014
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 125 tensors
llama_model_loader: - type q4_K: 375 tensors
llama_model_loader: - type q6_K: 61 tensors
error loading model: unordered_map::at: key not found
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/whiterabbitneo-33b-v1.Q4_K_M.gguf'
main: error: unable to load model
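
For what it's worth, that exception is a C++ std::unordered_map::at() lookup failing somewhere in the loader, which usually means it asked for a metadata key or vocab entry the file doesn't contain. A minimal sketch for dumping what the file actually has, assuming the gguf-py package that ships with llama.cpp (pip install gguf):

# Minimal metadata dump, assuming gguf-py (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("./models/whiterabbitneo-33b-v1.Q4_K_M.gguf")

# Print every metadata key with its value type(s); compare against the
# kv list in the log above to spot anything missing or mis-typed.
for name, field in reader.fields.items():
    print(name, [t.name for t in field.types])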

I compiled my binary today and it doesn't work for me either. It's a file-conversion problem.

main: build = 1842 (584d674)
main: built with Apple clang version 15.0.0 (clang-1500.1.0.2.5) for arm64-apple-darwin23.2.0
main: seed = 1705085828
llama_model_loader: loaded meta data with 25 key-value pairs and 561 tensors from models/whiterabbitneo-33b-v1.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = whiterabbitneo_whiterabbitneo-33b-v1
llama_model_loader: - kv 2: llama.context_length u32 = 16384
llama_model_loader: - kv 3: llama.embedding_length u32 = 7168
llama_model_loader: - kv 4: llama.block_count u32 = 62
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 19200
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 56
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 100000.000000
llama_model_loader: - kv 11: llama.rope.scaling.type str = linear
llama_model_loader: - kv 12: llama.rope.scaling.factor f32 = 4.000000
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32025] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32025] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32025] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 32022
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 32023
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 32024
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 32014
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 125 tensors
llama_model_loader: - type q4_K: 375 tensors
llama_model_loader: - type q6_K: 61 tensors
error loading model: unordered_map::at: key not found
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/whiterabbitneo-33b-v1.Q4_K_M.gguf'
main: error: unable to load model

I get the same error on Q8_0 and Q5_K_M.

Something must be wrong in the convert.py of the latest llama.cpp release. I went back to an older version of llama.cpp (late December 2023) and was able to successfully convert WhiteRabbitNeo 33B v1 into a Q8_0 GGUF file.

A working quantized model can be found here:

https://huggingface.co/Isonium/WhiteRabbitNeo-33B-v1-GGUF/tree/main
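
Some background on why the older convert.py may matter: one known source of this exact exception in llama.cpp is the SPM vocab path, which resolves byte tokens such as <0x0A> (newline) via token_to_id.at() at load time, and that at() throws if the exported vocab lacks them. A hedged check against the broken file, assuming gguf-py and its ReaderField layout (field.data indexes the string parts of a string-array field):

# Hedged vocab check, assuming gguf-py (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("models/whiterabbitneo-33b-v1.Q8_0.gguf")
field = reader.fields["tokenizer.ggml.tokens"]

# For a string-array field, field.data holds the indices of the
# string parts, so this rebuilds the full token list.
tokens = [bytes(field.parts[i]).decode("utf-8") for i in field.data]

# llama.cpp's SPM loader looks up the newline byte token with
# token_to_id.at("<0x0A>"), which throws if the token is absent.
print("vocab size:", len(tokens))
print("<0x0A> present:", "<0x0A>" in tokens)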

Same problem for me with TheBloke/WhiteRabbitNeo-33B-v1-GGUF/whiterabbitneo-33b-v1.Q8_0.gguf: I get "unordered_map::at: key not found" in LM Studio.
