Transformers
llama

ANNOYING ERROR

#4
by AnasRehman12 - opened

```
Traceback (most recent call last):
  File "/content/text-generation-webui/server.py", line 68, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/content/text-generation-webui/modules/models.py", line 78, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/content/text-generation-webui/modules/models.py", line 218, in huggingface_loader
    model = LoaderClass.from_pretrained(checkpoint, **params)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2474, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models/TheBloke_llama2_7b_chat_uncensored-GGML.
```

You are loading the model with Hugging Face Transformers, but this is a GGML model designed for llama.cpp. Select the llama.cpp loader in the webui instead.
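For context on why the Transformers path fails here: `from_pretrained` raises this `OSError` when the model directory contains none of the weight filenames it recognizes, and a quantized GGML `.bin` (e.g. `*.ggmlv3.q4_0.bin`) is not one of them. A minimal sketch of that lookup, using only the filenames quoted in the error message (illustrative, not the actual Transformers implementation):

```python
import os

# Weight files the Hugging Face loader recognizes, per the error message above.
EXPECTED_WEIGHTS = (
    "pytorch_model.bin",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
)

def find_transformers_weights(model_dir):
    """Return the first recognized weight file in model_dir, or raise OSError."""
    present = set(os.listdir(model_dir))
    for name in EXPECTED_WEIGHTS:
        if name in present:
            return name
    # Mirrors the wording of the traceback's final OSError.
    raise OSError(
        f"Error no file named {', '.join(EXPECTED_WEIGHTS)} "
        f"found in directory {model_dir}."
    )
```

A GGML folder holds only the quantized `.bin` file, whose name never matches this exact-name list, so the check fails every time; pointing the webui at the llama.cpp loader avoids the Transformers path entirely.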
