Issues attempting to load model.
#1 by ReXommendation - opened
No error (there isn't enough time for one), only insane amounts of memory consumption before the terminal or my whole system crashes. This is with 32 GB of physical RAM and 16 GB of swap. This is the GPTQ plugin I'm using: https://github.com/0cc4m/GPTQ-for-LLaMa.
Can't really help without error messages, but it's normal to see RAM usage spike - the model has to be loaded into RAM before it can be moved to the GPU. Try increasing your swap/pagefile size to around 100 GB; that usually helps people load 30B models.
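If it helps, here's a minimal sketch (not from this thread) for sanity-checking whether your free RAM + swap can plausibly hold a checkpoint before you try to load it. It only needs `psutil`; the 2x headroom factor and the filename are assumptions for illustration, since loading typically needs temporary copies of the weights beyond the file size itself.

```python
# Hedged sketch: compare checkpoint size (with an assumed 2x headroom for
# temporary copies during loading) against currently free RAM + swap.
import os
import psutil

def can_fit(checkpoint_path: str, headroom: float = 2.0) -> bool:
    """Return True if free RAM + swap likely covers loading the checkpoint."""
    ckpt_bytes = os.path.getsize(checkpoint_path)
    available = psutil.virtual_memory().available + psutil.swap_memory().free
    needed = ckpt_bytes * headroom
    print(f"checkpoint: {ckpt_bytes / 2**30:.1f} GiB, "
          f"free RAM+swap: {available / 2**30:.1f} GiB, "
          f"estimated need: {needed / 2**30:.1f} GiB")
    return available >= needed

# Hypothetical path - point this at your own .safetensors/.pt file.
if not can_fit("llama-30b-4bit.safetensors"):
    print("Likely to run out of memory - increase swap before loading.")
```

If the check fails, that's a sign to grow swap first rather than letting the load crash the terminal partway through.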
I wish TheBloke had a guide or came back to converting models again. Some of the new stuff looks awesome, but I can't use GGUF because it crashes constantly.