Any constructive help is always welcome. :/
bin C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll
C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
function 'cadam32bit_grad_fp32' not found
INFO:Loading facebook_opt-6.7b...
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████| 2/2 [00:25<00:00, 12.67s/it]
Traceback (most recent call last):
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 1102, in
shared.model, shared.tokenizer = load_model(shared.model_name)
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\models.py", line 97, in load_model
output = load_func(model_name)
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\models.py", line 160, in huggingface_loader
model = model.cuda()
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 905, in cuda
return self._apply(lambda t: t.cuda(device))
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 905, in
return self.apply(lambda t: t.cuda(device))
File "C:\Users\Mwall\Desktop\Hmm\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\torch\cuda_init.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Press any key to continue . . .
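If I'm reading the traceback right, the loader ends up calling model.cuda() while the Torch inside installer_files\env is a CPU-only build. Here's the quick check I can run with that env's Python to confirm (just a sketch, nothing webui-specific):

```python
# Quick check of whether this Torch build has CUDA support.
# Run with the Python from installer_files\env (the one in the log above).
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel
print(torch.cuda.is_available())  # False on a CPU-only build
print(torch.version.cuda)         # None on a CPU-only build
```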
I need help. I went through the whole installer, set it to CPU, and it was running! Now it wants to use CUDA and I don't know how to disable it :_(
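For context, what I expect CPU mode to do is basically the following: load the model without ever calling .cuda(). This is just a rough sketch with the plain transformers API to show what I mean, not the webui's own loader (facebook/opt-6.7b is the model from the log):

```python
# Rough sketch of loading the same model purely on CPU with transformers,
# i.e. never calling model.cuda(). Illustration only, not the webui's loader.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-6.7b"  # model from the log above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float32,   # CPU-friendly dtype, no CUDA needed
    low_cpu_mem_usage=True,      # avoid holding the full model in RAM twice
)

inputs = tokenizer("Hello, my name is", return_tensors="pt")  # tensors stay on CPU
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

I think server.py also has a --cpu flag that is supposed to keep everything off the GPU, but I'm not sure where the one-click installer expects me to put it, so any pointers are appreciated.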