Getting an error TypeError: unsupported operand type(s) for *: 'Tensor' and 'NoneType'
Trying to fine-tune the model on my GPU machine (Tesla V100), but I get the error below. The same code works fine in Colab.
modelling_RW.py", line 93, in forward
return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
~~^~~~~
TypeError: unsupported operand type(s) for *: 'Tensor' and 'NoneType'
Any help on this is highly appreciated.
Getting the same error
@NajiAboo Setting device_map to "auto" fixed the issue for me. I am using this script: https://gist.github.com/pacman100/1731b41f7a90a87b457e8c5415ff1c14?permalink_comment_id=4600438
Having the same error, any help?
from transformers import AutoModelForCausalLM

# model_name and bnb_config are defined earlier in the training script
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
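For reference, bnb_config in the snippet above is a bitsandbytes quantization config defined earlier in the script. A minimal sketch of one possible setup (the exact 4-bit settings here are an assumption, not taken from the original post):

import torch
from transformers import BitsAndBytesConfig

# Hypothetical 4-bit quantization config; adjust to match your own setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)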
It's working now, thanks. I had to set device_map="auto" as shown above.
I tried changing the device_map, but now I am getting the error below:
ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. Make sure you loaded the model on the correct device using for example device_map={'': torch.cuda.current_device()} or device_map={'': torch.xpu.current_device()}
Any suggestions, please?
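In case it helps, the error message itself suggests pinning the whole model to the device you train on. A minimal sketch for single-GPU training, assuming model_name and bnb_config from the snippet above:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map={"": torch.cuda.current_device()},  # keep every module on the training GPU
    trust_remote_code=True,
)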
Running into a similar issue. Any update on this?
Same error on my end. Resolved it for the time being by not passing the bitsandbytes config (version 0.40.0).
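If anyone wants to try that workaround, a minimal sketch of loading without the bitsandbytes config, falling back to fp16 weights (this is my reading of "not passing the bitsandbytes", so treat it as an assumption):

import torch
from transformers import AutoModelForCausalLM

# Workaround sketch: skip quantization_config entirely and load the weights in fp16.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)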