RuntimeError: cutlassF: no kernel found to launch!
python qwen_moe_test.py
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████| 8/8 [00:07<00:00, 1.04it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/home/czb/src/czb/qwen_moe_test.py", line 23, in <module>
    generated_ids = model.generate(
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/transformers/generation/utils.py", line 1577, in generate
    result = self._sample(
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/transformers/generation/utils.py", line 2733, in _sample
    outputs = self(
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/transformers/models/qwen2_moe/modeling_qwen2_moe.py", line 1356, in forward
    outputs = self.model(
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/transformers/models/qwen2_moe/modeling_qwen2_moe.py", line 1225, in forward
    layer_outputs = decoder_layer(
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/transformers/models/qwen2_moe/modeling_qwen2_moe.py", line 922, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/czb/miniconda3/envs/agi/lib/python3.10/site-packages/transformers/models/qwen2_moe/modeling_qwen2_moe.py", line 775, in forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
RuntimeError: cutlassF: no kernel found to launch!
The RTX 2080 Ti doesn't support bfloat16; should I try converting the model to float16 first?
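
For reference, this is the workaround I have in mind, as a minimal sketch: first confirm what the card reports for bf16, then reload the model with torch_dtype=torch.float16. The checkpoint name below is a placeholder for whatever Qwen MoE checkpoint qwen_moe_test.py actually loads.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 2080 Ti is a Turing card (compute capability 7.5) without native bf16,
# which I assume is why the cutlass SDPA kernel has nothing to dispatch to.
print(torch.cuda.get_device_capability(0))  # (7, 5) on a 2080 Ti
print(torch.cuda.is_bf16_supported())

model_path = "Qwen/Qwen1.5-MoE-A2.7B-Chat"  # placeholder for the real checkpoint path

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # force fp16 instead of the checkpoint's bf16 default
    device_map="auto",
)

Is casting to float16 the right fix here, or is there something else I should change?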