CUDA error when using the pipeline code example provided on the model page
CUDA Version 12.1
If I use the sample GPU inference code from the model page (https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), I get the error below:
/modeling_phi3.py", line 346, in forward
attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasGemmStridedBatchedEx(handle, opa, opb, (int)m, (int)n, (int)k, (void*)&falpha, a, CUDA_R_16BF, (int)lda, stridea, b, CUDA_R_16BF, (int)ldb, strideb, (void*)&fbeta, c, CUDA_R_16BF, (int)ldc, stridec, (int)num_batches, compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
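For context, the model and tokenizer are loaded as in the model card. This is my best recollection of that snippet, so exact arguments may differ slightly; note that torch_dtype="auto" resolves to bfloat16 here, which matches the CUDA_R_16BF dtype in the trace above:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.random.manual_seed(0)

# model card setup (from memory): auto dtype (bf16 weights), everything on the GPU
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")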
If I instead use:
inputs = tokenizer('What can you teach me today?', return_tensors="pt").input_ids.to('cuda')
outputs = model.generate(inputs, max_new_tokens=500)
result = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(result)
I get the generated text.
If I modify it to:
messages = [{"role": "user", "content": "What can you teach me today?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=500)
result = tokenizer.batch_decode(outputs, skip_special_tokens=True)
I still get the generated text.
If I now try to use the same messages as in the sample code:
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=500)
result = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(result)
I get the same CUDA-related error mentioned above.
Could somebody please help?
I believe Phi-3 does not take a system prompt. I would change the role from "system" to "user".
EDIT: see: https://github.com/ollama/ollama/issues/3848#issuecomment-2073671215 and https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/discussions/51#:~:text=The%20model%20has%20not%20been,than%20a%20separate%20system%20instruction.
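For example, one way to apply this to the sample conversation is to fold the system text into the first user turn (a sketch of the suggested change, not something I have run against the model):
messages = [
    # system instruction merged into the first user message
    {"role": "user", "content": "You are a helpful AI assistant. Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: ..."},  # abbreviated
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=500)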
Even when using just the user role, the error occurs inconsistently.
The code below generates the text:
messages = [{"role": "user", "content": "What can you teach me today?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=500)
result = tokenizer.batch_decode(outputs, skip_special_tokens=True)
But if I just add more text to the content, it fails:
messages = [{"role": "user", "content": "What can you teach me about the emergent properties of LLMs?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=500)
result = tokenizer.batch_decode(outputs)[0]
print(result)
I get the same CUDA error:
/modeling_phi3.py", line 346, in forward
attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasGemmStridedBatchedEx(handle, opa, opb, (int)m, (int)n, (int)k, (void*)&falpha, a, CUDA_R_16BF, (int)lda, stridea, b, CUDA_R_16BF, (int)ldb, strideb, (void*)&fbeta, c, CUDA_R_16BF, (int)ldc, stridec, (int)num_batches, compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
Can somebody please suggest how to debug this?
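The only things I have thought of trying so far (my own guesses, not from the model card) are re-running with synchronous kernel launches so the stack trace points at the actual failing call, and forcing float16 instead of the bfloat16 that torch_dtype="auto" picks, since the failing cuBLAS call is a bf16 GEMM (CUDA_R_16BF):
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # set before CUDA is initialised; surfaces the error at the real call site

import torch
from transformers import AutoModelForCausalLM

# assumption on my part: loading in fp16 avoids the bf16 cuBLAS path shown in the trace
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="cuda",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)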