Falcon-7B decoding error #90
opened by rahulseetharaman
I am trying to generate text using the Falcon-7B model. This is the name of the model checkpoint on HF:
rnosov/WizardLM-Uncensored-Falcon-7b-sharded
I am getting the following error:
RuntimeError Traceback (most recent call last)
19 frames
~/.cache/huggingface/modules/transformers_modules/ehartford/WizardLM-Uncensored-Falcon-7b/a95d8a001ec405c7d33baf704a190066949f2072/modelling_RW.py in forward(self, hidden_states, alibi, attention_mask, layer_past, head_mask, use_cache, output_attentions)
277 value_layer_ = value_layer.reshape(batch_size, self.num_kv, -1, self.head_dim)
278
--> 279 attn_output = F.scaled_dot_product_attention(
280 query_layer_, key_layer_, value_layer_, None, 0.0, is_causal=True
281 )
RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.
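If I am reading the error correctly, F.scaled_dot_product_attention requires query, key, and value to share a dtype, and here value ends up in fp16 (c10::Half) while query and key are fp32. As a sanity check, I believe a standalone call with mismatched dtypes hits the same check (tensor shapes below are just an illustration, not taken from the model):

import torch
import torch.nn.functional as F

# query/key in float32, value in float16 -> dtype-mismatch RuntimeError
q = torch.randn(1, 1, 4, 8, dtype=torch.float32)
k = torch.randn(1, 1, 4, 8, dtype=torch.float32)
v = torch.randn(1, 1, 4, 8, dtype=torch.float16)
F.scaled_dot_product_attention(q, k, v, None, 0.0, is_causal=True)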
This is how I load the model and generate text:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "rnosov/WizardLM-Uncensored-Falcon-7b-sharded"

# Load the model in 8-bit using the repo's custom modelling code
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    trust_remote_code=True,
)
model.config.use_cache = False

tokenizer = AutoTokenizer.from_pretrained(model_name, return_token_type_ids=False)

def generate_text(prompt, prefix, max_new_tokens=100, num_beams=3, temperature=0.7, num_return_sequences=3):
    model_inputs = tokenizer(prompt + prefix, return_tensors='pt', return_token_type_ids=False)
    model_output = model.generate(**model_inputs,
                                  max_new_tokens=max_new_tokens,
                                  num_beams=num_beams,
                                  do_sample=True,
                                  temperature=temperature,
                                  num_return_sequences=num_return_sequences)
    output_text = tokenizer.batch_decode(model_output, skip_special_tokens=True)
    return output_text
How do I resolve this issue? Any help in debugging this is appreciated, thanks!
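For context, would loading the weights in a single consistent dtype instead of 8-bit avoid the mismatch? A minimal sketch of what I mean (the torch_dtype and device_map values are assumptions on my part, not something I have verified as a fix):

import torch
from transformers import AutoModelForCausalLM

# Sketch: load everything in bfloat16 so query/key/value share a dtype
# (assumes enough GPU memory for the full 16-bit weights)
model = AutoModelForCausalLM.from_pretrained(
    "rnosov/WizardLM-Uncensored-Falcon-7b-sharded",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)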