Model answering with all newlines?

#19
opened by jamesbraza
from huggingface_hub import InferenceClient  # huggingface-hub[inference]==0.17.3

client = InferenceClient(model="HuggingFaceH4/zephyr-7b-alpha")
hi = client.text_generation(
    "Some choices are given below. It is provided in a numbered list (1 to 2), where"
    " each item in the list corresponds to a summary.\n---------------------\n(1)"
    " Provides information on cell lines like cell aliases, planes, and trains\n\n(2)"
    " Provides information on abc 123\n---------------------\nUsing only the choices above"
    " and not prior knowledge, return the choice that is most relevant to the question:"
    " 'What are the aliases for MLE12?'\n\n\nThe output should be ONLY JSON formatted"
    " as a JSON instance.\n\nHere is an example:\n[\n    {{\n        choice: 1,\n      "
    '  reason: "<insert reason for choice>"\n    }},\n    ...\n]\n'
)
print(hi)  # the completion comes back as a long run of "\n" characters

Here is a prompt that leads the model to generate 20 \n newlines instead of an answer. What is the issue here, and why would the model do that?

Hugging Face H4 org

Hello @jamesbraza, the model was trained with a chat template, and you need to format your inputs the same way to ensure the model terminates generation at the right place. See the README for an example of how to format the inputs :)
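
For reference, here is a minimal sketch of that formatting, assuming transformers >= 4.34 (which exposes tokenizer.apply_chat_template); the messages below are placeholders standing in for the full selection prompt from the question:

from huggingface_hub import InferenceClient
from transformers import AutoTokenizer

# Reuse the chat template the model was trained with
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

# Placeholder messages -- substitute the full selection prompt from the question
messages = [
    {"role": "system", "content": "Answer with ONLY JSON."},
    {"role": "user", "content": "What are the aliases for MLE12?"},
]

# Renders the <|system|>/<|user|>/<|assistant|> turn markers and appends the
# generation prompt so the model knows an assistant turn should begin
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

client = InferenceClient(model="HuggingFaceH4/zephyr-7b-alpha")
print(client.text_generation(prompt, max_new_tokens=256))

Without those turn markers, the model has no cue for where an assistant turn begins or ends, which is why a raw prompt can drift into degenerate output like a run of newlines.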

Ah gotchu, and thank you!
