Add indentation (4 spaces) to `tokenizer_config.json` for readability

#5
by alvarobartt - opened
No description provided.
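For reference, a change like this can be reproduced with a small script along the following lines (a sketch of the idea, not necessarily how this PR was generated): load the existing `tokenizer_config.json` and re-serialize it with 4-space indentation.

```python
import json

# Load the current tokenizer config and rewrite it with 4-space
# indentation so the file is easier to read and diff on the Hub.
with open("tokenizer_config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

with open("tokenizer_config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=4, ensure_ascii=False)
    f.write("\n")  # trailing newline keeps diffs clean
```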

Also, I think a potential improvement would be to add the chat_template, as @lewtun already mentioned at https://huggingface.co/allenai/tulu-2-dpo-70b/discussions/2. Even though Tulu is probably intended for instruction-following scenarios rather than chat-like ones (there seems to be no "system" role or similar), templating may still help users with formatting πŸ’ͺ🏻
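As a rough sketch of what such a chat_template could look like, assuming the `<|user|>`/`<|assistant|>` prompt format described for the Tulu models (the exact template should be checked against the official formatting before merging):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-dpo-70b")

# Hypothetical Jinja chat template; assumes the <|user|>/<|assistant|>
# turn markers and no system role, per the discussion above.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "{{ '<|user|>\n' + message['content'] + '\n' }}"
    "{% elif message['role'] == 'assistant' %}"
    "{{ '<|assistant|>\n' + message['content'] + eos_token + '\n' }}"
    "{% endif %}"
    "{% if loop.last and add_generation_prompt %}"
    "{{ '<|assistant|>\n' }}"
    "{% endif %}"
    "{% endfor %}"
)

# Example: render a prompt for a single user turn.
messages = [{"role": "user", "content": "What is the capital of France?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```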

Thanks! Yeah, I'm planning to add the chat_template, since it seems quite useful.

hamishivi changed pull request status to merged
