
🇹🇭 OpenThaiGPT 13b 1.0.0-beta Chat, 16-bit weights in Hugging Face format.

🇹🇭 OpenThaiGPT 13b Version 1.0.0-beta is a Thai-language 13B-parameter LLaMA v2 Chat model, finetuned to follow Thai-translated instructions, with more than 10,000 of the most common Thai words added to the LLM's vocabulary for faster generation.
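
For illustration, a minimal sketch (assuming the transformers library is installed; the repo id is the checkpoint listed under Code and Weights below) that loads the tokenizer and shows the extended Thai vocabulary at work:

from transformers import AutoTokenizer

# Load the published tokenizer; its vocabulary extends the base Llama 2
# vocabulary (32,000 tokens) with additional Thai tokens.
tokenizer = AutoTokenizer.from_pretrained("openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf")
print("Vocabulary size:", len(tokenizer))

# Thai text should be segmented into fewer tokens than with the base
# Llama 2 tokenizer, which is what speeds up generation.
thai_sentence = "อยากลดความอ้วนต้องทำอย่างไร"  # "How do I lose weight?"
print("Tokens:", tokenizer.tokenize(thai_sentence))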

Licenses

Source Code: Apache Software License 2.0.
Weights: Research and commercial use.

Code and Weights

Finetune Code: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta
Inference Code: https://github.com/OpenThaiGPT/openthaigpt
Weight (Huggingface Checkpoint): https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf
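
To run the Hugging Face checkpoint directly with the transformers library, here is a minimal sketch (not the project's official inference code; it assumes enough GPU memory for 16-bit weights and the accelerate package):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # the checkpoint is published in 16-bit
    device_map="auto",          # requires the accelerate package
)

# Llama 2 chat template, as described under Description below.
prompt = (
    "[INST] <<SYS>>\n"
    "You are a question answering assistant. Answer the question as truthful and helpful as possible "
    "คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด\n"
    "<</SYS>>\n\n"
    "อยากลดความอ้วนต้องทำอย่างไร [/INST]"  # "How do I lose weight?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # the tokenizer adds the <s> BOS token
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))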

Sponsors

Supports

Description

The prompt format follows the Llama 2 chat template:

<s>[INST] <<SYS>>
system_prompt
<</SYS>>

question [/INST]

System prompt: You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด (the Thai text repeats the same instruction in Thai)
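
To assemble this prompt in code, a minimal sketch (the build_prompt helper is just for illustration):

DEFAULT_SYSTEM_PROMPT = (
    "You are a question answering assistant. "
    "Answer the question as truthful and helpful as possible "
    "คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด"
)

def build_prompt(question: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
    # Llama 2 chat template: the system prompt sits inside <<SYS>> tags
    # within the first [INST] ... [/INST] block.
    return f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{question} [/INST]"

print(build_prompt("อยากลดความอ้วนต้องทำอย่างไร"))  # "How do I lose weight?"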

How to use

  1. Install vLLM (https://github.com/vllm-project/vllm).
  2. Start the API server: python -m vllm.entrypoints.api_server --model /path/to/model --tensor-parallel-size num_gpus
  3. Run inference (curl example below):
curl --request POST \
    --url http://localhost:8000/generate \
    --header "Content-Type: application/json" \
    --data '{"prompt": "<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด\n<</SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]", "use_beam_search": false, "temperature": 0.1, "max_tokens": 512, "top_p": 0.75, "top_k": 40, "frequency_penalty": 0.3, "stop": "</s>"}'

Authors

Disclaimer: Provided responses are not guaranteed.
