Update README.md
README.md
CHANGED
@@ -29,6 +29,7 @@ SambaLingo-Thai-Base is a pretrained Bi-lingual Thai and English model that adap
 - **Language(s):** Thai, English
 - **Finetuned from model:** [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 - **Try the chat version of this model**: [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
+- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 - **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
 
 ## Getting Started
@@ -54,18 +55,7 @@ All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonl
 We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
 
 ## Evaluation
-
-|                              | SambaLingo-Thai-Base | typhoon-7b | bloom-7b1 | xglm-7.5B | mGPT-13B |
-|------------------------------|----------------------|------------|-----------|-----------|----------|
-| Perplexity (Lower Is Better) | **1.288**            | 1.373      | 1.834     | 1.394     | 1.966    |
-| FLORES en->th (8 shot, CHRF) | **0.433**            | 0.347      | 0.095     | 0.198     | 0.032    |
-| FLORES th->en (8 shot, CHRF) | **0.536**            | 0.465      | 0.138     | 0.431     | 0.016    |
-| FLORES en->th (8 shot, BLEU) | **0.019**            | 0.004      | 0.000     | 0.003     | 0.000    |
-| FLORES th->en (8 shot, BLEU) | **0.247**            | 0.188      | 0.003     | 0.147     | 0.000    |
-| Belebele (3 shot)            | 37.11%               | **52.22%** | 24.11%    | 22.44%    | 26.89%   |
-| SIB-200 (3 shot)             | 62.25%               | **75.49%** | 23.04%    | 63.73%    | 44.12%   |
-| XCOPA (0 shot)               | **61.40%**           | 60.60%     | 55.40%    | 59.40%    | 52.80%   |
-| XNLI (0 shot)                | **44.65%**           | 43.01%     | 34.87%    | 43.73%    | 39.24%   |
+For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 
 ## Uses
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
@@ -109,12 +99,12 @@ We would like to give a special thanks to the following groups:
 
 ## Cite SambaLingo
 ```
-@
-
-
-
-
-
-
+@misc{csaki2024sambalingo,
+      title={SambaLingo: Teaching Large Language Models New Languages},
+      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
+      year={2024},
+      eprint={2404.05829},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
 }
 ```
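
For quick context on the model links in the first hunk, a minimal sketch of loading the base model with `transformers`. The repo id `sambanovasystems/SambaLingo-Thai-Base` is an assumption inferred from the org's chat-space link, and the prompt is a placeholder; the README's own Getting Started section remains the authoritative reference:

```python
# Sketch only: assumed repo id and placeholder prompt, not the official
# Getting Started snippet from the README.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sambanovasystems/SambaLingo-Thai-Base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" requires the `accelerate` package; drop it for CPU-only use.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Placeholder Thai prompt ("The capital city of Thailand is ...").
inputs = tokenizer("เมืองหลวงของประเทศไทยคือ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```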
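
The vocabulary-extension step kept as context in the second hunk (32,000 to 57,000 tokens) can be illustrated with the standard `transformers` tokenizer-extension flow. This is a hedged sketch, not the actual SambaLingo procedure; the token list is a placeholder:

```python
# Illustrative sketch of vocabulary extension: add new-language tokens to a
# Llama tokenizer and resize the embedding matrix to match. The tokens below
# are hypothetical examples, not the tokens SambaLingo actually added.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical Thai tokens mined from a corpus; add_tokens skips any that
# already exist, which matches the "non-overlapping" wording above.
new_tokens = ["ประเทศ", "ภาษา", "เมือง"]  # ... up to 25,000 in practice
num_added = tokenizer.add_tokens(new_tokens)

# Grow the input/output embeddings so the new token ids have rows to train.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocabulary is now {len(tokenizer)} tokens")
```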
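
The headline metric in the removed evaluation table, perplexity, is the exponentiated mean token-level cross-entropy on held-out text. A rough sketch, again assuming the repo id above and a placeholder sample rather than the paper's actual evaluation corpus and windowing:

```python
# Sketch of a perplexity measurement: mean cross-entropy over a text sample,
# exponentiated. The sample below is a placeholder, not the paper's corpus.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sambanovasystems/SambaLingo-Thai-Base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "ตัวอย่างข้อความภาษาไทยสำหรับวัดค่าความงง"  # placeholder Thai sample
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # With labels supplied, the model returns the mean shifted cross-entropy.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(f"perplexity: {math.exp(loss.item()):.3f}")
```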
|