Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ Model checkpoints are released at huggingface:
 | ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
 | ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
 
-Note that
+**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models for translation purposes.**
 
 A quick start to use our best system (ALMA-13B-LoRA) for translation. An example of translating "我爱机器翻译。" into English:
 ```
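The quick-start code itself is truncated after the opening fence in this view. Below is a minimal sketch of what loading ALMA-13B-LoRA for translation could look like, assuming the standard `transformers` + `peft` loading path; the prompt template, tokenizer choice, and generation settings are illustrative assumptions, not the README's actual snippet. The checkpoint names are taken from the table above, and the example sentence "我爱机器翻译。" means "I love machine translation."

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Load the stage-1 pretrained base (NOT a translation model on its own),
# then apply the LoRA weights on top, as the note above requires.
model = AutoModelForCausalLM.from_pretrained(
    "haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain")

# Assumed prompt template: name the language pair, give the source sentence,
# and end with a target-language cue so the model completes the translation.
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Beam search with a small budget; the settings here are illustrative.
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Loading the `Pretrain` base first and applying the LoRA adapter on top mirrors the note added in this commit: the Pretrain checkpoints alone are not usable for translation.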