Update README.md
README.md CHANGED
@@ -33,7 +33,7 @@ AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티
 Deepspeed Stage=3, rslora, and flash attention 2 were used.

 Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, fine-tuned the NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO base model with SFT->DPO using four H100-80Gs on KT-CLOUD.

-It is a Korean language model trained to handle complex Korean logic problems through Korean-Chinese-English-Japanese cross-training data and logical data, and
+It is a Korean language model trained to handle complex Korean logic problems through Korean-Chinese-English-Japanese cross-training data and logical data, and the tokenizer is the base model's tokenizer, without word expansion.


 www.linkbricks.com, www.linkbricks.vc
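
For reference, a minimal sketch of what the setup described above might look like in code: rsLoRA plus flash attention 2, with the tokenizer taken unchanged from the base model, and DeepSpeed ZeRO Stage 3 assumed to be supplied externally through an accelerate/deepspeed launcher. This is not the author's training script; the LoRA rank, alpha, and target modules are illustrative assumptions, not values stated in the README.

```python
# Hedged sketch of the described configuration (not the original training code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO"

# Tokenizer comes straight from the base model, with no word/vocabulary
# expansion, as the updated README line states.
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # flash attention 2 per the README
)

# rsLoRA: rank-stabilized LoRA scaling, enabled via use_rslora in peft.
lora_config = LoraConfig(
    r=16,                        # assumed rank; not stated in the README
    lora_alpha=32,               # assumed
    target_modules="all-linear", # assumed
    use_rslora=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

DeepSpeed Stage 3 itself is not configured in this snippet; in a typical setup it would be passed as a ZeRO-3 JSON config to the launcher (e.g. via accelerate or the Trainer's deepspeed argument) rather than in the model-construction code shown here.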