---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
tags:
- llama-3-ko
license: mit
datasets:
- 4yo1/llama3_test1
---

### Model Card for LLaMA3-ENG-KO-8B-SL5

### Model Details

Model overview for LLaMA3-ENG-KO-8B-SL5, a fine-tuned model:

- Model Name: LLaMA3-ENG-KO-8B-SL5
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean

### Model Description

LLaMA3-ENG-KO-8B-SL5 is a language model pre-trained on a diverse corpus of English and Korean text. Its fine-tuning approach lets the model adapt to specific tasks or datasets with a minimal number of additional parameters, making it efficient and effective for specialized applications.

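The card does not spell out the exact fine-tuning recipe. As an illustration of adapting the model with only a small number of additional parameters, the sketch below assumes a LoRA-style setup via the `peft` library; the target modules and hyperparameters are illustrative assumptions, not values taken from this model.

```python
# Minimal sketch of parameter-efficient fine-tuning (LoRA).
# This is an assumption for illustration, not the recipe used for LLaMA3-ENG-KO-8B-SL5.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "4yo1/llama3-eng-ko-8b-sl5"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA inserts small low-rank adapter matrices into the attention projections,
# so only a tiny fraction of the 8B parameters is trained.
lora_config = LoraConfig(
    r=8,                                  # adapter rank (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # typical LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports how few parameters are trainable
```
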
### How to Use - Sample Code

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, weights, and tokenizer from the Hugging Face Hub.
config = AutoConfig.from_pretrained("4yo1/llama3-eng-ko-8b-sl5")
model = AutoModel.from_pretrained("4yo1/llama3-eng-ko-8b-sl5")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-eng-ko-8b-sl5")
```
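
The snippet above loads the bare transformer without a language-modeling head. For translation-style generation (the card's `pipeline_tag`), the example below is a sketch that assumes the checkpoint can be loaded with `AutoModelForCausalLM` and that a plain English instruction prompt is acceptable, since the card does not document a prompt template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "4yo1/llama3-eng-ko-8b-sl5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # assumes a causal-LM head is available

# Hypothetical English-to-Korean prompt; adjust to the model's actual prompt format.
prompt = "Translate the following English sentence into Korean: Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```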

### Datasets

- 4yo1/llama3_test1
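
To inspect the training data listed above, a minimal sketch with the `datasets` library is shown below; the dataset's splits and column names are not documented in this card, so print the object after loading.

```python
from datasets import load_dataset

# Load the dataset referenced in the card metadata and inspect its structure.
ds = load_dataset("4yo1/llama3_test1")
print(ds)
```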

### License

MIT