Pragades committed on commit b33600b
1 parent: 9ab0058

Pragades/LlaMa_3.1_8Billion_instruct (Interviewer)
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6314
+- Loss: 0.5818
 
 ## Model description
 
@@ -44,23 +44,17 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- training_steps: 1000
+- training_steps: 450
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.0074        | 0.0440 | 100  | 0.8141          |
-| 0.7582        | 0.0879 | 200  | 0.7417          |
-| 0.7663        | 0.1319 | 300  | 0.7115          |
-| 0.6939        | 0.1758 | 400  | 0.6899          |
-| 0.6787        | 0.2198 | 500  | 0.6792          |
-| 0.6553        | 0.2637 | 600  | 0.6664          |
-| 0.6747        | 0.3077 | 700  | 0.6535          |
-| 0.6614        | 0.3516 | 800  | 0.6404          |
-| 0.6343        | 0.3956 | 900  | 0.6343          |
-| 0.6264        | 0.4396 | 1000 | 0.6314          |
+| 0.7086        | 0.0615 | 100  | 0.6413          |
+| 0.6102        | 0.1229 | 200  | 0.6121          |
+| 0.596         | 0.1844 | 300  | 0.5943          |
+| 0.5438        | 0.2459 | 400  | 0.5818          |
 
 
 ### Framework versions
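The new run sets training_steps to 450 with lr_scheduler_type "linear" and lr_scheduler_warmup_ratio 0.1, i.e. the learning rate ramps up over the first 10% of steps (45 here) and then decays linearly toward zero. A minimal pure-Python sketch of that schedule's shape (an illustration of the standard linear-warmup/linear-decay curve, not the trainer's actual implementation):

```python
def linear_schedule(step, total_steps=450, warmup_ratio=0.1):
    """LR multiplier at a given step: linear warmup, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 45 with the values above
    if step < warmup_steps:
        return step / warmup_steps
    # Decay linearly from 1.0 at the end of warmup to 0.0 at total_steps.
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

assert linear_schedule(0) == 0.0    # start of warmup
assert linear_schedule(45) == 1.0   # peak, at warmup_ratio * total_steps
assert linear_schedule(450) == 0.0  # fully decayed
```

The actual learning rate at a step is the base rate times this multiplier.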
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "k_proj",
         "q_proj",
-        "gate_proj",
         "down_proj",
-        "up_proj",
+        "gate_proj",
         "o_proj",
-        "v_proj"
+        "k_proj",
+        "v_proj",
+        "up_proj"
     ],
     "task_type": "CAUSAL_LM",
     "use_dora": false,
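The target_modules change above only reorders the list; both versions adapt the same seven projection layers, so the set of LoRA-targeted modules is unchanged. A quick check, with both lists copied from the diff above:

```python
# target_modules before and after this commit, taken verbatim from the diff.
old = ["k_proj", "q_proj", "gate_proj", "down_proj", "up_proj", "o_proj", "v_proj"]
new = ["q_proj", "down_proj", "gate_proj", "o_proj", "k_proj", "v_proj", "up_proj"]

# The order differs, but the set of adapted modules is identical.
assert set(old) == set(new)
print(sorted(new))
# ['down_proj', 'gate_proj', 'k_proj', 'o_proj', 'q_proj', 'up_proj', 'v_proj']
```

List order in this field has no effect on which layers receive adapters, which is consistent with the adapter weights file below keeping exactly the same size.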
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4d6397aaa0aeb991bf1d1381856249767bef9d09b5fa7efcb0b39fb4c5976c7d
+oid sha256:f259c7e8af3f9580e7fd630df9558baf177e1ffa8c2e5e3743ee20aa185600e8
 size 167832240
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a81b8a257e23d2dccbb41d4b10081a088a7723f8bf32bd26e184faa007922b24
-size 5432
+oid sha256:cfa3542fa429eb8840bf3b7568272cf9ac1f7dc9fa247ab5a346ba599a6aa7ea
 size 5432
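Both binary files above are Git LFS pointer files: three `key value` lines (version, oid, size). In this commit only the oid lines change, so the contents of the adapter weights and training args differ but their serialized sizes are identical. A minimal sketch of parsing such a pointer, using the new training_args.bin pointer from the diff above:

```python
# New Git LFS pointer for training_args.bin, copied from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:cfa3542fa429eb8840bf3b7568272cf9ac1f7dc9fa247ab5a346ba599a6aa7ea
size 5432
"""

# Each line is "key value"; split on the first space only,
# since the value itself may contain no further structure we care about.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())

assert fields["version"] == "https://git-lfs.github.com/spec/v1"
assert fields["oid"].startswith("sha256:")
assert int(fields["size"]) == 5432
```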