hellonet22 committed on
Commit a39d577
1 Parent(s): 4602c46

End of training
Files changed (2)
  1. README.md +28 -9
  2. generation_config.json +3 -3
README.md CHANGED
@@ -1,6 +1,4 @@
 ---
-license: mit
-base_model: gpt2
 tags:
 - generated_from_trainer
 model-index:
@@ -13,7 +11,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # codeparrot-ds
 
-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
+This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.1005
 
 ## Model description
 
@@ -33,20 +33,39 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0005
-- train_batch_size: 128
-- eval_batch_size: 128
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 1024
+- total_train_batch_size: 256
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 1000
 - num_epochs: 1
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:------:|:-----:|:---------------:|
+| 2.628 | 0.0766 | 5000 | 1.9111 |
+| 1.7546 | 0.1533 | 10000 | 1.6929 |
+| 1.6004 | 0.2299 | 15000 | 1.5822 |
+| 1.5131 | 0.3065 | 20000 | 1.5084 |
+| 1.4467 | 0.3832 | 25000 | 1.4519 |
+| 1.3917 | 0.4598 | 30000 | 1.4033 |
+| 1.3427 | 0.5365 | 35000 | 1.3530 |
+| 1.293 | 0.6131 | 40000 | 1.3065 |
+| 1.2472 | 0.6897 | 45000 | 1.2618 |
+| 1.2003 | 0.7664 | 50000 | 1.2165 |
+| 1.1583 | 0.8430 | 55000 | 1.1735 |
+| 1.1135 | 0.9196 | 60000 | 1.1337 |
+| 1.0763 | 0.9963 | 65000 | 1.1005 |
+
+
 ### Framework versions
 
-- Transformers 4.44.0
-- Pytorch 2.4.0+cu121
-- Datasets 2.20.0
+- Transformers 4.45.0.dev0
+- Pytorch 2.1.0
+- Datasets 2.19.1
 - Tokenizers 0.19.1
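The updated hyperparameters imply the effective batch size and the shape of the learning-rate curve. A minimal sketch, assuming a linear-warmup-plus-cosine-decay schedule for `lr_scheduler_type: cosine`; the total step count of 65250 is an estimate inferred from the results table (65000 steps at epoch 0.9963), not a value stated in the card:

```python
import math

# From the updated hyperparameters in the diff above.
train_batch_size = 32
gradient_accumulation_steps = 8

# Effective batch size = per-device batch size * gradient accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 256, matching total_train_batch_size in the card

def lr_at(step, peak_lr=5e-4, warmup_steps=1000, total_steps=65250):
    """Linear warmup to peak_lr, then cosine decay to 0 (sketch).

    total_steps is an assumption derived from the results table.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(0))      # 0.0
print(lr_at(1000))   # 0.0005, peak at the end of warmup
print(lr_at(65250))  # decayed to (approximately) 0 at the final step
```

The accumulation trick explains how a per-device batch of 32 still yields the card's total batch of 256: gradients from 8 micro-batches are summed before each optimizer step.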
generation_config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_from_model_config": true,
-  "bos_token_id": 0,
-  "eos_token_id": 0,
-  "transformers_version": "4.44.0"
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "transformers_version": "4.45.0.dev0"
 }
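The new special-token ids can be sanity-checked by parsing the updated file with the standard `json` module (the JSON text below is copied from the diff above; `transformers` loads this same file into a `GenerationConfig` at generation time):

```python
import json

# Contents of generation_config.json after this commit (copied from the diff).
new_config = """
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.45.0.dev0"
}
"""

cfg = json.loads(new_config)
# The commit moves bos/eos off token id 0 (both were 0 before this change).
print(cfg["bos_token_id"], cfg["eos_token_id"])  # 1 2
```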