ironrock committed on
Commit b69f6d9
1 Parent(s): 8602a4d

Upload folder using huggingface_hub

Files changed (1): README.md (+39 −70)
README.md CHANGED
@@ -1,103 +1,72 @@
  ---
- language:
- - pt
- license: mit
  library_name: peft
  tags:
  - SFT
  - WeniGPT
  base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
  model-index:
- - name: Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT
    results: []
  ---

- # Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT

- This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), trained on the dataset Weni/wenigpt-agent-1.4.0 with the SFT trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
- Description: An experiment with SFT and a new tokenizer configuration for the Mixtral chat template.

  It achieves the following results on the evaluation set:
- {'eval_loss': 1.02373468875885, 'eval_runtime': 12.0111, 'eval_samples_per_second': 3.83, 'eval_steps_per_second': 0.999, 'epoch': 2.97}

- ## Intended uses & limitations
-
- This model has not been trained to avoid specific instructions.
-
- ## Training procedure
-
- Fine-tuning was done on mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt template:
-
- ```
- ---------------------
- System_prompt:
- Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
- {instructions_formatted}
-
- {context_statement}
-
- Lista de requisitos:
- - Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
- - Nunca traga informações do seu próprio conhecimento.
- - Repito, é crucial que você responda usando apenas informações do contexto.
- - Nunca mencione o contexto fornecido.
- - Nunca mencione a pergunta fornecida.
- - Gere a resposta mais útil possível para a pergunta usando informações do contexto acima.
- - Nunca elabore sobre o porquê e como você fez a tarefa, apenas responda.
-
- ---------------------
- Question:
- {question}
-
- ---------------------
- Response:
- {answer}
-
- ---------------------
- ```
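The template is in Portuguese: the system prompt assigns the agent a name, occupation, goal, and defining adjective, then a requirements list that restricts answers to the supplied context. The curly-brace fields look like plain Python `str.format` placeholders; below is a minimal sketch of filling an abbreviated copy of the template, with invented example values (the persona, context, and Q&A are illustrative only, not from the training data).

```python
# Minimal sketch: filling the (abbreviated) SFT prompt template with
# str.format(). The placeholder names come from the template above;
# every example value below is invented for illustration.
PROMPT = (
    "---------------------\n"
    "System_prompt:\n"
    "Agora você se chama {name}, você é {occupation} e seu objetivo é "
    "{chatbot_goal}. O adjetivo que mais define a sua personalidade é "
    "{adjective} e você se comporta da seguinte forma:\n"
    "{instructions_formatted}\n\n"
    "{context_statement}\n\n"
    "---------------------\n"
    "Question:\n"
    "{question}\n\n"
    "---------------------\n"
    "Response:\n"
    "{answer}\n\n"
    "---------------------"
)

example = PROMPT.format(
    name="Bia",
    occupation="uma atendente virtual",
    chatbot_goal="tirar dúvidas sobre os produtos da loja",
    adjective="cordial",
    instructions_formatted="- Responda sempre em português.",
    context_statement="Contexto: o produto X custa R$ 10,00.",
    question="Quanto custa o produto X?",
    answer="O produto X custa R$ 10,00.",
)
print(example)
```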
 
  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - per_device_train_batch_size: 1
- - per_device_eval_batch_size: 1
  - gradient_accumulation_steps: 4
- - num_gpus: 4
  - total_train_batch_size: 16
- - optimizer: AdamW
- - lr_scheduler_type: cosine
- - num_steps: 78
- - quantization_type: bitsandbytes
- - LoRA:
-   - bits: 4
-   - use_exllama: True
-   - device_map: auto
-   - use_cache: False
-   - lora_r: 16
-   - lora_alpha: 32
-   - lora_dropout: 0.05
-   - bias: none
-   - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
-   - task_type: CAUSAL_LM
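The card does not include the training code, but the quantization and LoRA settings above map directly onto standard bitsandbytes and PEFT configuration objects (the list mixes in model-loading options such as device_map and use_cache, which are omitted here). A minimal sketch, assuming the usual transformers/peft APIs; the compute dtype is an assumption, as it is not stated in the card:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit bitsandbytes quantization, matching "bits: 4" and
# "quantization_type: bitsandbytes" above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: dtype not stated in the card
)

# LoRA adapter configuration, matching the lora_* settings above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```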
 
  ### Training results

  ### Framework versions

  - PEFT 0.10.0
- - transformers==4.38.2
- - datasets==2.18.0
- - peft==0.10.0
- - safetensors==0.4.2
- - evaluate==0.4.1
- - bitsandbytes==0.43
- - huggingface_hub==0.22.2
- - seqeval==1.2.2
- - optimum==1.18.1
- - auto-gptq==0.7.1
- - gpustat==1.1.1
- - deepspeed==0.14.0
- - wandb==0.16.6
- - trl==0.8.1
- - accelerate==0.29.2
- - coloredlogs==15.0.1
- - traitlets==5.14.2
- - autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.4/autoawq-0.2.4+cu118-cp310-cp310-linux_x86_64.whl
-
- ### Hardware
- - Cloud provider: runpod.io
 
  ---
+ license: apache-2.0
  library_name: peft
  tags:
+ - trl
+ - sft
  - SFT
  - WeniGPT
+ - generated_from_trainer
  base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
+ datasets:
+ - generator
  model-index:
+ - name: WeniGPT-Agents-Mixtral-1.0.5-SFT
    results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

+ # WeniGPT-Agents-Mixtral-1.0.5-SFT

+ This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the generator dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 1.0237
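Since this repository holds a PEFT adapter on top of Mixtral-8x7B-Instruct rather than full weights, inference means loading the base model and attaching the adapter. A minimal sketch, assuming the adapter repo id is Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT (taken from the old model-index name), that the tuned tokenizer/chat template was uploaded with the adapter, and that 4-bit loading is wanted to fit the model in memory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
adapter_id = "Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT"  # assumed repo id

# Tokenizer from the adapter repo, assuming the experiment's chat-template
# changes were uploaded alongside the adapter.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # assumption
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

messages = [{"role": "user", "content": "Quanto custa o produto X?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```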
+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure
  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
  - gradient_accumulation_steps: 4
  - total_train_batch_size: 16
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.03
+ - training_steps: 78
+ - mixed_precision_training: Native AMP
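Note that the totals are consistent: total_train_batch_size = train_batch_size × num_devices × gradient_accumulation_steps = 1 × 4 × 4 = 16. These values correspond to standard transformers training arguments; a minimal sketch of the equivalent TrainingArguments, where the output directory is a placeholder and fp16 stands in for "Native AMP" (bf16 is equally plausible and not stated in the card):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the arguments implied by the list above.
# The listed optimizer (Adam, betas=(0.9,0.999), epsilon=1e-08) matches
# the transformers default, so no optimizer override is needed.
args = TrainingArguments(
    output_dir="WeniGPT-Agents-Mixtral-1.0.5-SFT",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_steps=78,
    fp16=True,  # "mixed_precision_training: Native AMP"; bf16 also possible
)
```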
 
  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 1.0228        | 1.9   | 50   | 1.0233          |

  ### Framework versions

  - PEFT 0.10.0
+ - Transformers 4.38.2
+ - PyTorch 2.1.0+cu118
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2