Upload folder using huggingface_hub

README.md CHANGED

@@ -1,72 +1,101 @@

---
license:
library_name:
tags:
- trl
- sft
- SFT
- WeniGPT
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
datasets:
- generator
model-index:
- name: WeniGPT-Agents-Mixtral-1.0.5-SFT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0228        | 1.9   | 50   | 1.0233          |

### Framework versions

---
license: mit
library_name: "trl"
tags:
- SFT
- WeniGPT
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT
  results: []
language: ['pt']
---

# Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT

This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the dataset Weni/wenigpt-agent-1.4.0 with the SFT trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).

Description: Experiment with SFT and a new tokenizer configuration for the Mixtral chat template.

It achieves the following results on the evaluation set:
{'eval_loss': 1.02373468875885, 'eval_runtime': 12.0105, 'eval_samples_per_second': 3.83, 'eval_steps_per_second': 0.999, 'epoch': 2.97}

## Intended uses & limitations

This model has not been trained to avoid specific instructions.
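
Below is a minimal inference sketch, not part of the original card. It assumes the repository hosts weights that load directly with `transformers`; if it only contains a LoRA adapter, attach it to the base model with `peft` instead. The 4-bit settings mirror the bitsandbytes quantization listed under "Training hyperparameters", and the example message is illustrative.

```python
# Minimal inference sketch (illustrative, not from the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT"

# 4-bit loading mirrors the bitsandbytes settings listed in this card.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# The card mentions a new chat-template configuration; apply_chat_template uses
# whatever template ships with this tokenizer. The message content is illustrative.
messages = [{"role": "user", "content": "Olá, você pode me ajudar com meu pedido?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```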

## Training procedure

Fine-tuning was done on the model mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt:

```
---------------------
System_prompt:
Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
{instructions_formatted}

{context_statement}

Lista de requisitos:
- Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
- Nunca traga informações do seu próprio conhecimento.
- Repito é crucial que você responda usando apenas informações do contexto.
- Nunca mencione o contexto fornecido.
- Nunca mencione a pergunta fornecida.
- Gere a resposta mais útil possível para a pergunta usando informações do conexto acima.
- Nunca elabore sobre o porque e como você fez a tarefa, apenas responda.


---------------------
Question:
{question}


---------------------
Response:
{answer}


---------------------

```
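
For illustration only, the sketch below fills the placeholders of the template above for a single example. The `build_prompt` helper and the example values are hypothetical; only the placeholder names come from the prompt, and the template string is abridged (the requirements list is omitted).

```python
# Illustrative only: substitutes the placeholders shown in the prompt template above.
# The helper name and example values are hypothetical; the field names come from the card.
PROMPT_TEMPLATE = """---------------------
System_prompt:
Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
{instructions_formatted}

{context_statement}

---------------------
Question:
{question}

---------------------
Response:
{answer}"""


def build_prompt(**fields: str) -> str:
    """Hypothetical helper: fill the template fields for one example."""
    return PROMPT_TEMPLATE.format(**fields)


example = build_prompt(
    name="Bia",
    occupation="atendente virtual",
    chatbot_goal="ajudar clientes com dúvidas sobre pedidos",
    adjective="atenciosa",
    instructions_formatted="- Responda sempre em português.",
    context_statement="Contexto: o pedido 123 foi enviado ontem.",
    question="Onde está meu pedido?",
    answer="Seu pedido 123 foi enviado ontem e está a caminho.",
)
print(example)
```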

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0002
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 4
- num_gpus: 4
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 78
- quantization_type: bitsandbytes
- LoRA:
  - bits: 4
  - use_exllama: True
  - device_map: auto
  - use_cache: False
  - lora_r: 16
  - lora_alpha: 32
  - lora_dropout: 0.05
  - bias: none
  - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
  - task_type: CAUSAL_LM
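
The original training script is not included in this card. The sketch below shows how the values above could map onto `peft`, `bitsandbytes` and the `trl` SFT trainer at the pinned versions; `output_dir`, the nf4 quant type, `max_seq_length` and the dataset column name are assumptions, and the dataset named in the card may require access.

```python
# Sketch of how the listed hyperparameters could map onto trl 0.8.1 / peft / bitsandbytes.
# NOT the original training script; assumptions are marked in comments.
# Pins from "Framework versions": pip install trl==0.8.1 peft==0.10.0 bitsandbytes==0.43 transformers==4.38.2
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

bnb_config = BitsAndBytesConfig(          # quantization_type: bitsandbytes, bits: 4
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # assumption, not stated in the card
    bnb_4bit_compute_dtype=torch.bfloat16,
)

peft_config = LoraConfig(                 # values from the LoRA entry above
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="wenigpt-mixtral-sft",     # assumption
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,        # 4 GPUs x 1 x 4 = total_train_batch_size 16
    lr_scheduler_type="cosine",
    max_steps=78,                         # num_steps
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
dataset = load_dataset("Weni/wenigpt-agent-1.4.0")  # dataset named in the card; may require access

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],       # assumes a 'train' split
    tokenizer=tokenizer,
    peft_config=peft_config,
    dataset_text_field="text",            # assumption about the column name
    max_seq_length=2048,                  # assumption
)
trainer.train()
```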

### Training results

### Framework versions

- transformers==4.38.2
- datasets==2.18.0
- peft==0.10.0
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.43
- huggingface_hub==0.22.2
- seqeval==1.2.2
- optimum==1.18.1
- auto-gptq==0.7.1
- gpustat==1.1.1
- deepspeed==0.14.0
- wandb==0.16.6
- trl==0.8.1
- accelerate==0.29.2
- coloredlogs==15.0.1
- traitlets==5.14.2
- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.4/autoawq-0.2.4+cu118-cp310-cp310-linux_x86_64.whl
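
As an optional aid, not part of the original card, the snippet below checks a few of the pins above against the local environment.

```python
# Optional sanity check that the local environment matches the pins listed above
# (only a subset of the list is checked here).
from importlib.metadata import PackageNotFoundError, version

pins = {
    "transformers": "4.38.2",
    "trl": "0.8.1",
    "peft": "0.10.0",
    "bitsandbytes": "0.43",
    "accelerate": "0.29.2",
}

for package, pinned in pins.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        installed = "not installed"
    marker = "ok" if installed == pinned else "differs"
    print(f"{package}: installed {installed}, card pins {pinned} ({marker})")
```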

### Hardware

- Cloud provider: runpod.io