vibhorag101 committed
Commit 806cff1
Parent(s): a9c4857
Update README.md

README.md CHANGED
<!-- Provide a quick summary of what the model is/does. -->

- This model is a finetune of the **llama-2-7b-chat-hf** model on a therapy dataset.
- The model aims to provide basic therapy to users and improve their mental health until they seek professional help.
- The model has been adjusted to encourage cheerful responses to the user. The system prompt is given below.

## Model Details

- 48 Core Intel Xeon
- 128 GB RAM

### Model Hyperparameters

- This [training script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/finetuneScriptLLaMA-2.ipynb) was used for the finetuning.
- The ShareGPT-format dataset was converted to the llama-2 training format using this [script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/llamaDataMaker.ipynb).
- num_train_epochs = 3
- per_device_train_batch_size = 2
- per_device_eval_batch_size = 2
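The ShareGPT-to-llama-2 conversion mentioned above can be sketched as follows. This is a minimal illustration, not the linked `llamaDataMaker.ipynb`; the `SYSTEM_PROMPT` string and the function name are hypothetical placeholders, and the llama-2 chat template is the standard `[INST]`/`<<SYS>>` format.

```python
# Minimal sketch of converting a ShareGPT-style conversation into a
# llama-2 chat training string. Placeholder system prompt, NOT the
# model's actual one.
SYSTEM_PROMPT = "You are a helpful and cheerful therapy assistant."

def sharegpt_to_llama2(conversation, system_prompt=SYSTEM_PROMPT):
    """conversation: list of {"from": "human"|"gpt", "value": str} turns,
    alternating human/gpt. Returns one llama-2 formatted training string."""
    text = ""
    # Walk the turns in human/gpt pairs.
    for i in range(0, len(conversation) - 1, 2):
        user = conversation[i]["value"]
        assistant = conversation[i + 1]["value"]
        if i == 0:
            # The system prompt is folded into the first user turn.
            user = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user}"
        text += f"<s>[INST] {user} [/INST] {assistant} </s>"
    return text

example = [
    {"from": "human", "value": "I feel anxious lately."},
    {"from": "gpt", "value": "I'm sorry to hear that. Can you tell me more?"},
]
print(sharegpt_to_llama2(example))
```

Multi-turn conversations simply concatenate one `<s>[INST] ... [/INST] ... </s>` segment per human/gpt pair.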
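The three hyperparameters above map onto a Hugging Face `TrainingArguments` configuration roughly like this. Only the three commented values come from this card; the `output_dir` name is a placeholder and everything else is left at library defaults, so treat this as an assumption about the setup, not the actual training script.

```python
from transformers import TrainingArguments

# Hypothetical config fragment; only the three annotated values are
# taken from the model card, the output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="llama-2-7b-chat-therapy",  # placeholder
    num_train_epochs=3,                    # from the card
    per_device_train_batch_size=2,         # from the card
    per_device_eval_batch_size=2,          # from the card
)
```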