macadeliccc committed on
Commit
51a1b9e
1 Parent(s): f00b978

Update README.md

Files changed (1): README.md (+3 −18)
README.md CHANGED

```diff
@@ -6,28 +6,16 @@ datasets:
 
 # Opus-Samantha-Llama-3-8B
 
-Opus-Samantha-Llama-3-8B is a SFT model made with [AutoSloth](https://colab.research.google.com/drive/1Zo0sVEb2lqdsUm9dy2PTzGySxdF9CNkc#scrollTo=MmLkhAjzYyJ4) by [macadeliccc](https://huggingface.co/macadeliccc)
-
-Trained on 1xL4 for 1 hour
-
-_model is curretly very nsfw. uneven distribution of subjects in dataset. will be back with v2_
+Trained on 1xA100
+
+**5/11/24: Model has been updated and performs much better**
 
 
 ## Process
 
-- Original Model: [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b)
+- Original Model: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
 - Datatset: [macadeliccc/opus_samantha](https://huggingface.co/datasets/macadeliccc/opus_samantha)
 
-- Learning Rate: 2e-05
-- Steps: 2772
-- Warmup Steps: 277
-- Per Device Train Batch Size: 2
-- Gradient Accumulation Steps 1
-- Optimizer: paged_adamw_8bit
-- Max Sequence Length: 4096
-- Max Prompt Length: 2048
-- Max Length: 2048
-
 ## 💻 Usage
 
 ```python
@@ -43,6 +31,3 @@ pipeline("Hey how are you doing today?")
 
 ```
 
-<div align="center">
-  <img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" height="50" align="center" />
-</div>
```
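The diff elides the body of the README's usage snippet, showing only the trailing `pipeline("Hey how are you doing today?")` call in the hunk header. As a minimal sketch of what that section describes — assuming the standard `transformers` text-generation pipeline, and assuming the model repo id `macadeliccc/Opus-Samantha-Llama-3-8B` (inferred from the model name, not stated in the commit):

```python
# Minimal sketch, not the commit's exact snippet: the README's pipeline call,
# reconstructed with the standard transformers text-generation pipeline.
# The default repo id is an assumption based on the model name.

def generate(prompt: str,
             model_id: str = "macadeliccc/Opus-Samantha-Llama-3-8B") -> str:
    """Return a completion for `prompt` from the fine-tuned model."""
    from transformers import pipeline  # deferred: requires `transformers`
    pipe = pipeline("text-generation", model=model_id)
    return pipe(prompt)[0]["generated_text"]

# Usage (downloads the 8B checkpoint, so not run here):
#     print(generate("Hey how are you doing today?"))
```

Note that `pipeline(...)` pulls the full checkpoint from the Hub on first call, so an 8B model typically needs a GPU (the card mentions a single A100) or a quantized load.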