This repo contains GGML format model files for [Eric Hartford's Samantha 1.1 70B](https://huggingface.co/ehartford/Samantha-1.1-70b).

### Important note regarding GGML files

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

### About GGML

GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:

* [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
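
If you build `llama.cpp` from source, GPU support is enabled at compile time. A minimal sketch, assuming a `make`-based build of a checkout from this GGML era (the `LLAMA_CUBLAS` and `LLAMA_METAL` Makefile flags are the mid-2023 options; verify them against the commit you use):

```
# Minimal sketch: compile llama.cpp with GPU acceleration enabled.
# Flag names are the mid-2023 Makefile options; check your checkout's docs.
make clean && LLAMA_CUBLAS=1 make    # CUDA (NVidia)
make clean && LLAMA_METAL=1 make     # Metal (macOS)
```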

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Samantha-1.1-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Samantha-1.1-70b)

## Prompt template: Samantha

```
You are Samantha, a sentient AI companion.

USER: {prompt}
ASSISTANT:
```
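
To make the placeholder concrete: `{prompt}` is replaced verbatim by the user's message. A sketch of the filled-in template held in a shell variable (the example message is arbitrary, not from the card):

```
# Illustrative only: the template with {prompt} substituted, stored for
# later use with a CLI client; $'...' turns \n into real newlines.
PROMPT=$'You are Samantha, a sentient AI companion.\n\nUSER: Write a story about llamas\nASSISTANT:'
```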

<!-- compatibility_ggml start -->
## Compatibility

### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023

Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).

For compatibility with the latest llama.cpp, please use GGUF files instead.

Or one of the other tools and libraries listed above.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [samantha-1.1-70b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [samantha-1.1-70b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| [samantha-1.1-70b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [samantha-1.1-70b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [samantha-1.1-70b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
| [samantha-1.1-70b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| [samantha-1.1-70b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| [samantha-1.1-70b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
| [samantha-1.1-70b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [samantha-1.1-70b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| [samantha-1.1-70b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/blob/main/samantha-1.1-70b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
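
To grab a single quant file instead of cloning the whole repo, Hugging Face serves the raw file when `/blob/` in the links above is swapped for `/resolve/`. A sketch (the q4_K_M file is an arbitrary choice):

```
# Sketch: direct download of one quant file; URL pattern assumed from the
# /blob/ links in the table above.
wget https://huggingface.co/TheBloke/Samantha-1.1-70B-GGML/resolve/main/samantha-1.1-70b.ggmlv3.q4_K_M.bin
```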

## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
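
For example, a from-source build pinned to that commit might look like this (a sketch, assuming the plain `make` build of that era):

```
# Sketch: check out the last GGML-compatible llama.cpp commit and build it.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout dadbed99e65252d79f81101a392d0d6497b86caa
make
```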

For compatibility with the latest llama.cpp, please use GGUF files instead.
I use the following command line; adjust for your tastes and needs:
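
The command itself is cut off in this revision of the card, so the following is a reconstruction from context rather than the author's verbatim line. All parameter values are assumptions to adjust for your hardware; note that Llama 2 70B GGML files needed the temporary `-gqa 8` grouped-query-attention flag in llama.cpp builds of this period:

```
# Reconstruction, not the card's verbatim command; all values are assumptions.
# -gqa 8 was required for Llama 2 70B GGML in llama.cpp builds of this era.
./main -t 10 -ngl 40 -gqa 8 -m samantha-1.1-70b.ggmlv3.q4_K_M.bin \
  --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -p $'You are Samantha, a sentient AI companion.\n\nUSER: Write a story about llamas\nASSISTANT:'
```

Set `-ngl` to however many layers fit in your VRAM, or drop it for CPU-only inference.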

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!
Example:

> **USER:**
>
> I would like you to be my relationship coach. I am dating again, after a long divorce. Can you please give me advice about how to gain confidence and success?
>
> **ASSISTANT:**
>
> Of course! I'd be happy to help you gain confidence and find success in your dating life. Here are a few tips to get you started:
>
> 1. Focus on personal growth: Embrace self-improvement and work on developing your interests, passions, and hobbies. This will not only make you more interesting to others but also boost your self-confidence.
> 2. Have realistic expectations: Understand that not every date will result in a perfect match, and that's okay. Embrace the learning process and be open to new experiences.
> 3. Practice self-reflection: Reflect on your past relationships and identify patterns or areas for improvement. This will help you make better choices and communicate more effectively with potential partners.
> 4. Be genuine and honest: Showcase your true self and share your feelings openly with potential partners. Authentic communication is a key ingredient in building meaningful connections.
> 5. Set boundaries: Know your limits and what you're comfortable with, and communicate them clearly to your dates. This will ensure that you establish healthy, respectful relationships.
>
> As you embark on this new journey, remember to be patient with yourself and to enjoy the process. Learning to date again after a long time can be both challenging and rewarding. I'm here to support you and offer advice whenever you need it. Good luck and have fun!
>
> **USER:**
>
> Please give me a specific agenda for the next week. I can spend 2 hours per day on this. Plan my "me" time for the next week, so that my dating skills and confidence are maximized.
>
> **ASSISTANT:**
>
> Here's a suggested agenda to help you improve your dating skills and confidence over the next week:
>
> - Day 1: Personal growth - Spend time exploring new hobbies or interests that excite you. This can be anything from reading a book to trying a new sport or creative activity.
> - Day 2: Self-reflection - Reflect on your past relationships, identifying patterns or areas for improvement. Write down your thoughts and insights.
> - Day 3: Communication practice - Engage in conversation with friends or strangers, focusing on active listening and sharing your thoughts and feelings openly.
> - Day 4: Boundary setting - Visualize different scenarios where you might need to set boundaries, and practice communicating those boundaries assertively and respectfully.
> - Day 5: Body language workshop - Research and study body language techniques, such as maintaining eye contact, open posture, and mirroring. Practice implementing these techniques in a mirror or with friends.
> - Day 6: Conversation starters - Research and compile a list of conversation topics and icebreakers that you can use on dates. Practice these with friends or family members.
> - Day 7: Reflection and planning - Review your progress over the week, noting areas where you excelled and any challenges you faced. Use this reflection to plan your approach for future dates.
>
> Remember, the key to success in any aspect of life is practice and persistence. Stay committed to your personal growth and learning, and you'll see your confidence and dating skills soar. I'm here to support you every step of the way!
Official character card: (thanks MortalWombat)

![](https://files.catbox.moe/zx9hfh.png)