---
tags:
- llama
- alpaca
- cot
- vicuna
- uncensored
- merge
- mix
---

## 30B-Lazarus

## Composition:
[] = applied as LoRA to a composite model | () = combined as composite models

[SuperCOT([gpt4xalpaca(manticorechatpygalpha+vicunaunlocked)]+[StoryV2(kaiokendev-SuperHOT-LoRA-prototype30b-8192)])]
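
To make the bracket notation concrete, the "( )" step can be read as a plain weight-space merge of two same-architecture checkpoints. Below is a minimal sketch in Python, assuming an equal 50/50 average and placeholder paths; neither the ratio nor the paths reflect the exact recipe used here.

```python
# Sketch of the "( )" step: combine two same-architecture models by
# averaging their weights. Paths and the 50/50 ratio are hypothetical
# placeholders, not the actual recipe behind 30B-Lazarus.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("path/to/model-a", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("path/to/model-b", torch_dtype=torch.float16)

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Equal-weight average of every parameter tensor.
merged_state = {name: (state_a[name] + state_b[name]) / 2 for name in state_a}

model_a.load_state_dict(merged_state)
model_a.save_pretrained("path/to/composite-model")
```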

This model is the result of experimentally applying LoRAs to language models and model merges that are not the base HuggingFace-format LLaMA model those LoRAs were intended for.
The desired outcome is to additively apply the desired features of each component without paradoxically watering down the model's effective behavior.
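
As a hedged sketch of the "[ ]" step, a LoRA can be applied to such a composite and folded into its weights with the peft library; whether an adapter behaves well on a merge it was never trained against is exactly the experiment described above. The paths below are placeholders (SuperCOT-LoRA from the credits is an example of the kind of adapter involved).

```python
# Sketch of the "[ ]" step: apply a LoRA adapter on top of a composite
# model, then merge it into the base weights. Paths are placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/composite-model", torch_dtype=torch.float16)
lora = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the low-rank updates into the base weights, yielding a plain
# HuggingFace-format checkpoint that can itself be merged again.
merged = lora.merge_and_unload()
merged.save_pretrained("path/to/composite-plus-lora")
```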

Potential limitations: LoRAs applied on top of one another may compete with or dilute each other.

Subjective results are very promising; further experimentation and objective benchmarks are required.

## Instruct and Setup Suggestions:

The Alpaca instruct format is primary; the Vicuna instruct format may also work.
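
For reference, the standard community Alpaca instruct template looks like this (this is the general Alpaca format, not something unique to this model):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
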
If using KoboldAI or Text-Generation-WebUI, we recommend switching between the Godlike and Storywriter presets and adjusting the output length plus the instructions kept in memory.
Other presets, as well as custom settings, can yield highly different results; Temperature in particular matters.
If poking it with a stick doesn't work, try poking harder.
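
If you prefer to experiment outside those front-ends, here is a minimal sampling sketch with the transformers library; the repo id, Temperature, and top_p values are illustrative assumptions, not the Godlike or Storywriter preset values.

```python
# Minimal generation sketch with Hugging Face transformers. The model id
# and sampling values are illustrative assumptions, not preset values.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CalderaAI/30B-Lazarus"  # assumed repo id for this merge
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nDescribe a quiet harbor town at dusk.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,  # Temperature strongly changes this merge's behavior
    top_p=0.9,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```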

## Credits for Language Models and LoRAs Used:

manticore-30b-chat-pyg-alpha [Epoch 0.4] by openaccess-ai-collective
https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha

SuperCOT-LoRA [30B] by kaiokendev
https://huggingface.co/kaiokendev/SuperCOT-LoRA

Storytelling-LLaMa-LoRA [30B, Version 2] by GamerUntouch
https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs

SuperHOT Prototype [30B, 8K context] by kaiokendev
https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype

GPT4-Alpaca-LoRA [30B] by chansung
https://huggingface.co/chansung/gpt4-alpaca-lora-30b

VicUnLocked LoRA [30B, Checkpoint 46080] by Neko-Institute-of-Science
https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA

Also thanks to Meta for LLaMA.

Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble.
Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.