brucethemoose
committed on
Commit c54f3bf
Parent(s):
6f008c2
Update README.md
README.md
CHANGED
@@ -27,7 +27,7 @@ Disappointed with some quirks of my previous kitchen sink merges (like token/ins
 
 - [adamo1139/yi-34b-200k-rawrr-dpo-2](https://huggingface.co/adamo1139/yi-34b-200k-rawrr-dpo-2) the base for the limarp lora, this is base Yi gently finetuned to discourage refusals.
 
-- [
+- [migtissera/Tess-M-Creative-v1.0](https://huggingface.co/migtissera/Tess-M-Creative-v1.0) and [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B) are both "undertrained" Yi models. I find they excel at raw completion performance (like long novel continuations) while still retaining some Vicuna instruct ability. This may be why some still prefer the original Tess 1.0/Capybara merge.
 
 I consider this a more "focused" merge than previous ones. I will investigate other models (perhaps chatML models?) for a more "factual assistant" focused merge, as well as a coding-focused merge if I can't find one to suit my needs.
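For readers unfamiliar with how merges like the one described above are typically assembled: Yi-34B community merges are commonly built with mergekit, where each component model and the base are declared in a YAML config. The sketch below is a hypothetical illustration only, not the author's actual recipe; the `merge_method`, `weight`, and `density` values are assumptions chosen for the example.

```yaml
# Hypothetical mergekit config sketch for a merge of the models named above.
# Weights, densities, and the choice of dare_ties are illustrative assumptions,
# not the author's published configuration.
models:
  - model: migtissera/Tess-M-Creative-v1.0
    parameters:
      weight: 0.5
      density: 0.6
  - model: NousResearch/Nous-Capybara-34B
    parameters:
      weight: 0.5
      density: 0.6
merge_method: dare_ties
base_model: adamo1139/yi-34b-200k-rawrr-dpo-2
dtype: bfloat16
```

A config in this shape would be run with the `mergekit-yaml` CLI to produce the merged weights; the base model anchors the merge while the per-model `weight`/`density` parameters control each contributor's influence.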