limin(gate) committed · Commit 75a8b45 · Parent: 8d31526
Update README.md

README.md (changed):
base_model:
- eren23/dpo-binarized-NeutrixOmnibe-7B
license: apache-2.0
---
# Introducing Omningotex-7b: The World's Most Accurate 7B LLM

Today, I'm excited to share the creation of a groundbreaking language model, "liminerity/Omningotex-7b-slerp." This model has achieved an impressive accuracy rate of 76.33%, making it the most accurate 7B LLM in the world.

The journey to create Omningotex-7b-slerp began with an experimental process called "merging." I started with a model named "ingot-7b-slerp," created by merging two other LLMs, "blurred-beagle-7b-slerp" (by myself, liminerity) and "Macaroni-7b-Tied" (by andrijdavid), eight times in total.

After the successful creation of ingot-7b-slerp, I proceeded to merge it with another model, "dpo-binarized-NeuralTrix-7B" by eren23, using gradient slerp. The resulting model, "binarized-ingotrix-slerp-7b," achieved an accuracy rate of 76.04%.

To further enhance the model's performance, I decided to merge "binarized-ingotrix-slerp-7b" with "dpo-binarized-NeutrixOmnibe-7B" by eren23 once again. The resulting model, "Omningotex-7b," is now the most accurate 7B LLM available.

This breakthrough in LLM accuracy was achieved through a combination of careful experimentation and a deep understanding of the underlying algorithms and techniques. I believe that Omningotex-7b-slerp's success demonstrates the potential for further advancements in the field of natural language processing and artificial intelligence.

I look forward to sharing more updates and insights as I continue to explore the possibilities of LLMs and push the boundaries of what is possible in the world of AI. Stay tuned for more exciting developments in the future!

A huge thank you to Maxime Labonne for his LazyMergekit Colab project. Using it helped me gain a firmer grasp of the concepts at play and led to the creation of this model. I'm sure it won't be number 1 for long, which excites me even more!

Next, I set out to learn how to fine-tune with the resources I have available.

My next overall goal is to find a way to produce a smaller model with high accuracy, either by merging down with fewer layers after each merge (possibly with fine-tuning between merges), or by merging larger, more accurate models into a smaller base while maintaining accuracy and performance. Every version of "TinyMistral" I come across seems to be bricked, in the sense that it spits out nonsense. Thank you for your time if you read this all the way.

# Omningotex-7B-slerp

Omningotex-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/binarized-ingotrix-slerp-7b](https://huggingface.co/liminerity/binarized-ingotrix-slerp-7b)