Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# Hermes-2-Theta-L3-Euryale-Ties-0.8-70B - GGUF
- Model creator: https://huggingface.co/juvi21/
- Original model: https://huggingface.co/juvi21/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q2_K.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q2_K.gguf) | Q2_K | 24.56GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K.gguf) | Q3_K | 31.91GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q4_0.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/blob/main/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q4_0.gguf) | Q4_0 | 37.22GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q4_K.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q4_K | 39.6GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q4_1.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q4_1 | 41.27GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q5_0.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q5_0 | 45.32GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q5_K.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q5_K | 46.52GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q5_1.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q5_1 | 49.36GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q6_K.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q6_K | 53.91GB |
| [Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q8_0.gguf](https://huggingface.co/RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf/tree/main/) | Q8_0 | 69.83GB |
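To try one of these quants locally, download a single file and load it with any GGUF-compatible runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (both assumed installed via pip; any other llama.cpp frontend works the same way). The repo id and filename are taken from the table above; the context size and GPU-offload settings are placeholder values you should adjust to your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (names taken from the table above).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/juvi21_-_Hermes-2-Theta-L3-Euryale-Ties-0.8-70B-gguf",
    filename="Hermes-2-Theta-L3-Euryale-Ties-0.8-70B.Q4_0.gguf",
)

# Load it with llama-cpp-python; parameters here are illustrative, not tuned.
llm = Llama(
    model_path=gguf_path,
    n_ctx=8192,        # context window; lower this if you run out of memory
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with GPU support
)

out = llm("Write a short scene description.", max_tokens=128)
print(out["choices"][0]["text"])
```

Note that a 70B model at Q4_0 still needs roughly 40 GB of RAM/VRAM to load, so pick a quant from the table that fits your machine.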
Original model description:
---
base_model:
- Sao10K/L3-70B-Euryale-v2.1
- NousResearch/Hermes-2-Theta-Llama-3-70B
library_name: transformers
tags:
- mergekit
- merge
---
# Hermes-2-Theta-L3-Euryale-Ties-0.8-70B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I wanted enhanced roleplay performance, so I merged two of my favorite models. I may release more merges like this with different methods and weightings. At first glance, this config seemed to be the best mix of both worlds.
I will share further impressions as I gather more experience with it.
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B) as a base.
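For intuition, here is a toy, self-contained sketch of the TIES procedure on flat tensors: trim each task vector to its largest entries (the `density` fraction), elect a sign per parameter, and merge only the deltas that agree with it. This only illustrates the idea from the paper with random stand-in tensors; it is not mergekit's actual implementation.

```python
import torch

def ties_merge(base, finetuned, weights, density=0.7, normalize=True):
    """TIES-style merge of flat parameter tensors (toy illustration)."""
    weights = torch.tensor(weights, dtype=base.dtype)
    deltas = torch.stack([ft - base for ft in finetuned])  # one task vector per model

    # 1. Trim: keep only the top `density` fraction of each task vector by magnitude.
    k = max(1, int(density * base.numel()))
    for i in range(deltas.shape[0]):
        thresh = deltas[i].abs().topk(k).values.min()
        deltas[i] = torch.where(deltas[i].abs() >= thresh, deltas[i],
                                torch.zeros_like(deltas[i]))

    # 2. Elect a sign per parameter from the weighted sum of trimmed deltas.
    weighted = deltas * weights.view(-1, 1)
    elected = torch.sign(weighted.sum(dim=0))

    # 3. Disjoint merge: only deltas that agree with the elected sign contribute.
    agree = (torch.sign(deltas) == elected) & (deltas != 0)
    merged = (weighted * agree).sum(dim=0)
    if normalize:
        # Rescale by the total weight that actually contributed to each parameter.
        total_w = (weights.view(-1, 1) * agree).sum(dim=0).clamp(min=1e-8)
        merged = merged / total_w

    return base + merged

# Tiny demo with random vectors standing in for real 70B weight tensors.
torch.manual_seed(0)
base = torch.randn(10)
hermes_like = base + 0.1 * torch.randn(10)   # stand-in for Hermes-2-Theta
euryale_like = base + 0.1 * torch.randn(10)  # stand-in for Euryale v2.1
print(ties_merge(base, [hermes_like, euryale_like], weights=[0.8, 0.2], density=0.7))
```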
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Hermes-2-Theta-Llama-3-70B
    parameters:
      weight: 0.8
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      weight: 0.2
merge_method: ties
base_model: NousResearch/Hermes-2-Theta-Llama-3-70B
parameters:
  density: 0.7
  normalize: true
dtype: float16
tokenizer_source: base
```
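To reproduce the merge, a config like this is normally passed to mergekit's `mergekit-yaml` entry point (roughly `mergekit-yaml config.yaml ./output-dir`; check the mergekit README for the exact options your version supports). As I understand mergekit's TIES settings: `weight` sets each model's contribution (0.8 Hermes, 0.2 Euryale), `density` is the fraction of each task vector kept after trimming, `normalize: true` rescales the summed deltas by the total contributing weight, and `tokenizer_source: base` keeps the Hermes-2-Theta tokenizer.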