---
tags:
- gguf
- quantized
- roleplay
- imatrix
- mistral
inference: false
---

This repository hosts GGUF-Imatrix quantizations for [ResplendentAI/Datura_7B](https://huggingface.co/ResplendentAI/Datura_7B).
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
```python
quantization_options = [
    "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"
]
```
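The per-quant invocations can be sketched as a small command builder, assuming llama.cpp's `quantize` tool and its `--imatrix` flag; the file names used here are illustrative, not the repository's actual artifacts:

```python
# Sketch: build one llama.cpp `quantize` invocation per target type.
# File names are illustrative; flag names follow llama.cpp's quantize tool.
quantization_options = ["Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]

def quantize_commands(f16_gguf: str, imatrix_path: str, options: list[str]) -> list[list[str]]:
    """Return one command (as an argv list) per requested quant type."""
    return [
        [
            "./quantize",
            "--imatrix", imatrix_path,   # importance-matrix data from the F16 pass
            f16_gguf,                    # input: full-precision GGUF
            f16_gguf.replace("F16", quant),  # output: e.g. Datura_7B-Q4_K_M.gguf
            quant,
        ]
        for quant in options
    ]

commands = quantize_commands("Datura_7B-F16.gguf", "imatrix.dat", quantization_options)
for cmd in commands:
    print(" ".join(cmd))
```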

**This is experimental.**

The imatrix data was generated from kalomaze's `groups_merged.txt` with added roleplay chats; you can find the file [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).

The goal is to measure the (hopefully positive) impact of this data on formatting consistency in roleplay chat scenarios.

**Original model information:**

# Datura 7B

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/BDijZ3YGo9ARto4FOrDoj.jpeg)

Flora with a bit of toxicity.

I've been making progress with my collection of tools, so I thought maybe I'd try something a little more toxic for this space. This should make for a more receptive model with fewer refusals.