---
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- mistral
- roleplay
- creative-writing
base_model:
- nbeerbower/mistral-nemo-bophades-12B
- anthracite-org/magnum-v2-12b
- Sao10K/MN-12B-Lyra-v3
- Gryphe/Pantheon-RP-1.6-12b-Nemo
language:
- en
---
Quanting the StarDust V2 model. According to the model card, this is a slightly different tune from the original, which you can see in the Usecase section of the original model card below. I highly recommend giving it a read-over to decide whether you still want to try it before downloading. Either way, I intend to give it a shot.
<br>

[This is the EXL2 6bpw version of this model. Find the original model here.](https://huggingface.co/Luni/StarDust-12b-v2)
<br>
[Find the 8bpw version here.](https://huggingface.co/Statuo/Stardust-V2-EXL2-8bpw)
<br>
[Find the 4bpw version here.](https://huggingface.co/Statuo/Stardust-V2-EXL2-4bpw)
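
If you would rather run this quant from Python than through a frontend like TabbyAPI or text-generation-webui, a minimal loading sketch with exllamav2 might look like the following. This is only a sketch: it assumes a reasonably recent exllamav2 release and that the quant has been downloaded to a local folder; the path and generation settings are placeholders.

```py
# Minimal sketch: load the EXL2 quant with exllamav2 and generate a short reply.
# Assumes exllamav2 is installed and the repo files are downloaded locally.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./Stardust-V2-EXL2-6bpw"  # placeholder: local path to the downloaded quant

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)    # allocate the cache lazily, then autosplit
model.load_autosplit(cache, progress=True)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# ChatML-formatted prompt (see the Prompting section below)
prompt = "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate(prompt=prompt, max_new_tokens=200, add_bos=True))
```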


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303fa71fc783bfc7443e7ae/c3ddWBoz-lINEykUDCoXy.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303fa71fc783bfc7443e7ae/hOpgDxJS2sDO7HzuC9e18.png)


# StarDust-12b-v2

## Quants

- GGUF: [mradermacher/StarDust-12b-v2-GGUF](https://huggingface.co/mradermacher/StarDust-12b-v2-GGUF)
- weighted/imatrix GGUF: [mradermacher/StarDust-12b-v2-i1-GGUF](https://huggingface.co/mradermacher/StarDust-12b-v2-i1-GGUF/tree/main)
- exl2: [lucyknada/Luni_StarDust-12b-v2-exl2](https://huggingface.co/lucyknada/Luni_StarDust-12b-v2-exl2)

## Description | Usecase

- In my opinion, the result of this merge is more vibrant and less generic Sonnet-inspired prose; it is able to be gentle or harsh where asked.
- The v2 uses the non-KTO Magnum, which tends to show fewer "claudeisms" (which can make the story feel rather repetitive).
- Note on non-KTO: opinions are strongly split between people who prefer and people who dislike the KTO tune. To make things easier, you can still use [Luni/StarDust-12b-v1](https://huggingface.co/Luni/StarDust-12b-v1), which uses the KTO version.
- In early testing, users have reported a much better experience in longer roleplays and an ability to add a creative touch to the stable experience.

Just like with v1:
- This model is intended to be used as a Role-playing model.
- Its direct conversational output is... well, it's simply not made for it.
- To expand on the conversational point: the model is designed for roleplay; direct instruction or general-purpose use is NOT recommended.

## Initial Feedback

- Initial feedback suggests the model is a solid "go-to" choice for creative story-writing.
- The prose has been described as "amazing", with many users making it their default model.


## Prompting

### ChatML has proven to be the BEST choice.

Both Mistral and ChatML should work, though I had better results with ChatML:
ChatML Example:
```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
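
Rather than writing the ChatML tags by hand, you can also let the tokenizer build the prompt. A minimal sketch, assuming the tokenizer config in the original repo ships a ChatML chat template:

```py
# Minimal sketch: build the same ChatML prompt with the Hugging Face tokenizer.
# Assumes the repo's tokenizer_config.json defines a ChatML chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Luni/StarDust-12b-v2")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the opening assistant tag so the model continues from there
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```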



## Merge Details
### Merge Method

This model was merged with the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3) as the base.

### Models Merged

The following models were included in the merge:
* [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B)
* [anthracite-org/magnum-v2-12b](https://huggingface.co/anthracite-org/magnum-v2-12b)
* [Gryphe/Pantheon-RP-1.6-12b-Nemo](https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo)
* [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3)
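
The exact mergekit configuration is not reproduced here, but a DARE TIES merge of these models over the Lyra-v3 base would be expressed roughly like the sketch below; the `density` and `weight` values are illustrative placeholders, not the parameters actually used.

```yaml
# Illustrative mergekit config for a DARE TIES merge on the Lyra-v3 base.
# density/weight values are placeholders, NOT the published merge parameters.
merge_method: dare_ties
base_model: Sao10K/MN-12B-Lyra-v3
models:
  - model: nbeerbower/mistral-nemo-bophades-12B
    parameters:
      density: 0.5
      weight: 0.3
  - model: anthracite-org/magnum-v2-12b
    parameters:
      density: 0.5
      weight: 0.3
  - model: Gryphe/Pantheon-RP-1.6-12b-Nemo
    parameters:
      density: 0.5
      weight: 0.3
dtype: bfloat16
```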

### Special Thanks

Special thanks to the SillyTilly, and to myself, for helping me find the energy to finish this.