Luminurse is a merge based on Lumimaid, enhanced with a biomedical model (at high weight).
Boosting temperature has the interesting property of reducing repetitiveness while increasing the model's verbosity. Higher temperature also increases the odds of reasoning slippage (which can be mitigated manually by swiping to regenerate), so settings should be adjusted to one's comfort level. Lightly tested using Instruct prompts with temperature in the range of 1 to 1.6 (start somewhere in between, perhaps 1.2-1.45) and minP=0.01.
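To make the two settings concrete, here is a minimal sketch of temperature scaling followed by min-p filtering. This is illustrative only: the function name and structure are assumptions for the example, not the exact llama.cpp implementation, though the filtering rule (keep tokens whose probability is at least min_p times that of the top token) matches the common definition of min-p.

```python
import math
import random

def sample_min_p(logits, temperature=1.3, min_p=0.01, seed=None):
    """Illustrative sketch of temperature + min-p sampling.

    Not the exact implementation of any specific backend; parameter
    names simply mirror the sampler settings discussed above.
    """
    rng = random.Random(seed)
    # 1. Temperature scaling: dividing logits by a temperature > 1
    #    flattens the distribution, which is what adds variety/verbosity.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # 2. min-p filtering: keep only tokens whose probability is at least
    #    min_p times the probability of the most likely token.
    cutoff = min_p * max(probs)
    filtered = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(filtered)
    probs = [p / total for p in filtered]
    # 3. Draw a token index from the renormalized distribution.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

A low minP such as 0.01 trims only the far tail of the distribution, which is why it pairs well with the higher temperatures suggested above.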
- [static GGUFs, llama-bpe pre-tokenizer](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B-GGUF)
- [8bpw exl2 quant](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B-8bpw-exl2)
- [static GGUFs, smaug-bpe pre-tokenizer c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Luminurse-v0.2-OAS-8B-GGUF)
- [weighted/imatrix GGUFs, smaug-bpe pre-tokenizer c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Luminurse-v0.2-OAS-8B-i1-GGUF)

Built with Meta Llama 3.