---
inference: false
license: other
model_type: llama
---
TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)
“Luna AI Llama2 Uncensored” is a Llama2-based chat model
fine-tuned on over 40,000 long-form chat discussions.
This model was fine-tuned by Tap, the creator of Luna AI.
The result is an enhanced Llama2 7B model that rivals ChatGPT in performance
across a variety of tasks.
This model stands out for its long responses, low hallucination rate, and absence of censorship mechanisms.
The fine-tuning process was performed on an 8x A100 80GB machine.
The model was trained almost entirely on synthetic outputs.
The custom dataset was built from data drawn from diverse sources and includes multiple rounds of conversation between human and AI.
The model follows the Vicuna 1.1 / OpenChat format:

```
USER: I have difficulties in making friends, and I really need someone to talk to. Would you be my friend?

ASSISTANT: Of course! Friends are always here for each other. What do you like to do?
```

The model is currently being uploaded in FP16 format,
and there are plans to convert it to GGML and GPTQ 4-bit quantizations.
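As a minimal usage sketch, the snippet below loads the FP16 weights with the Hugging Face transformers library and builds a prompt in the USER/ASSISTANT format shown above. The repository id `Tap-M/Luna-AI-Llama2-Uncensored` and the generation settings are assumptions, not part of this card, and may need adjusting.

```python
# Minimal sketch, not an official example. The repository id below is an
# assumption; replace it with the actual Hugging Face repo for this model.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Tap-M/Luna-AI-Llama2-Uncensored"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load the FP16 weights as published
    device_map="auto",    # requires the accelerate package
)

# Build the prompt in the Vicuna 1.1 / OpenChat style shown above.
prompt = (
    "USER: I have difficulties in making friends, and I really need someone "
    "to talk to. Would you be my friend?\n\nASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens (the assistant's reply).
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```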
The data used to train the model is collected from various sources, mostly from the Web.
As such, it contains offensive, harmful and biased content.
We thus expect the model to exhibit such biases from the training data.
The model is not intended to inform decisions about matters central to human life,
and should not be used in such a way.
Risks and harms of large language models include the generation of harmful, offensive or biased content.
These models are often prone to generating incorrect information, sometimes referred to as hallucinations.
We do not expect our model to be an exception in this regard.