Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Saul-7B-Base - GGUF
- Model creator: https://huggingface.co/Equall/
- Original model: https://huggingface.co/Equall/Saul-7B-Base/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Saul-7B-Base.Q2_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q2_K.gguf) | Q2_K | 2.53GB |
| [Saul-7B-Base.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Saul-7B-Base.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Saul-7B-Base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Saul-7B-Base.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Saul-7B-Base.Q3_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q3_K.gguf) | Q3_K | 3.28GB |
| [Saul-7B-Base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Saul-7B-Base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Saul-7B-Base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Saul-7B-Base.Q4_0.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Saul-7B-Base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Saul-7B-Base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Saul-7B-Base.Q4_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q4_K.gguf) | Q4_K | 4.07GB |
| [Saul-7B-Base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Saul-7B-Base.Q4_1.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Saul-7B-Base.Q5_0.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Saul-7B-Base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Saul-7B-Base.Q5_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q5_K.gguf) | Q5_K | 4.78GB |
| [Saul-7B-Base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Saul-7B-Base.Q5_1.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Saul-7B-Base.Q6_K.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q6_K.gguf) | Q6_K | 5.53GB |
| [Saul-7B-Base.Q8_0.gguf](https://huggingface.co/RichardErkhov/Equall_-_Saul-7B-Base-gguf/blob/main/Saul-7B-Base.Q8_0.gguf) | Q8_0 | 7.17GB |
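To try one of these quantized files locally, a minimal sketch using `huggingface_hub` and `llama-cpp-python` might look like the following. The choice of the Q4_K_M file and all parameter values here are illustrative assumptions, not recommendations from this repo; any file from the table above works the same way.

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the table above (Q4_K_M is picked
# here only as an example of a common size/quality trade-off).
model_path = hf_hub_download(
    repo_id="RichardErkhov/Equall_-_Saul-7B-Base-gguf",
    filename="Saul-7B-Base.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx sets the context window (assumed value).
llm = Llama(model_path=model_path, n_ctx=4096)

# Saul-7B-Base is a base (non-instruct) model, so plain text
# completion is used rather than a chat template.
output = llm("The doctrine of consideration in contract law", max_tokens=128)
print(output["choices"][0]["text"])
```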
Original model description:
---
library_name: transformers
tags:
- legal
license: mit
language:
- en
---

# Equall/Saul-Base-v1

This is the base model for Equall/Saul-Base, a large language model tailored to the legal domain. It was obtained by continued pretraining of Mistral-7B.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644a900e3a619fe72b14af0f/OU4Y3s-WckYKMN4fQkNiS.png)

## Model Details

### Model Description

This is the model card of a đŸ€— transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Equall.ai in collaboration with CentraleSupelec, Sorbonne Université, Instituto Superior Técnico and NOVA School of Law
- **Model type:** 7B
- **Language(s) (NLP):** English
- **License:** MIT

### Model Sources

- **Paper:** https://arxiv.org/abs/2403.03883

## Uses

You can use it for legal use cases that involve generation. Here's how you can run the model using the pipeline() function from đŸ€— Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

# Note: as in the original card, this example loads the instruct variant,
# Equall/Saul-Instruct-v1, which is what the chat template below expects.
pipe = pipeline(
    "text-generation",
    model="Equall/Saul-Instruct-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# We use the tokenizer's chat template to format each message - see
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```

## Bias, Risks, and Limitations

This model is built on LLM technology, which comes with inherent limitations: it may occasionally generate inaccurate or nonsensical outputs. Furthermore, as a 7B model, it is expected to be less robust than larger models, such as the 70B variant.

## Citation

**BibTeX:**

```bibtex
@misc{colombo2024saullm7b,
      title={SaulLM-7B: A pioneering Large Language Model for Law},
      author={Pierre Colombo and Telmo Pessoa Pires and Malik Boudiaf and Dominic Culver and Rui Melo and Caio Corro and Andre F. T. Martins and Fabrizio Esposito and Vera Lúcia Raposo and Sofia Morgado and Michael Desa},
      year={2024},
      eprint={2403.03883},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```