---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- ExLlamaV2
- 5bit
- Mistral
- Mistral-7B
- quantized
- exl2
- 5.0-bpw
---
# Model Card for alokabhishek/Mistral-7B-Instruct-v0.2-5.0-bpw-exl2
<!-- Provide a quick summary of what the model is/does. -->
This repo contains Mistral AI's Mistral-7B-Instruct-v0.2 quantized to 5.0 bits per weight (bpw) with ExLlamaV2.
## Model Details
- Model creator: [Mistral AI_](https://huggingface.co/mistralai)
- Original model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### About quantization using ExLlamaV2
- ExLlamaV2 github repo: [ExLlamaV2 github repo](https://github.com/turboderp/exllamav2)
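For reference, EXL2 quants like this one are produced with ExLlamaV2's `convert.py` script. The snippet below is only a hedged sketch of that conversion step: all directory names are placeholders, and the flags reflect the script's documented `-i`/`-o`/`-cf`/`-b` options, so check the ExLlamaV2 repo for the current interface.
```python
import subprocess

# Illustrative only: convert an fp16 Mistral-7B-Instruct-v0.2 checkout to a 5.0-bpw EXL2 quant.
# Directory names are placeholders; adjust them to your local setup.
subprocess.run(
    [
        "python", "exllamav2/convert.py",
        "-i", "Mistral-7B-Instruct-v0.2",          # input: original fp16 model directory
        "-o", "exl2_work_dir",                     # temporary working directory
        "-cf", "Mistral-7B-Instruct-v0.2-5.0bpw",  # output directory for the quantized model
        "-b", "5.0",                               # target bits per weight
    ],
    check=True,
)
```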
## How to Get Started with the Model
Use the code below to get started with the model.
### How to run from Python code
#### First install the package
```shell
# Install ExLLamaV2
!git clone https://github.com/turboderp/exllamav2
!pip install -e exllamav2
```
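After installation, a quick sanity check can confirm that the package imports and that a CUDA-capable GPU is visible to PyTorch (a minimal sketch; ExLlamaV2 requires a CUDA or ROCm GPU for inference):
```python
import torch
import exllamav2  # will fail if the editable install above did not succeed

print("exllamav2 imported OK")
print("CUDA available:", torch.cuda.is_available())
```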
#### Import
```python
import os
import torch
from huggingface_hub import login  # optional: authenticate if the download requires a token
```
#### Set up variables
```python
# Define the model ID for the desired model
model_id = "alokabhishek/Mistral-7B-Instruct-v0.2-5.0-bpw-exl2"
BPW = 5.0  # bits per weight of this quant
# Derive a local folder name from the model ID
model_name = model_id.split("/")[-1]
```
#### Download the quantized model
```shell
!git lfs install
# Download the model to a local directory (replace {username} and {HF_TOKEN} with your Hugging Face username and access token)
!git clone https://{username}:{HF_TOKEN}@huggingface.co/{model_id} {model_name}
```
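As an alternative to `git clone`, the same files can be fetched from Python with `huggingface_hub` (a minimal sketch; `local_dir` is an arbitrary choice, and a token is only needed for gated or private repos):
```python
from huggingface_hub import snapshot_download

# Download the quantized weights, tokenizer, and config into a local folder
local_dir = snapshot_download(
    repo_id="alokabhishek/Mistral-7B-Instruct-v0.2-5.0-bpw-exl2",
    local_dir="Mistral-7B-Instruct-v0.2-5.0-bpw-exl2",
)
print("Model downloaded to:", local_dir)
```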
#### Run inference using the ExLlamaV2 test script
```shell
# Run model
!python exllamav2/test_inference.py -m {model_name}/ -p "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
```
#### Run inference from Python
```python
# exllamav2 is importable directly after `pip install -e exllamav2`
from exllamav2 import (
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Tokenizer,
)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
import time
# Initialize model and cache
model_directory = "/model_path/Mistral-7B-Instruct-v0.2-5.0-bpw-exl2/"  # path to the downloaded model folder
print("Loading model: " + model_directory)
config = ExLlamaV2Config(model_directory)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
# Initialize generator
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
# Generate some text
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.85
settings.top_k = 50
settings.top_p = 0.8
settings.token_repetition_penalty = 1.01
settings.disallow_tokens(tokenizer, [tokenizer.eos_token_id])  # block EOS so the full max_new_tokens are generated
prompt = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
max_new_tokens = 512
generator.warmup()
time_begin = time.time()
output = generator.generate_simple(prompt, settings, max_new_tokens, seed=1234)
time_end = time.time()
time_total = time_end - time_begin
print(output)
print()
print(f"Response generated in {time_total:.2f} seconds")
```
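Mistral-7B-Instruct-v0.2 is tuned on an `[INST] ... [/INST]` chat format, so instruction-style prompts generally behave better when wrapped accordingly. The sketch below reuses the `generator` and `settings` objects from the snippet above; exact special-token handling (e.g. the BOS token) can vary between ExLlamaV2 versions, so treat it as illustrative:
```python
# Wrap the user message in Mistral's instruction format before generating.
# Reuses `generator` and `settings` from the snippet above.
user_message = "Explain, in two sentences, what 5.0 bpw quantization means for model size and quality."
prompt = f"[INST] {user_message} [/INST]"

output = generator.generate_simple(prompt, settings, 256, seed=1234)
print(output)
```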
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]