
This is an INT4 quantized version of the `mistralai/Mistral-7B-Instruct-v0.2` model. The following Python packages were used to create it:

```
openvino==2024.5.0rc1
optimum==1.23.3
optimum-intel==1.20.1
nncf==2.13.0
torch==2.5.1
transformers==4.46.2
```
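
These pinned versions can be installed with pip, for example:

```sh
pip install openvino==2024.5.0rc1 optimum==1.23.3 optimum-intel==1.20.1 nncf==2.13.0 torch==2.5.1 transformers==4.46.2
```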

This quantized model was created with the following command:

```sh
optimum-cli export openvino --model "mistralai/Mistral-7B-Instruct-v0.2" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./Mistral-7B-Instruct-v0.2-ov-int4
```

For more details on the available options, run `optimum-cli export openvino --help` from your Python environment.
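
Below is a minimal sketch of running the quantized model with optimum-intel, assuming the export directory produced by the command above; the prompt and generation parameters are illustrative:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Directory produced by the optimum-cli export command above
model_dir = "./Mistral-7B-Instruct-v0.2-ov-int4"

# Load the OpenVINO IR model and its tokenizer
model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Build a chat-style prompt with the model's chat template
messages = [{"role": "user", "content": "What is INT4 weight quantization?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a response (generation parameters are illustrative)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```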

NNCF reported the following bitwidth distribution statistics during quantization:

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|--------------|---------------------------|--------------------------------------|
| 4            | 100% (226 / 226)          | 100% (226 / 226)                     |