---
base_model: NousResearch/Nous-Hermes-2-Yi-34B
inference: true
model_type: llama
quantized_by: mgoin
tags:
- nm-vllm
- sparse
---

## Nous-Hermes-2-Yi-34B-pruned50

This repo contains model files for [Nous Hermes 2 - Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) optimized for [NM-vLLM](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.

This model was pruned to 50% unstructured sparsity with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
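Because the pruning is unstructured (see `mask_structure: 0:0` in the recipe below), roughly half of the weights in each pruned decoder layer are exactly zero. As a rough, illustrative sanity check that is not part of the original workflow, you could count them with `transformers`; note that this loads the full 34B checkpoint into memory:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the pruned checkpoint (needs a large amount of RAM for a 34B model)
model = AutoModelForCausalLM.from_pretrained(
    "nm-testing/Nous-Hermes-2-Yi-34B-pruned50", torch_dtype=torch.bfloat16
)

total, zeros = 0, 0
for name, param in model.named_parameters():
    # The recipe targets the decoder layers, so skip embeddings and the LM head
    if param.dim() == 2 and "model.layers" in name:
        total += param.numel()
        zeros += (param == 0).sum().item()

print(f"Fraction of zero weights in pruned layers: {zeros / total:.1%}")  # ~50% expected
```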
## Inference

Install [NM-vLLM](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory usage:

```bash
pip install nm-vllm[sparse]
```

Run in a Python pipeline for local inference:

```python
from vllm import LLM, SamplingParams

# Load the 50% sparse checkpoint with the sparse_w16a16 weight format
model = LLM("nm-testing/Nous-Hermes-2-Yi-34B-pruned50", sparsity="sparse_w16a16")
prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>User:{prompt}\n<|im_start|>assistant:\n"

# Greedy decoding, capped at 100 new tokens
sampling_params = SamplingParams(max_tokens=100, temperature=0)
outputs = model.generate(formatted_prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)

"""
To make banana bread, you will need the following ingredients:

Ingredients:
- 2 ripe bananas
- 1 cup all-purpose flour
- 1/2 cup sugar
- 1/2 cup butter
- 1 teaspoon baking soda
- 1 teaspoon baking powder
- 1/2 teaspoon salt
- 1/2 cup milk
- 1 teaspoon vanilla extract

Instructions:
1. Preheat the oven to 3
"""
```
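NM-vLLM's `LLM.generate` also accepts a list of prompts in a single call, which is where the throughput benefits of the engine show up. A minimal batched sketch (the prompts themselves are just illustrative):

```python
from vllm import LLM, SamplingParams

model = LLM("nm-testing/Nous-Hermes-2-Yi-34B-pruned50", sparsity="sparse_w16a16")
sampling_params = SamplingParams(max_tokens=100, temperature=0)

# Batch several prompts in one call; results come back in the same order
questions = ["How to make banana bread?", "How to make blueberry muffins?"]
prompts = [f"<|im_start|>User:{q}\n<|im_start|>assistant:\n" for q in questions]

outputs = model.generate(prompts, sampling_params=sampling_params)
for out in outputs:
    print(out.outputs[0].text)
```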
## Prompt template

```
<|im_start|>User:{prompt}\n<|im_start|>assistant:\n
```
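In Python this is just an f-string, matching the inference example above; the `format_prompt` helper below is illustrative and not part of this repo:

```python
def format_prompt(prompt: str) -> str:
    # Wrap a raw user message in the template used by this model card
    return f"<|im_start|>User:{prompt}\n<|im_start|>assistant:\n"

print(format_prompt("How to make banana bread?"))
```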
## Sparsification

For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.

Install [SparseML](https://github.com/neuralmagic/sparseml):

```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```

Replace the recipe as you like and run this one-shot compression script to apply SparseGPT:

```python
import sparseml.transformers

original_model_name = "NousResearch/Nous-Hermes-2-Yi-34B"
calibration_dataset = "open_platypus"
output_directory = "output/"

# 50% unstructured sparsity applied to every decoder layer
recipe = """
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      sequential_update: true
      mask_structure: 0:0
      targets: ['re:model.layers.\d*$']
"""

# Apply SparseGPT to the model
sparseml.transformers.oneshot(
    model=original_model_name,
    dataset=calibration_dataset,
    recipe=recipe,
    output_dir=output_directory,
)
```
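After the one-shot run completes, the sparsified checkpoint written to `output/` should be loadable in NM-vLLM in the same way as the published model. A minimal sketch, assuming the output directory contains the full model and tokenizer files:

```python
from vllm import LLM

# Serve the locally sparsified checkpoint with the sparse weight kernels
local_model = LLM("output/", sparsity="sparse_w16a16")
```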
## Slack

For further support, and for discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).