---
base_model: NousResearch/Nous-Hermes-2-Yi-34B
inference: true
model_type: llama
quantized_by: mgoin
tags:
- nm-vllm
- sparse
---

## Nous-Hermes-2-Yi-34B-pruned2.4

This repo contains model files for [Nous Hermes 2 - Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) optimized for [NM-vLLM](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.

This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).

## Inference

Install [NM-vLLM](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory usage:

```bash
pip install nm-vllm[sparse]
```

Run in a Python pipeline for local inference:

```python
from vllm import LLM, SamplingParams

# Load the 2:4 sparse checkpoint with the semi-structured sparse kernels
model = LLM("nm-testing/Nous-Hermes-2-Yi-34B-pruned2.4", sparsity="semi_structured_sparse_w16a16")
prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>User:{prompt}\n<|im_start|>assistant:\n"

# Greedy decoding, capped at 100 new tokens
sampling_params = SamplingParams(max_tokens=100, temperature=0)
outputs = model.generate(formatted_prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)

"""
To make banana bread, follow these steps:
1. Gather the ingredients:
- 2 ripe bananas
- 2 cups of flour
- 1 teaspoon of baking powder
- 1 teaspoon of salt
- 1 teaspoon of sugar
- 1 teaspoon of vanilla extract
2. Preheat the oven to 350°F.
3. In a mixing bowl, combine the flour, baking powder, salt, sugar, and vanilla extract.
4.
"""
```
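
The same `generate` call also accepts a list of prompts, so several requests can be batched in a single pass. A minimal, self-contained sketch using the same model and prompt template as above (the example questions are illustrative; in practice the `LLM` object from the previous snippet can be reused):

```python
from vllm import LLM, SamplingParams

model = LLM("nm-testing/Nous-Hermes-2-Yi-34B-pruned2.4", sparsity="semi_structured_sparse_w16a16")

# Wrap each question in the model's prompt template
questions = ["How to make banana bread?", "What is semi-structured sparsity?"]
prompts = [f"<|im_start|>User:{q}\n<|im_start|>assistant:\n" for q in questions]

sampling_params = SamplingParams(max_tokens=100, temperature=0)
# One RequestOutput is returned per input prompt
outputs = model.generate(prompts, sampling_params=sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```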

## Prompt template

```
<|im_start|>User:{prompt}\n<|im_start|>assistant:\n
```
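
A small helper makes it easy to apply this template before calling `generate`. This is an illustrative sketch only; the `format_prompt` function below is not part of the model or of NM-vLLM:

```python
def format_prompt(user_message: str) -> str:
    """Wrap a user message in the prompt template expected by this model."""
    return f"<|im_start|>User:{user_message}\n<|im_start|>assistant:\n"

# Yields: "<|im_start|>User:How to make banana bread?\n<|im_start|>assistant:\n"
formatted = format_prompt("How to make banana bread?")
```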

## Sparsification

For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.

Install [SparseML](https://github.com/neuralmagic/sparseml):

```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```

Adjust the recipe as needed, then run this one-shot compression script to apply SparseGPT:

```python
import sparseml.transformers

original_model_name = "NousResearch/Nous-Hermes-2-Yi-34B"
calibration_dataset = "open_platypus"
output_directory = "output/"

# Prune the decoder layers to 50% sparsity with a 2:4 mask
# (raw string so the regex escape in `targets` is kept as-is)
recipe = r"""
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      sequential_update: true
      mask_structure: '2:4'
      targets: ['re:model.layers.\d*$']
"""

# Apply SparseGPT to the model
sparseml.transformers.oneshot(
    model=original_model_name,
    dataset=calibration_dataset,
    recipe=recipe,
    output_dir=output_directory,
)
```
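
After the script finishes, the sparsified weights are written to `output_dir` as a Hugging Face-format checkpoint. As a rough sanity check, assuming the standard `transformers` library is installed, the result can be reloaded like any other causal LM (the 2:4 speedup itself only comes from serving the model with NM-vLLM as shown above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the one-shot sparsified checkpoint produced by the script above
model = AutoModelForCausalLM.from_pretrained("output/", torch_dtype="auto")
# The tokenizer is unchanged by pruning, so it can be taken from the original model
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Hermes-2-Yi-34B")
```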

## Slack

For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).