---
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
inference: false
tags:
  - pytorch
  - inferentia2
  - neuron
---

# Neuronx model for upstage/SOLAR-10.7B-v1.0

This repository contains AWS Inferentia2 and neuronx compatible checkpoints for upstage/SOLAR-10.7B-v1.0. You can find detailed information about the base model on its [Model Card](https://huggingface.co/upstage/SOLAR-10.7B-v1.0).

This model card also includes instructions for compiling other SOLAR models with different settings, in case this combination isn't quite what you are looking for.

This model has been exported to the neuron format using specific input_shapes and compiler parameters detailed in the paragraphs below.

It has been compiled to run on an inf2.24xlarge instance on AWS. Note that while the inf2.24xlarge has 12 cores, this compilation uses only 8. For this model and configuration, the number of cores has to be a power of 2.

This model has been compiled using version 2.16 of the Neuron SDK. Make sure your environment has version 2.16 installed.
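As a quick sanity check (a convenience addition, assuming the aws-neuronx-tools package from the setup section below is installed), you can confirm how many Neuron devices and cores your instance exposes before loading the model:

```bash
# Lists the Neuron devices and cores available on this instance
neuron-ls
```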

Please refer to the 🤗 optimum-neuron documentation for an explanation of these parameters.

## Set up the environment

First, use the DLAMI image from Hugging Face. It has most of the utilities and drivers preinstalled. However, you may need to update to version 2.16 to use these binaries.

```bash
sudo apt-get update -y \
 && sudo apt-get install -y --no-install-recommends \
    aws-neuronx-dkms=2.15.9.0 \
    aws-neuronx-collectives=2.19.7.0-530fb3064 \
    aws-neuronx-runtime-lib=2.19.5.0-97e2d271b \
    aws-neuronx-tools=2.16.1.0

pip3 install --upgrade \
    neuronx-cc==2.12.54.0 \
    torch-neuronx==1.13.1.1.13.0 \
    transformers-neuronx==0.9.474 \
    --extra-index-url=https://pip.repos.neuron.amazonaws.com

pip3 install git+https://github.com/huggingface/optimum-neuron.git
```
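To confirm that the expected versions are active in your environment (a quick check added here for convenience, not part of the original instructions):

```bash
# Show the Neuron compiler and framework packages installed above
pip3 list | grep -i neuron
```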

## Running inference from this repository

```python
from optimum.neuron import pipeline

p = pipeline('text-generation', 'jburtoft/SOLAR-10.7B-v1.0-neuron-24xlarge-2.16-8core-4096')
p("Hi, my name is ",
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    max_length=200,
)
```

Sample output:

```
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
2024-Jan-13 04:48:45.0857 15117:15313 [6] nccl_net_ofi_init:1415 CCOM WARN NET/OFI aws-ofi-nccl initialization failed
2024-Jan-13 04:48:45.0857 15117:15313 [6] init.cc:137 CCOM WARN OFI plugin initNet() failed is EFA enabled?
[{'generated_text': 'Hi, my name is ***** ***** I am calling from ***** ***** and I am calling to see if you have any questions about your ***** ***** account.\nHi, my name is ***** ***** I am calling from ***** ***** and I am calling to see if you have any questions about your ***** ***** account.\nHi, my name is ***** ***** I am calling from ***** ***** and I am calling to see if you have any questions about your ***** ***** account.\nHi, my name is ***** ***** I am calling from ***** ***** and I am calling to see if you have any questions about your ***** ***** account.\nHi, my name is ***** ***** I am calling from ***** ***** and I am calling to see if'}]
```
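The CCOM/OFI warnings above come from the collective-communication networking plugin and are typically harmless for single-instance inference. If you want more control than `pipeline` offers, here is a minimal sketch of driving generation directly (the generation parameters are illustrative assumptions, mirroring the pipeline call above):

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

repo = "jburtoft/SOLAR-10.7B-v1.0-neuron-24xlarge-2.16-8core-4096"
model = NeuronModelForCausalLM.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# Tokenize a prompt and generate with settings similar to the pipeline example
inputs = tokenizer("Hi, my name is ", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_k=10, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```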

## Compiling for different instances or settings

If this repository doesn't have the exact version or settings you need, you can compile your own.

```python
from optimum.neuron import NeuronModelForCausalLM

# num_cores should be changed based on the instance. inf2.24xlarge has 6
# Neuron processors (they have two cores each), so 12 total.
input_shapes = {"batch_size": 1, "sequence_length": 4096}
compiler_args = {"num_cores": 8, "auto_cast_type": 'fp16'}

model = NeuronModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-v1.0", export=True, **compiler_args, **input_shapes
)
model.save_pretrained("SOLAR-10.7B-v1.0-neuron-24xlarge-2.16-8core-4096")
```

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")
tokenizer.save_pretrained("SOLAR-10.7B-v1.0-neuron-24xlarge-2.16-8core-4096")
```
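If you want to publish the compiled checkpoint, one option (a sketch added here; the target repo id is a placeholder, and you need to be logged in via `huggingface-cli login` first) is to upload the saved directory with `huggingface_hub`:

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="SOLAR-10.7B-v1.0-neuron-24xlarge-2.16-8core-4096",
    repo_id="your-username/SOLAR-10.7B-v1.0-neuron",  # placeholder repo id
    repo_type="model",
)
```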

This repository contains tags specific to versions of neuronx. When using it with 🤗 optimum-neuron, pick the repository revision that matches the version of neuronx you are using, so that the right serialized checkpoints are loaded.
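For example (a sketch; the tag name here is hypothetical, so check this repository's branches and tags for the actual revisions):

```python
from optimum.neuron import NeuronModelForCausalLM

model = NeuronModelForCausalLM.from_pretrained(
    "jburtoft/SOLAR-10.7B-v1.0-neuron-24xlarge-2.16-8core-4096",
    revision="2.16",  # hypothetical tag matching your neuronx version
)
```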

## Arguments passed during export

**input_shapes**

```json
{
  "batch_size": 1,
  "sequence_length": 4096
}
```

**compiler_args**

```json
{
  "auto_cast_type": "fp16",
  "num_cores": 8
}
```