```py
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
```

The above model accepts inputs having a dimension of `(10,)`. We can use the model for running a forward pass like so:

```py
# Generate random inputs for the model.
batch_size = 16
input_vector_dim = 10
random_inputs = tf.random.normal((batch_size, input_vector_dim))

# Run a forward pass.
_ = model(random_inputs)
```
In order to run the forward pass with an XLA-compiled function, we'd need to do:

```py
xla_fn = tf.function(model, jit_compile=True)
_ = xla_fn(random_inputs)
```

The default `call()` function of the model is used for compiling the XLA graph. But if there's any other model function you want to compile into XLA, that's also possible with:

```py
my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True)
```

Running a TF text generation model with XLA from 🤗 Transformers

To enable XLA-accelerated generation within 🤗 Transformers, you need to have a recent version of `transformers` installed. You can install it by running:
```bash
pip install transformers --upgrade
```

And then you can run the following code:
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Will error if the minimal version of Transformers is not installed.
from transformers.utils import check_min_version

check_min_version("4.21.0")

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
input_string = ["TensorFlow is"]

# One line to create an XLA generation function
xla_generate = tf.function(model.generate, jit_compile=True)

tokenized_input = tokenizer(input_string, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)

decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
# Generated -- TensorFlow is an open-source, open-source, distributed-source application framework for the
```
As you can notice, enabling XLA on `generate()` is just a single line of code. The rest of the code remains unchanged. However, there are a couple of gotchas in the above code snippet that are specific to XLA. You need to be aware of those to realize the speed-ups that XLA can bring in. We discuss these in the following section.

Gotchas to be aware of

When you are executing an XLA-enabled function (like `xla_generate()` above) for the first time, it will internally try to infer the computation graph, which is time-consuming. This process is known as "tracing". You might notice that the generation time is not fast. Successive calls of `xla_generate()` (or any other XLA-enabled function) won't have to infer the computation graph, given the inputs to the function follow the same shape with which the computation graph was initially built. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with variable input shape modalities (e.g., text).

To ensure `xla_generate()` always operates with the same input shapes, you can specify the padding arguments when calling the tokenizer.
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
input_string = ["TensorFlow is"]

xla_generate = tf.function(model.generate, jit_compile=True)

# Here, we call the tokenizer with padding options.
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")

generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
```
This way, you can ensure that `xla_generate()` will always receive inputs with the shape it was traced with, which speeds up generation. You can verify this with the code below:
```py
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")

xla_generate = tf.function(model.generate, jit_compile=True)

for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]:
    tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
    start = time.time_ns()
    generated_tokens = xla_generate(**tokenized_input, num_beams=2)
    end = time.time_ns()
    print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n")
```
On a Tesla T4 GPU, you can expect outputs like so:

```bash
Execution time -- 30819.6 ms
Execution time -- 79.0 ms
Execution time -- 78.9 ms
```

The first call to `xla_generate()` is time-consuming because of tracing, but the successive calls are orders of magnitude faster. Keep in mind that any change in the generation options at any point will trigger re-tracing and thus slow down generation.

We didn't cover all the text generation options 🤗 Transformers provides in this document. We encourage you to read the documentation for advanced use cases.

Additional Resources

Here, we leave you with some additional resources if you want to delve deeper into XLA in 🤗 Transformers and in general.
- This Colab Notebook provides an interactive demonstration if you want to fiddle with the XLA-compatible encoder-decoder (like T5) and decoder-only (like GPT2) text generation models.
- This blog post provides an overview of the comparison benchmarks for XLA-compatible models along with a friendly introduction to XLA in TensorFlow.
- This blog post discusses our design philosophy behind adding XLA support to the TensorFlow models in 🤗 Transformers.
- Recommended posts for learning more about XLA and TensorFlow graphs in general:
  - XLA: Optimizing Compiler for Machine Learning
  - Introduction to graphs and tf.function
  - Better performance with tf.function
Contribute new quantization method

Transformers supports and integrates many quantization methods such as QLoRA, GPTQ, LLM.int8, and AWQ. However, there are other quantization approaches that are not yet integrated. To make adding and using these quantization methods with Transformers models easier, you should use the [HfQuantizer] class. The [HfQuantizer] is designed as an internal helper class for adding a quantization method instead of something you apply to every PyTorch module. This guide will show you how to integrate a new quantization method with the [HfQuantizer] class.

Requirements

Before integrating a new quantization method into Transformers, ensure the method you are trying to add meets the following prerequisites. Only quantization methods that can be run with PyTorch modules are currently supported.
- The quantization method is available through a Python package that is pip-installable by anyone (it is also fine if you can only install the package from source). Ideally, pre-compiled kernels are included in the pip package.
- The method can run on commonly-used hardware (CPU, GPU, ...).
- The method is wrapped in a `nn.Module` (e.g., `Linear8bitLt`, `Linear4bit`), and the quantized linear layer should have the following definition:
```py
class Linear4bit(nn.Module):
    def __init__(self, ...):
        ...

    def forward(self, x):
        return my_4bit_kernel(x, self.weight, self.bias)
```

This way, Transformers models can be easily quantized by replacing some instances of `nn.Linear` with a target class.

- The quantization method should be serializable. You can save the quantized weights locally or push them to the Hub.
- Make sure the package that contains the quantization kernels/primitive is stable (no frequent breaking changes).
Some quantization methods may require "pre-quantizing" the models through data calibration (e.g., AWQ). In this case, we prefer to only support inference in Transformers and let the third-party library maintained by the ML community deal with the model quantization itself.

Build a new HfQuantizer class
Create a new quantization config class inside src/transformers/utils/quantization_config.py and make sure to expose the new quantization config inside Transformers' main init by adding it to the `_import_structure` object of src/transformers/__init__.py.
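To make that step concrete, here is a rough, hypothetical sketch of such a config class. The class name, `quant_method` string, and fields ("MyQuant", `bits`, `group_size`) are illustrative placeholders, not an actual Transformers API; check the existing configs in quantization_config.py for the exact conventions.

```python
# Hypothetical sketch of a new quantization config (names and fields are illustrative).
from transformers.utils.quantization_config import QuantizationConfigMixin


class MyQuantConfig(QuantizationConfigMixin):
    def __init__(self, bits: int = 4, group_size: int = 128, **kwargs):
        self.quant_method = "myquant"  # identifier the quantization auto-mapping keys on (illustrative)
        self.bits = bits
        self.group_size = group_size
        self.post_init()

    def post_init(self):
        # Basic sanity checks on user-provided values.
        if self.bits not in (2, 4, 8):
            raise ValueError(f"Only 2/4/8-bit quantization is supported, got {self.bits}")
```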
Create a new file inside src/transformers/quantizers/ named quantizer_your_method.py, and make it inherit from src/transformers/quantizers/base.py::HfQuantizer. Make sure to add the new quantizer and quantization config in the quantization auto-mapping in src/transformers/quantizers/auto.py. Define the following class attributes/property methods for your quantization method:
- requires_calibration: Whether the quantization method requires a data calibration process. If set to True, you can only support inference (with quantized weights) and not inference and quantization.
- required_packages: A list of strings of the required packages to use the quantized weights. You might need to define some new utility methods such as is_auto_awq_available in transformers/src/utils/import_utils.py.
- requires_parameters_quantization: Only required if your quantization method requires extra attention to the underlying nn.Parameter object. For example, bitsandbytes uses Params4bit and Int8Param, which requires some extra attention when quantizing the model. Most recent quantization methods pack int2/int4 weights inside torch.uint8 weights, so this flag should rarely be required (it is set to False by default).
- is_serializable: A property method to determine whether the method is serializable or not.
- is_trainable: A property method to determine whether you can fine-tune models on top of the quantization method (with or without PEFT approaches).

Write the validate_environment and update_torch_dtype methods. These methods are called before creating the quantized model to ensure users use the right configuration. You can have a look at how this is done on other quantizers.
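Putting those attributes together, a new quantizer might start out roughly like the sketch below. This is a hedged illustration only: the class and package names are hypothetical, and the exact signatures of the overridden methods should be checked against src/transformers/quantizers/base.py for the version you are targeting.

```python
# Hypothetical quantizer skeleton (names are illustrative; signatures may differ between versions).
from transformers.quantizers.base import HfQuantizer


class MyQuantHfQuantizer(HfQuantizer):
    requires_calibration = False          # quantization can happen at load time
    required_packages = ["myquant"]       # hypothetical pip package providing the kernels
    requires_parameters_quantization = False

    def validate_environment(self, *args, **kwargs):
        # Fail early if the backend package is missing.
        try:
            import myquant  # noqa: F401
        except ImportError:
            raise ImportError("Loading a MyQuant-quantized model requires the `myquant` package.")

    def update_torch_dtype(self, torch_dtype):
        # Pick a sensible default dtype for the non-quantized parts of the model.
        import torch

        return torch_dtype if torch_dtype is not None else torch.float16

    @property
    def is_serializable(self):
        return True

    @property
    def is_trainable(self):
        return False

    # _process_model_before_weight_loading and _process_model_after_weight_loading
    # still need to be defined -- see the next steps below.
```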
Write the _process_model_before_weight_loading method. In Transformers, the quantized models are initialized first on the "meta" device before loading the weights. This means the _process_model_before_weight_loading method takes care of manipulating the model skeleton to replace some modules (e.g., nn.Linear) with the target modules (quantization modules). You can define the module replacement logic or any other utility method by creating a new file in transformers/src/integrations/ and exposing the relevant methods in that folder's __init__.py file. The best starting point would be to have a look at another quantization method such as quantizer_awq.py.
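As a rough illustration of what such a module-replacement helper can look like (this is a sketch, not the actual Transformers integration code: `Linear4bit` is the quantized layer sketched in the requirements above, its constructor signature is assumed here, and the `myquant` package is hypothetical):

```python
# Hypothetical module-replacement helper for _process_model_before_weight_loading.
import torch.nn as nn

from myquant import Linear4bit  # hypothetical package providing the quantized layer


def replace_with_myquant_linear(model: nn.Module, modules_to_not_convert=None) -> nn.Module:
    """Recursively swap `nn.Linear` layers for the quantized `Linear4bit` target class."""
    modules_to_not_convert = modules_to_not_convert or ["lm_head"]
    for name, module in model.named_children():
        if isinstance(module, nn.Linear) and name not in modules_to_not_convert:
            # Constructor arguments are an assumption about the quantized layer's API.
            setattr(
                model,
                name,
                Linear4bit(module.in_features, module.out_features, bias=module.bias is not None),
            )
        else:
            # Recurse into child modules (e.g., attention blocks, MLPs).
            replace_with_myquant_linear(module, modules_to_not_convert)
    return model
```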
Write the _process_model_after_weight_loading method. This method enables implementing additional features that require manipulating the model after loading the weights.
Document everything! Make sure your quantization method is documented in the docs/source/en/quantization.md file.

Add tests! You should add tests by first adding the package in our nightly Dockerfile inside docker/transformers-quantization-latest-gpu and then adding a new test file in tests/quantization/xxx. Feel free to check out how it is implemented for other quantization methods.
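For instance, a new test file might start out along these lines. This is a hedged skeleton: the checkpoint id is a placeholder, and you would add assertions specific to your method (expected generations, memory footprint, serialization round-trips, and so on).

```python
# tests/quantization/myquant/test_myquant.py -- hypothetical skeleton with placeholder names.
import unittest

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.testing_utils import require_torch_gpu, slow


@slow
@require_torch_gpu
class MyQuantTest(unittest.TestCase):
    model_id = "my-org/my-quantized-checkpoint"  # placeholder checkpoint

    def test_quantized_model_generation(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_id)
        model = AutoModelForCausalLM.from_pretrained(self.model_id, device_map="auto")

        inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=10)

        # Check that generation runs and produces non-empty text.
        self.assertTrue(len(tokenizer.decode(output[0], skip_special_tokens=True)) > 0)
```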
"Autoregressive generation iteratively selects the next token from a probability distribution to generate text"
The process depicted above is repeated iteratively until some stopping condition is reached. Ideally, the stopping condition is dictated by the model, which should learn when to output an end-of-sequence (EOS) token. If this is not the case, generation stops when some predefined maximum length is reached. Properly setting up the token selection step and the stopping condition is essential to make your model behave as you'd expect on your task. That is why we have a [~generation.GenerationConfig] file associated with each model, which contains a good default generative parameterization and is loaded alongside your model. Let's talk code!
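As a quick peek at that default parameterization (a small sketch; the exact fields printed depend on the checkpoint), you can inspect the generation config that ships with a model:

```python
# Inspect the default generation parameterization loaded alongside a model.
from transformers import GenerationConfig

generation_config = GenerationConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print(generation_config)  # shows defaults such as the maximum length and special token ids
```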
If you're interested in basic LLM usage, our high-level Pipeline interface is a great starting point. However, LLMs often require advanced features like quantization and fine control of the token selection step, which is best done through [~generation.GenerationMixin.generate]. Autoregressive generation with LLMs is also resource-intensive and should be executed on a GPU for adequate throughput. First, you need to load the model.
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True
)
```

You'll notice two flags in the from_pretrained call:

- device_map ensures the model is moved to your GPU(s)
- load_in_4bit applies 4-bit dynamic quantization to massively reduce the resource requirements
There are other ways to initialize a model, but this is a good baseline to begin with an LLM. Next, you need to preprocess your text input with a tokenizer.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to("cuda")
```
The model_inputs variable holds the tokenized text input, as well as the attention mask. While [~generation.GenerationMixin.generate] does its best effort to infer the attention mask when it is not passed, we recommend passing it whenever possible for optimal results. After tokenizing the inputs, you can call the [~generation.GenerationMixin.generate] method to return the generated tokens. The generated tokens should then be converted to text before printing.
```python
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A list of colors: red, blue, green, yellow, orange, purple, pink,'
```

Finally, you don't need to do it one sequence at a time! You can batch your inputs, which will greatly improve the throughput at a small latency and memory cost. All you need to do is to make sure you pad your inputs properly (more on that below).
```python
tokenizer.pad_token = tokenizer.eos_token  # Most LLMs don't have a pad token by default
model_inputs = tokenizer(
    ["A list of colors: red, blue", "Portugal is"], return_tensors="pt", padding=True
).to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A list of colors: red, blue, green, yellow, orange, purple, pink,',
 'Portugal is a country in southwestern Europe, on the Iber']
```
And that's it! In a few lines of code, you can harness the power of an LLM.

Common pitfalls

There are many generation strategies, and sometimes the default values may not be appropriate for your use case. If your outputs aren't aligned with what you're expecting, we've created a list of the most common pitfalls and how to avoid them.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token  # Most LLMs don't have a pad token by default
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True
)
```
Generated output is too short/long

If not specified in the [~generation.GenerationConfig] file, generate returns up to 20 tokens by default. We highly recommend manually setting max_new_tokens in your generate call to control the maximum number of new tokens it can return. Keep in mind LLMs (more precisely, decoder-only models) also return the input prompt as part of the output.
model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda") By default, the output will contain up to 20 tokens generated_ids = model.generate(**model_inputs) tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5' Setting max_new_tokens allows you to control the maximum length generated_ids = model.generate(**model_inputs, max_new_tokens=50) tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,'
Incorrect generation mode

By default, and unless specified in the [~generation.GenerationConfig] file, generate selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling. On the other hand, input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with do_sample=True, and you can learn more about this topic in this blog post.
```python
# Set seed for reproducibility -- you don't need this unless you want full reproducibility
from transformers import set_seed
set_seed(42)

model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")

# LLM + greedy decoding = repetitive, boring output
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat'

# With sampling, the output becomes more creative!
generated_ids = model.generate(**model_inputs, do_sample=True)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. Specifically, I am an indoor-only cat. I'
```
Wrong padding side

LLMs are decoder-only architectures, meaning they continue to iterate on your input prompt. If your inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue from pad tokens, your input needs to be left-padded. Make sure you also don't forget to pass the attention mask to generate!
```python
# The tokenizer initialized above has right-padding active by default: the 1st sequence,
# which is shorter, has padding on the right side. Generation fails to capture the logic.
model_inputs = tokenizer(
    ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
).to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 33333333333'

# With left-padding, it works as expected!
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Most LLMs don't have a pad token by default
model_inputs = tokenizer(
    ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
).to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 3, 4, 5, 6,'
```
Wrong prompt

Some models and tasks expect a certain input prompt format to work properly. When this format is not applied, you will get a silent performance degradation: the model kinda works, but not as well as if you were following the expected prompt. More information about prompting, including which models and tasks need a specific format, is available in this guide. Let's see an example with a chat LLM, which makes use of chat templating:
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha") model = AutoModelForCausalLM.from_pretrained( "HuggingFaceH4/zephyr-7b-alpha", device_map="auto", load_in_4bit=True ) set_seed(0) prompt = """How many helicopters can a human eat in one sitting? Reply as a thug.""" model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda") input_length = model_inputs.input_ids.shape[1] generated_ids = model.generate(**model_inputs, max_new_tokens=20) print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0]) "I'm not a thug, but i can tell you that a human cannot eat" Oh no, it did not follow our instruction to reply as a thug! Let's see what happens when we write a better prompt and use the right template for this model (through tokenizer.apply_chat_template) set_seed(0) messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a thug", }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda") input_length = model_inputs.shape[1] generated_ids = model.generate(model_inputs, do_sample=True, max_new_tokens=20) print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0]) 'None, you thug. How bout you try to focus on more useful questions?' As we can see, it followed a proper thug style 😎
Further resources

While the autoregressive generation process is relatively straightforward, making the most out of your LLM can be a challenging endeavor because there are many moving parts. For your next steps to help you dive deeper into LLM usage and understanding:

Advanced generate usage

- Guide on how to control different generation methods, how to set up the generation configuration file, and how to stream the output;
- Guide on the prompt template for chat LLMs;
- Guide on how to get the most out of prompt design;
- API reference on [~generation.GenerationConfig], [~generation.GenerationMixin.generate], and generate-related classes. Most of the classes, including the logits processors, have usage examples!

LLM leaderboards

- Open LLM Leaderboard, which focuses on the quality of the open-source models;
- Open LLM-Perf Leaderboard, which focuses on LLM throughput.

Latency, throughput and memory utilization

- Guide on how to optimize LLMs for speed and memory;
- Guide on quantization such as bitsandbytes and autogptq, which shows you how to drastically reduce your memory requirements.

Related libraries

- text-generation-inference, a production-ready server for LLMs;
- optimum, an extension of 🤗 Transformers that optimizes for specific hardware devices.
Export to TorchScript This is the very beginning of our experiments with TorchScript and we are still exploring its capabilities with variable-input-size models. It is a focus of interest to us and we will deepen our analysis in upcoming releases, with more code examples, a more flexible implementation, and benchmarks comparing Python-based codes with compiled TorchScript. According to the TorchScript documentation: TorchScript is a way to create serializable and optimizable models from PyTorch code.
There are two PyTorch modules, JIT and TRACE, that allow developers to export their models to be reused in other programs like efficiency-oriented C++ programs. We provide an interface that allows you to export 🤗 Transformers models to TorchScript so they can be reused in a different environment than PyTorch-based Python programs. Here, we explain how to export and use our models using TorchScript. Exporting a model requires two things:
- model instantiation with the torchscript flag
- a forward pass with dummy inputs
These necessities imply several things developers should be careful about as detailed below.

TorchScript flag and tied weights

The torchscript flag is necessary because most of the 🤗 Transformers language models have tied weights between their Embedding layer and their Decoding layer. TorchScript does not allow you to export models that have tied weights, so it is necessary to untie and clone the weights beforehand. Models instantiated with the torchscript flag have their Embedding layer and Decoding layer separated, which means that they should not be trained down the line. Training would desynchronize the two layers, leading to unexpected results.

This is not the case for models that do not have a language model head, as those do not have tied weights. These models can be safely exported without the torchscript flag.

Dummy inputs and standard lengths

The dummy inputs are used for a model's forward pass. While the inputs' values are propagated through the layers, PyTorch keeps track of the different operations executed on each tensor. These recorded operations are then used to create the trace of the model.

The trace is created relative to the inputs' dimensions. It is therefore constrained by the dimensions of the dummy input, and will not work for any other sequence length or batch size. When trying with a different size, the following error is raised:

`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`

We recommend you trace the model with a dummy input size at least as large as the largest input that will be fed to the model during inference. Padding can help fill the missing values. However, since the model is traced with a larger input size, the dimensions of the matrix will also be large, resulting in more calculations.

Be careful of the total number of operations done on each input and follow the performance closely when exporting varying sequence-length models.

Using TorchScript in Python

This section demonstrates how to save and load models as well as how to use the trace for inference.

Saving a model

To export a BertModel with TorchScript, instantiate BertModel from the BertConfig class and then save it to disk under the filename traced_bert.pt:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch

enc = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)

# Instantiating the model
model = BertModel(config)

# The model needs to be in evaluation mode
model.eval()

# If you are instantiating the model with from_pretrained you can also easily set the TorchScript flag
model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True)

# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```
Loading a model

Now you can load the previously saved BertModel, traced_bert.pt, from disk and use it on the previously initialised dummy_input:

```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()

all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```
Using a traced model for inference

Use the traced model for inference by using its __call__ dunder method:

```python
traced_model(tokens_tensor, segments_tensors)
```

Deploy Hugging Face TorchScript models to AWS with the Neuron SDK

AWS introduced the Amazon EC2 Inf1 instance family for low cost, high performance machine learning inference in the cloud. The Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware accelerator, specializing in deep learning inferencing workloads. AWS Neuron is the SDK for Inferentia that supports tracing and optimizing transformers models for deployment on Inf1. The Neuron SDK provides:
- Easy-to-use API with one line of code change to trace and optimize a TorchScript model for inference in the cloud.
- Out of the box performance optimizations for improved cost-performance.
- Support for Hugging Face transformers models built with either PyTorch or TensorFlow.
Implications

Transformers models based on the BERT (Bidirectional Encoder Representations from Transformers) architecture, or its variants such as distilBERT and roBERTa, run best on Inf1 for non-generative tasks such as extractive question answering, sequence classification, and token classification. However, text generation tasks can still be adapted to run on Inf1 according to this AWS Neuron MarianMT tutorial. More information about models that can be converted out of the box on Inferentia can be found in the Model Architecture Fit section of the Neuron documentation.

Dependencies

Using AWS Neuron to convert models requires a Neuron SDK environment which comes preconfigured on AWS Deep Learning AMI.

Converting a model for AWS Neuron

Convert a model for AWS Neuron using the same code from Using TorchScript in Python to trace a BertModel. Import the torch.neuron framework extension to access the components of the Neuron SDK through a Python API:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

You only need to modify the following line:

```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

This enables the Neuron SDK to trace the model and optimize it for Inf1 instances. To learn more about AWS Neuron SDK features, tools, example tutorials and latest updates, please see the AWS Neuron SDK documentation.
Training on TPU with TensorFlow If you don't need long explanations and just want TPU code samples to get started with, check out our TPU example notebook!
What is a TPU?

A TPU is a Tensor Processing Unit. They are hardware designed by Google, which are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google's cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.

Because all TensorFlow models in 🤗 Transformers are Keras models, most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we'll make sure to flag them up when we get to them.

What kinds of TPU are available?

New users are often very confused by the range of TPUs, and the different ways to access them. The first key distinction to understand is the difference between TPU Nodes and TPU VMs. When you use a TPU Node, you are effectively indirectly accessing a remote TPU. You will need a separate VM, which will initialize your network and data pipeline and then forward them to the remote node. When you use a TPU on Google Colab, you are accessing it in the TPU Node style.

Using TPU Nodes can have some quite unexpected behaviour for people who aren't used to them! In particular, because the TPU is located on a physically different system to the machine you're running your Python code on, your data cannot be local to your machine - any data pipeline that loads from your machine's internal storage will totally fail! Instead, data must be stored in Google Cloud Storage where your data pipeline can still access it, even when the pipeline is running on the remote TPU node.
If you can fit all your data in memory as np.ndarray or tf.Tensor, then you can fit() on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage.
🤗Specific Hugging Face Tip🤗: The methods Dataset.to_tf_dataset() and its higher-level wrapper model.prepare_tf_dataset() , which you will see throughout our TF code examples, will both fail on a TPU Node. The reason for this is that even though they create a tf.data.Dataset it is not a “pure” tf.data pipeline and uses tf.numpy_function or Dataset.from_generator() to stream data from the underlying HuggingFace Dataset. This HuggingFace Dataset is backed by data that is on a local disc and which the remote TPU Node will not be able to read.
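If your dataset is small enough, one workaround on a TPU Node is the in-memory route mentioned above. The sketch below is a hedged illustration only, assuming a 🤗 Dataset named dataset that was already tokenized with fixed-length padding (so the columns are rectangular) and a compiled Keras model named model:

```python
import numpy as np

# Convert the tokenized columns into in-memory NumPy arrays, so nothing streams
# from local disk when the remote TPU Node runs the training loop.
# (Assumes every sequence was padded to the same length during tokenization.)
features = {
    "input_ids": np.array(dataset["input_ids"]),
    "attention_mask": np.array(dataset["attention_mask"]),
}
labels = np.array(dataset["labels"])

# Keras can fit() directly on in-memory arrays, even when the TPU is remote.
model.fit(features, labels, batch_size=32, epochs=1)
```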
The second way to access a TPU is via a TPU VM. When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline. All of the above warnings do not apply to TPU VMs!

This is an opinionated document, so here's our opinion: Avoid using TPU Node if possible. It is more confusing and more difficult to debug than TPU VMs. It is also likely to be unsupported in future - Google's latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes are increasingly going to become a "legacy" access method. However, we understand that the only free TPU access is on Colab and Kaggle Kernels, which uses TPU Node - so we'll try to explain how to handle it if you have to! Check the TPU example notebook for code samples that explain this in more detail.

What sizes of TPU are available?

A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in pods that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a pod slice. When you access a free TPU via Colab, you generally get a single v2-8 TPU.

I keep hearing about this XLA thing. What's XLA, and how does it relate to TPUs?

XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument jit_compile=True to model.compile(). If you don't get any errors and performance is good, that's a great sign that you're ready to move to TPU!

Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don't have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working like you expect them to.
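For example, enabling XLA on a standard Keras compile call is a one-argument change. This is a minimal sketch assuming a Keras model already named model; the optimizer is just a placeholder for whatever your training setup uses:

```python
import tensorflow as tf

# Any Keras model works here; a 🤗 Transformers TF model is also a Keras model.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    jit_compile=True,  # enables XLA compilation on CPU/GPU; remove this before moving to TPU (see below)
)
```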
XLA compiled code is usually faster - so even if you’re not planning to run on TPU, adding jit_compile=True can improve your performance. Be sure to note the caveats below about XLA compatibility, though!
Tip born of painful experience: Although using jit_compile=True is a good way to get a speed boost and test if your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation will happen implicitly on TPU, so remember to remove that line before actually running your code on a TPU!
How do I make my model XLA compatible? In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow that don’t work in XLA. We’ve distilled them into three core rules below:
🤗Specific HuggingFace Tip🤗: We’ve put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. Our models and loss functions generally obey rule #1 and #2 by default, so you can skip over them if you’re using transformers models. Don’t forget about these rules when writing your own models and loss functions, though!
XLA Rule #1: Your code cannot have "data-dependent conditionals"

What that means is that any if statement cannot depend on values inside a tf.Tensor. For example, this code block cannot be compiled with XLA!

```python
if tf.reduce_sum(tensor) > 10:
    tensor = tensor / 2.0
```

This might seem very restrictive at first, but most neural net code doesn't need to do this. You can often get around this restriction by using tf.cond (see the documentation here) or by removing the conditional and finding a clever math trick with indicator variables instead, like so:

```python
sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32)
tensor = tensor / (1.0 + sum_over_10)
```

This code has exactly the same effect as the code above, but by avoiding a conditional, we ensure it will compile with XLA without problems!

XLA Rule #2: Your code cannot have "data-dependent shapes"

What this means is that the shape of all of the tf.Tensor objects in your code cannot depend on their values. For example, the function tf.unique cannot be compiled with XLA, because it returns a tensor containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input Tensor was, and so XLA refuses to handle it!

In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is when you use label masking, setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses boolean indexing:

```python
label_mask = labels >= 0
masked_outputs = outputs[label_mask]
masked_labels = labels[label_mask]
loss = compute_loss(masked_outputs, masked_labels)
mean_loss = torch.mean(loss)
```

This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why? Because the shape of masked_outputs and masked_labels depends on how many positions are masked - that makes it a data-dependent shape. However, just like for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes.

```python
label_mask = tf.cast(labels >= 0, tf.float32)
loss = compute_loss(outputs, labels)
loss = loss * label_mask  # Set negative label positions to 0
mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask)
```

Here, we avoid data-dependent shapes by computing the loss for every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first block while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a tf.bool to tf.float32 and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA!

XLA Rule #3: XLA will need to recompile your model for every different input shape it sees

This is the big one. What this means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. In other modalities, static shapes are more common and this rule is much less of a problem.

How can you get around rule #3?
The key is padding - if you pad all your inputs to the same length, and then use an attention_mask, you can get the same results as you'd get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you might end up with batches consisting of endless padding tokens, which will waste a lot of compute and memory!

There isn't a perfect solution to this problem. However, you can try some tricks. One very useful trick is to pad batches of samples up to a multiple of a number like 32 or 64 tokens. This often only increases the number of tokens by a small amount, but it hugely reduces the number of unique input shapes, because every input shape now has to be a multiple of 32 or 64. Fewer unique input shapes means fewer XLA compilations!
🤗Specific HuggingFace Tip🤗: Our tokenizers and data collators have methods that can help you here. You can use padding="max_length" or padding="longest" when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a pad_to_multiple_of argument that you can use to reduce the number of unique input shapes you see!
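As a small sketch of that tip (assuming a tokenizer already loaded as tokenizer), combining dynamic padding with pad_to_multiple_of keeps the number of distinct input shapes small:

```python
batch = tokenizer(
    ["A short sentence.", "A slightly longer sentence than the first one."],
    padding="longest",       # pad to the longest sample in the batch...
    pad_to_multiple_of=64,   # ...then round that length up to a multiple of 64
    return_tensors="tf",
)
print(batch["input_ids"].shape)  # the sequence dimension will be a multiple of 64
```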
How do I actually train my model on TPU?

Once your training is XLA-compatible and (if you're using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a TPUStrategy scope. Take a look at our TPU example notebook to see this in action (there is also a short sketch of this initialization after the checklist below).

Summary

There was a lot in here, so let's summarize with a quick checklist you can follow when you want to get your model ready for TPU training:
- Make sure your code follows the three rules of XLA
- Compile your model with jit_compile=True on CPU/GPU and confirm that you can train it with XLA
- Either load your dataset into memory or use a TPU-compatible dataset loading approach (see notebook)
- Migrate your code either to Colab (with accelerator set to "TPU") or a TPU VM on Google Cloud
- Add TPU initializer code (see notebook)
- Create your TPUStrategy and make sure dataset loading and model creation are inside the strategy.scope() (see notebook)
- Don't forget to take jit_compile=True out again when you move to TPU!
- 🙏🙏🙏🥺🥺🥺
- Call model.fit()
- You did it!
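To make the TPU-specific checklist items concrete, here is a minimal sketch of the initialization and strategy scope; see the notebook for the full, working version, and note that create_model() is a placeholder for however you build and compile your own Keras model:

```python
import tensorflow as tf

# Connect to the TPU and initialize it.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Everything that creates variables (model, optimizer) must live inside the strategy scope.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = create_model()  # placeholder: build/load and compile your Keras model here
    # note: no jit_compile=True here -- XLA compilation already happens implicitly on TPU

model.fit(train_dataset, epochs=3)
```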
Quick tour [[open-in-colab]] Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the [pipeline] for inference, load a pretrained model and preprocessor with an AutoClass, and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or course next for more in-depth explanations of the concepts introduced here. Before you begin, make sure you have all the necessary libraries installed:
```bash
!pip install transformers datasets
```

You'll also need to install your preferred machine learning framework:

```bash
pip install torch
```

```bash
pip install tensorflow
```

Pipeline

The [pipeline] is the easiest and fastest way to use a pretrained model for inference. You can use the [pipeline] out-of-the-box for many tasks across different modalities, some of which are shown in the table below:

For a complete list of available tasks, check out the pipeline API reference.
| Task | Description | Modality | Pipeline identifier |
|------|-------------|----------|---------------------|
| Text classification | assign a label to a given sequence of text | NLP | pipeline(task="sentiment-analysis") |
| Text generation | generate text given a prompt | NLP | pipeline(task="text-generation") |
| Summarization | generate a summary of a sequence of text or document | NLP | pipeline(task="summarization") |
| Image classification | assign a label to an image | Computer vision | pipeline(task="image-classification") |
| Image segmentation | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task="image-segmentation") |
| Object detection | predict the bounding boxes and classes of objects in an image | Computer vision | pipeline(task="object-detection") |
| Audio classification | assign a label to some audio data | Audio | pipeline(task="audio-classification") |
| Automatic speech recognition | transcribe speech into text | Audio | pipeline(task="automatic-speech-recognition") |
| Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task="vqa") |
| Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task="document-question-answering") |
| Image captioning | generate a caption for a given image | Multimodal | pipeline(task="image-to-text") |

Start by creating an instance of [pipeline] and specifying a task you want to use it for. In this guide, you'll use the [pipeline] for sentiment analysis as an example:
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
```

The [pipeline] downloads and caches a default pretrained model and tokenizer for sentiment analysis. Now you can use the classifier on your target text:

```python
classifier("We are very happy to show you the 🤗 Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]
```

If you have more than one input, pass your inputs as a list to the [pipeline] to return a list of dictionaries:
```python
results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."])
for result in results:
    print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309
```
The [pipeline] can also iterate over an entire dataset for any task you like. For this example, let's choose automatic speech recognition as our task:

```python
import torch
from transformers import pipeline

speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
```

Load an audio dataset (see the 🤗 Datasets Quick Start for more details) you'd like to iterate over. For example, load the MInDS-14 dataset:
```python
from datasets import load_dataset, Audio

dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")  # doctest: +IGNORE_RESULT
```

You need to make sure the sampling rate of the dataset matches the sampling rate facebook/wav2vec2-base-960h was trained on:
```python
dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
```

The audio files are automatically loaded and resampled when calling the "audio" column. Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:
```python
result = speech_recognizer(dataset[:4]["audio"])
print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT']
```
For larger datasets where the inputs are big (like in speech or vision), you'll want to pass a generator instead of a list so you don't load all the inputs in memory at once. Take a look at the pipeline API reference for more information.

Use another model and tokenizer in the pipeline

The [pipeline] can accommodate any model from the Hub, making it easy to adapt the [pipeline] for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual BERT model finetuned for sentiment analysis you can use for French text:
model_name = "nlptown/bert-base-multilingual-uncased-sentiment" Use [AutoModelForSequenceClassification] and [AutoTokenizer] to load the pretrained model and it's associated tokenizer (more on an AutoClass in the next section):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Use [TFAutoModelForSequenceClassification] and [AutoTokenizer] to load the pretrained model and its associated tokenizer (more on a TFAutoClass in the next section):
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Specify the model and tokenizer in the [pipeline], and now you can apply the classifier on French text:
```python
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```
If you can't find a model for your use-case, you'll need to finetune a pretrained model on your data. Take a look at our finetuning tutorial to learn how. Finally, after you've finetuned your pretrained model, please consider sharing the model with the community on the Hub to democratize machine learning for everyone! 🤗

AutoClass
Under the hood, the [AutoModelForSequenceClassification] and [AutoTokenizer] classes work together to power the [pipeline] you used above. An AutoClass is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate AutoClass for your task and its associated preprocessing class. Let's return to the example from the previous section and see how you can use the AutoClass to replicate the results of the [pipeline].

AutoTokenizer

A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the tokenizer summary). The most important thing to remember is you need to instantiate a tokenizer with the same model name to ensure you're using the same tokenization rules a model was pretrained with. Load a tokenizer with [AutoTokenizer]:
```python
from transformers import AutoTokenizer

model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Pass your text to the tokenizer:
```python
encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer returns a dictionary containing:
- input_ids: numerical representations of your tokens.
- attention_mask: indicates which tokens should be attended to.

A tokenizer can also accept a list of inputs, and pad and truncate the text to return a batch with uniform length:

```python
pt_batch = tokenizer(
    ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
```
```python
tf_batch = tokenizer(
    ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="tf",
)
```

Check out the preprocess tutorial for more details about tokenization, and how to use an [AutoImageProcessor], [AutoFeatureExtractor] and [AutoProcessor] to preprocess image, audio, and multimodal inputs.

AutoModel
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [AutoModel] like you would load an [AutoTokenizer]. The only difference is selecting the correct [AutoModel] for the task. For text (or sequence) classification, you should load [AutoModelForSequenceClassification]:
```python
from transformers import AutoModelForSequenceClassification

model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```

See the task summary for tasks supported by an [AutoModel] class. Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding **:

```python
pt_outputs = pt_model(**pt_batch)
```
The model outputs the final activations in the logits attribute. Apply the softmax function to the logits to retrieve the probabilities:
```python
from torch import nn

pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
        [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
```

🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load a [TFAutoModel] like you would load an [AutoTokenizer]. The only difference is selecting the correct [TFAutoModel] for the task. For text (or sequence) classification, you should load [TFAutoModelForSequenceClassification]:
```python
from transformers import TFAutoModelForSequenceClassification

model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```

See the task summary for tasks supported by an [AutoModel] class. Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:

```python
tf_outputs = tf_model(tf_batch)
```
The model outputs the final activations in the logits attribute. Apply the softmax function to the logits to retrieve the probabilities:

```python
import tensorflow as tf

tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
tf_predictions  # doctest: +IGNORE_RESULT
```
All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors before the final activation function (like softmax) because the final activation function is often fused with the loss. Model outputs are special dataclasses so their attributes are autocompleted in an IDE. The model outputs behave like a tuple or a dictionary (you can index with an integer, a slice or a string) in which case, attributes that are None are ignored.
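For example (a small sketch reusing the pt_outputs from above), the same logits can be reached through an attribute, a string key, or an integer index:

```python
# All three access patterns return the same tensor of logits.
logits_by_attribute = pt_outputs.logits
logits_by_key = pt_outputs["logits"]
logits_by_index = pt_outputs[0]
```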
Save a model

Once your model is fine-tuned, you can save it with its tokenizer using [PreTrainedModel.save_pretrained]:

```python
pt_save_directory = "./pt_save_pretrained"
tokenizer.save_pretrained(pt_save_directory)  # doctest: +IGNORE_RESULT
pt_model.save_pretrained(pt_save_directory)
```

When you are ready to use the model again, reload it with [PreTrainedModel.from_pretrained]:
```python
pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```

Once your model is fine-tuned, you can save it with its tokenizer using [TFPreTrainedModel.save_pretrained]:
```python
tf_save_directory = "./tf_save_pretrained"
tokenizer.save_pretrained(tf_save_directory)  # doctest: +IGNORE_RESULT
tf_model.save_pretrained(tf_save_directory)
```

When you are ready to use the model again, reload it with [TFPreTrainedModel.from_pretrained]:

```python
tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The from_pt or from_tf parameter can convert the model from one framework to the other:
```python
from transformers import AutoModel

tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
```

```python
from transformers import TFAutoModel

tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```
Custom model builds You can modify the model's configuration class to change how a model is built. The configuration specifies a model's attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class. The model attributes are randomly initialized, and you'll need to train the model before you can use it to get meaningful results. Start by importing [AutoConfig], and then load the pretrained model you want to modify. Within [AutoConfig.from_pretrained], you can specify the attribute you want to change, such as the number of attention heads:
```python
from transformers import AutoConfig

my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12)
```

Create a model from your custom configuration with [AutoModel.from_config]:

```python
from transformers import AutoModel

my_model = AutoModel.from_config(my_config)
```

Create a model from your custom configuration with [TFAutoModel.from_config]:

```python
from transformers import TFAutoModel

my_model = TFAutoModel.from_config(my_config)
```