/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#how-to-hack-any-transformers-model
.md
The [🤗 Transformers](https://github.com/huggingface/transformers) library offers a collection of pre-trained models and tools for natural language processing, vision, and beyond. While these models cover a wide range of applications, you might encounter use cases that aren't supported out of the box. Customizing models can unlock new possibilities, such as adding new layers, altering architectures, or optimizing attention mechanisms. This guide will show you how to modify existing Transformers models to fit
21_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#how-to-hack-any-transformers-model
.md
architectures, or optimizing attention mechanisms. This guide will show you how to modify existing Transformers models to fit your specific needs. The great thing is, you don’t have to step away from the Transformers framework to make these changes. You can actually modify models directly in Transformers and still take advantage of features like the [Trainer API](https://huggingface.co/docs/transformers/main/en/main_classes/trainer),
21_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#how-to-hack-any-transformers-model
.md
still take advantage of features like the [Trainer API](https://huggingface.co/docs/transformers/main/en/main_classes/trainer), [PreTrainedModel](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel), and efficient fine-tuning with tools like [PEFT](https://huggingface.co/docs/peft/index).
21_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#how-to-hack-any-transformers-model
.md
In this guide, we’ll walk you through how to customize existing Transformers models to meet your requirements—without losing the benefits of the ecosystem. You'll learn how to:

- Modify a model's architecture by changing its attention mechanism.
- Apply techniques like Low-Rank Adaptation (LoRA) to specific model components.

We encourage you to contribute your own hacks and share them here with the community!
21_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#example-modifying-the-attention-mechanism-in-the-segment-anything-model-sam
.md
The **Segment Anything Model (SAM)** is a state-of-the-art model for image segmentation. In its default implementation, SAM uses a combined query-key-value (`qkv`) projection in its attention mechanism. However, you might want to fine-tune only specific components of the attention mechanism, such as the query (`q`) and value (`v`) projections, to reduce the number of trainable parameters and computational resources required.
21_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#motivation
.md
By splitting the combined `qkv` projection into separate `q`, `k`, and `v` projections, you can apply techniques like **LoRA** (Low-Rank Adaptation) to only the `q` and `v` projections. This approach allows you to: - Fine-tune fewer parameters, reducing computational overhead. - Potentially achieve better performance by focusing on specific components. - Experiment with different adaptation strategies in the attention mechanism.
21_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
Next, subclass the original `SamVisionAttention` class and modify it to have separate `q`, `k`, and `v` projections.

```python
import torch
import torch.nn as nn
from transformers.models.sam.modeling_sam import SamVisionAttention
21_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
class SamVisionAttentionSplit(SamVisionAttention, nn.Module):
    def __init__(self, config, window_size):
        super().__init__(config, window_size)
        del self.qkv
        # Separate q, k, v projections
        self.q = nn.Linear(config.hidden_size, config.hidden_size, bias=config.qkv_bias)
        self.k = nn.Linear(config.hidden_size, config.hidden_size, bias=config.qkv_bias)
        self.v = nn.Linear(config.hidden_size, config.hidden_size, bias=config.qkv_bias)
        self._register_load_state_dict_pre_hook(self.split_q_k_v_load_hook)
21_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
    def split_q_k_v_load_hook(self, state_dict, prefix, *args):
        keys_to_delete = []
        for key in list(state_dict.keys()):
            if "qkv." in key:
                # Split q, k, v from the combined projection
                q, k, v = state_dict[key].chunk(3, dim=0)
                # Replace with individual q, k, v projections
                state_dict[key.replace("qkv.", "q.")] = q
                state_dict[key.replace("qkv.", "k.")] = k
                state_dict[key.replace("qkv.", "v.")] = v
                # Mark the old qkv key for deletion
                keys_to_delete.append(key)
21_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
        # Remove old qkv keys
        for key in keys_to_delete:
            del state_dict[key]
21_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
    def forward(self, hidden_states: torch.Tensor, output_attentions=False) -> torch.Tensor:
        batch_size, height, width, _ = hidden_states.shape
        qkv_shapes = (batch_size * self.num_attention_heads, height * width, -1)
        query = self.q(hidden_states).reshape((batch_size, height * width, self.num_attention_heads, -1)).permute(0, 2, 1, 3).reshape(qkv_shapes)
        key = self.k(hidden_states).reshape((batch_size, height * width, self.num_attention_heads, -1)).permute(0, 2, 1, 3).reshape(qkv_shapes)
21_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
        value = self.v(hidden_states).reshape((batch_size, height * width, self.num_attention_heads, -1)).permute(0, 2, 1, 3).reshape(qkv_shapes)
21_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
        attn_weights = (query * self.scale) @ key.transpose(-2, -1)

        if self.use_rel_pos:
            attn_weights = self.add_decomposed_rel_pos(
                attn_weights, query, self.rel_pos_h, self.rel_pos_w, (height, width), (height, width)
            )
21_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
        attn_weights = torch.nn.functional.softmax(attn_weights, dtype=torch.float32, dim=-1).to(query.dtype)
        attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
        attn_output = (attn_probs @ value).reshape(batch_size, self.num_attention_heads, height, width, -1)
        attn_output = attn_output.permute(0, 2, 3, 1, 4).reshape(batch_size, height, width, -1)
        attn_output = self.proj(attn_output)
21_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
        if output_attentions:
            outputs = (attn_output, attn_weights)
        else:
            outputs = (attn_output, None)

        return outputs
```

**Explanation:**

- **Separate Projections:** The combined `qkv` projection is removed, and separate `q`, `k`, and `v` linear layers are created.
- **Weight Loading Hook:** The `split_q_k_v_load_hook` method splits the pre-trained `qkv` weights into separate `q`, `k`, and `v` weights when loading the model. This ensures compatibility with any pre-trained SAM checkpoint.
21_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-1-create-a-custom-attention-class
.md
- **Forward Pass:** Queries, keys, and values are computed separately, and the attention mechanism proceeds as usual.
21_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-2-replace-the-original-attention-class
.md
Replace the original `SamVisionAttention` class with your custom class so that the model uses the modified attention mechanism.

```python
from transformers import SamModel
from transformers.models.sam import modeling_sam

# Replace the attention class in the modeling_sam module
modeling_sam.SamVisionAttention = SamVisionAttentionSplit
21_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-2-replace-the-original-attention-class
.md
# Load the pre-trained SAM model
model = SamModel.from_pretrained("facebook/sam-vit-base")
```

**Explanation:**

- **Class Replacement:** By assigning your custom class to `modeling_sam.SamVisionAttention`, any instances of `SamVisionAttention` in the model will use the modified version. Thus when you call `SamModel`, it will use the newly defined `SamVisionAttentionSplit`.
- **Model Loading:** The model is loaded using `from_pretrained`, and the custom attention mechanism is integrated.
21_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-3-apply-lora-to-specific-projections
.md
With separate `q`, `k`, and `v` projections, you can now apply LoRA to specific components, such as the `q` and `v` projections.

```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q", "v"],  # Apply LoRA to q and v projections
    lora_dropout=0.1,
    task_type="mask-generation"
)
21_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-3-apply-lora-to-specific-projections
.md
# Apply LoRA to the model
model = get_peft_model(model, config)
```

**Explanation:**

- **LoRA Configuration:** The `LoraConfig` specifies the rank `r`, scaling factor `lora_alpha`, target modules (`"q"` and `"v"`), dropout, and task type.
- **Applying LoRA:** The `get_peft_model` function applies LoRA to the specified modules in the model.
- **Parameter Reduction:** By focusing on `q` and `v`, you reduce the number of trainable parameters, leading to faster training and lower memory usage.
21_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#step-4-verify-the-number-of-trainable-parameters
.md
It's simple to verify the number of trainable parameters and see what impact your modification had.

```python
model.print_trainable_parameters()
```

**Expected Output:**

```
trainable params: 608,256 || all params: 94,343,728 || trainable%: 0.6447
trainable params: 912,384 || all params: 94,647,856 || trainable%: 0.9640 # if k is also in target_modules
```
21_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#contributing-your-own-hacks
.md
Modifying pre-trained models can open up new avenues for research and application. By understanding and adjusting the internal mechanisms of models like SAM, you can tailor them to your specific needs, optimize performance, and experiment with new ideas. If you've developed your own hacks for Transformers models and would like to share them, consider contributing to this doc. - **Open a Pull Request:** Share your code changes and improvements directly in the repository.
21_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md
https://huggingface.co/docs/transformers/en/how_to_hack_models/#contributing-your-own-hacks
.md
- **Open a Pull Request:** Share your code changes and improvements directly in the repository. - **Write Documentation:** Provide clear explanations and examples of your modifications. - **Engage with the Community:** Discuss your ideas and get feedback from other developers and researchers by opening an issue.
21_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
22_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
22_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#training-on-tpu-with-tensorflow
.md
<Tip> If you don't need long explanations and just want TPU code samples to get started with, check out [our TPU example notebook!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) </Tip>
22_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-is-a-tpu
.md
A TPU is a **Tensor Processing Unit.** They are hardware designed by Google and are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google’s cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.
22_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-is-a-tpu
.md
Because [all TensorFlow models in 🤗 Transformers are Keras models](https://huggingface.co/blog/tensorflow-philosophy), most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we’ll make sure to flag them up when we get to them.
22_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
New users are often very confused by the range of TPUs, and the different ways to access them. The first key distinction to understand is the difference between **TPU Nodes** and **TPU VMs.** When you use a **TPU Node**, you are effectively indirectly accessing a remote TPU. You will need a separate VM, which will initialize your network and data pipeline and then forward them to the remote node. When you use a TPU on Google Colab, you are accessing it in the **TPU Node** style.
22_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
Using TPU Nodes can have some quite unexpected behaviour for people who aren’t used to them! In particular, because the TPU is located on a physically different system to the machine you’re running your Python code on, your data cannot be local to your machine - any data pipeline that loads from your machine’s internal storage will totally fail! Instead, data must be stored in Google Cloud Storage where your data pipeline can still access it, even when the pipeline is running on the remote TPU node.
22_3_1
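As a minimal sketch of what that looks like (the `gs://` bucket path below is a placeholder, and TFRecord is just one possible storage format), a TPU Node-friendly pipeline reads directly from Google Cloud Storage rather than from local disk:

```python
import tensorflow as tf

# Hypothetical bucket path - any GCS location the TPU Node can read will work.
train_files = tf.io.gfile.glob("gs://my-bucket/train-*.tfrecord")

# The tf.data pipeline streams from GCS, so it remains accessible to the remote TPU Node.
train_dataset = tf.data.TFRecordDataset(train_files)
```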
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
<Tip> If you can fit all your data in memory as `np.ndarray` or `tf.Tensor`, then you can `fit()` on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage. </Tip> <Tip>
22_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
**🤗Specific Hugging Face Tip🤗:** The methods `Dataset.to_tf_dataset()` and its higher-level wrapper `model.prepare_tf_dataset()` , which you will see throughout our TF code examples, will both fail on a TPU Node. The reason for this is that even though they create a `tf.data.Dataset` it is not a “pure” `tf.data` pipeline and uses `tf.numpy_function` or `Dataset.from_generator()` to stream data from the underlying HuggingFace `Dataset`. This HuggingFace `Dataset` is backed by data that is on a local disc
22_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
to stream data from the underlying HuggingFace `Dataset`. This HuggingFace `Dataset` is backed by data that is on a local disc and which the remote TPU Node will not be able to read.
22_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
</Tip> The second way to access a TPU is via a **TPU VM.** When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline. All of the above warnings do not apply to TPU VMs!
22_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
This is an opinionated document, so here’s our opinion: **Avoid using TPU Node if possible.** It is more confusing and more difficult to debug than TPU VMs. It is also likely to be unsupported in future - Google’s latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes are increasingly going to become a “legacy” access method. However, we understand that the only free TPU access is on Colab and Kaggle Kernels, which use TPU Nodes - so we’ll try to explain how to handle it if you
22_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-kinds-of-tpu-are-available
.md
the only free TPU access is on Colab and Kaggle Kernels, which use TPU Nodes - so we’ll try to explain how to handle it if you have to! Check the [TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) for code samples that explain this in more detail.
22_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#what-sizes-of-tpu-are-available
.md
A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in **pods** that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a **pod slice.** When you access a free TPU via Colab, you generally get a single v2-8 TPU.
22_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#i-keep-hearing-about-this-xla-thing-whats-xla-and-how-does-it-relate-to-tpus
.md
XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument `jit_compile=True` to `model.compile()`. If you don’t get any errors and performance is good, that’s a great sign that you’re ready to move to TPU!
22_5_0
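As a minimal sketch of that test (the checkpoint and learning rate are just examples, not part of the original guide), compiling a Transformers TF model with XLA on CPU/GPU can look like this:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Example checkpoint - substitute whatever model you actually plan to train.
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# jit_compile=True asks Keras to compile the train step with XLA.
# Remember to remove this line before running on an actual TPU (see the warning below).
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), jit_compile=True)
```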
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#i-keep-hearing-about-this-xla-thing-whats-xla-and-how-does-it-relate-to-tpus
.md
Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don’t have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working like you expect them to. <Tip>
22_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#i-keep-hearing-about-this-xla-thing-whats-xla-and-how-does-it-relate-to-tpus
.md
<Tip> XLA compiled code is usually faster - so even if you’re not planning to run on TPU, adding `jit_compile=True` can improve your performance. Be sure to note the caveats below about XLA compatibility, though! </Tip> <Tip warning={true}>
22_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#i-keep-hearing-about-this-xla-thing-whats-xla-and-how-does-it-relate-to-tpus
.md
</Tip> <Tip warning={true}> **Tip born of painful experience:** Although using `jit_compile=True` is a good way to get a speed boost and test if your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation will happen implicitly on TPU, so remember to remove that line before actually running your code on a TPU! </Tip>
22_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#how-do-i-make-my-model-xla-compatible
.md
In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow that don’t work in XLA. We’ve distilled them into three core rules below: <Tip>
22_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#how-do-i-make-my-model-xla-compatible
.md
<Tip> **🤗Specific HuggingFace Tip🤗:** We’ve put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. Our models and loss functions generally obey rule #1 and #2 by default, so you can skip over them if you’re using `transformers` models. Don’t forget about these rules when writing your own models and loss functions, though! </Tip>
22_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-1-your-code-cannot-have-data-dependent-conditionals
.md
What that means is that any `if` statement cannot depend on values inside a `tf.Tensor`. For example, this code block cannot be compiled with XLA!

```python
if tf.reduce_sum(tensor) > 10:
    tensor = tensor / 2.0
```
22_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-1-your-code-cannot-have-data-dependent-conditionals
.md
```python
if tf.reduce_sum(tensor) > 10:
    tensor = tensor / 2.0
```

This might seem very restrictive at first, but most neural net code doesn’t need to do this. You can often get around this restriction by using `tf.cond` (see the documentation [here](https://www.tensorflow.org/api_docs/python/tf/cond)) or by removing the conditional and finding a clever math trick with indicator variables instead, like so:

```python
sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32)
22_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-1-your-code-cannot-have-data-dependent-conditionals
.md
```python
sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32)
tensor = tensor / (1.0 + sum_over_10)
```

This code has exactly the same effect as the code above, but by avoiding a conditional, we ensure it will compile with XLA without problems!
22_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-2-your-code-cannot-have-data-dependent-shapes
.md
What this means is that the shape of all of the `tf.Tensor` objects in your code cannot depend on their values. For example, the function `tf.unique` cannot be compiled with XLA, because it returns a `tensor` containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input `Tensor` was, and so XLA refuses to handle it!
22_8_0
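For a concrete illustration (a minimal sketch, not from the original guide), wrapping `tf.unique` in an XLA-compiled function fails precisely because the length of its output depends on the values in its input:

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def deduplicate(x):
    # The length of `values` depends on how many duplicates `x` contains,
    # i.e. the output has a data-dependent shape.
    values, _ = tf.unique(x)
    return values

# deduplicate(tf.constant([1, 1, 2]))  # expected to raise an error when XLA compilation is attempted
```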
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-2-your-code-cannot-have-data-dependent-shapes
.md
In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is when you use **label masking**, setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses [boolean
22_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-2-your-code-cannot-have-data-dependent-shapes
.md
If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses [boolean indexing](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing):
22_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-2-your-code-cannot-have-data-dependent-shapes
.md
```python
label_mask = labels >= 0
masked_outputs = outputs[label_mask]
masked_labels = labels[label_mask]
loss = compute_loss(masked_outputs, masked_labels)
mean_loss = torch.mean(loss)
```
22_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-2-your-code-cannot-have-data-dependent-shapes
.md
masked_labels = labels[label_mask]
loss = compute_loss(masked_outputs, masked_labels)
mean_loss = torch.mean(loss)
```

This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why? Because the shape of `masked_outputs` and `masked_labels` depends on how many positions are masked - that makes it a **data-dependent shape.** However, just like for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes.

```python
22_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-2-your-code-cannot-have-data-dependent-shapes
.md
```python
label_mask = tf.cast(labels >= 0, tf.float32)
loss = compute_loss(outputs, labels)
loss = loss * label_mask  # Set negative label positions to 0
mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask)
```
22_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-2-your-code-cannot-have-data-dependent-shapes
.md
``` Here, we avoid data-dependent shapes by computing the loss for every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first block while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a `tf.bool` to `tf.float32` and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA!
22_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-3-xla-will-need-to-recompile-your-model-for-every-different-input-shape-it-sees
.md
This is the big one. What this means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. In other modalities, static shapes are more common and this rule is much less of a problem.
22_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-3-xla-will-need-to-recompile-your-model-for-every-different-input-shape-it-sees
.md
How can you get around rule #3? The key is **padding** - if you pad all your inputs to the same length, and then use an `attention_mask`, you can get the same results as you’d get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you might end up with batches consisting of endless padding tokens, which will waste a lot of compute and memory!
22_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-3-xla-will-need-to-recompile-your-model-for-every-different-input-shape-it-sees
.md
There isn’t a perfect solution to this problem. However, you can try some tricks. One very useful trick is to **pad batches of samples up to a multiple of a number like 32 or 64 tokens.** This often only increases the number of tokens by a small amount, but it hugely reduces the number of unique input shapes, because every input shape now has to be a multiple of 32 or 64. Fewer unique input shapes means fewer XLA compilations! <Tip>
22_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#xla-rule-3-xla-will-need-to-recompile-your-model-for-every-different-input-shape-it-sees
.md
<Tip> **🤗Specific HuggingFace Tip🤗:** Our tokenizers and data collators have methods that can help you here. You can use `padding="max_length"` or `padding="longest"` when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a `pad_to_multiple_of` argument that you can use to reduce the number of unique input shapes you see! </Tip>
22_9_3
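For example, a minimal sketch of `pad_to_multiple_of` in action (the checkpoint and sentences below are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentences = ["A short sentence.", "A somewhat longer sentence in the same batch."]

# Pad to the longest sample in the batch, rounded up to a multiple of 64,
# so the number of distinct input shapes XLA sees stays small.
batch = tokenizer(sentences, padding="longest", pad_to_multiple_of=64, return_tensors="tf")
```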
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#how-do-i-actually-train-my-model-on-tpu
.md
Once your training is XLA-compatible and (if you’re using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a `TPUStrategy` scope. Take a look at [our TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) to see this in action!
22_10_0
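As a rough sketch of those extra lines (the notebook is the authoritative reference; the checkpoint here is just an example), TPU initialization and a `TPUStrategy` scope typically look like this:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Detect and initialize the TPU (on Colab the resolver finds it automatically).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Model (and optimizer) creation must happen inside the strategy scope.
with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
    model.compile(optimizer="adam")
```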
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#summary
.md
There was a lot in here, so let’s summarize with a quick checklist you can follow when you want to get your model ready for TPU training: - Make sure your code follows the three rules of XLA - Compile your model with `jit_compile=True` on CPU/GPU and confirm that you can train it with XLA - Either load your dataset into memory or use a TPU-compatible dataset loading approach (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
22_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#summary
.md
- Migrate your code either to Colab (with accelerator set to “TPU”) or a TPU VM on Google Cloud - Add TPU initializer code (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)) - Create your `TPUStrategy` and make sure dataset loading and model creation are inside the `strategy.scope()` (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
22_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_tpu_tf.md
https://huggingface.co/docs/transformers/en/perf_train_tpu_tf/#summary
.md
- Don’t forget to take `jit_compile=True` out again when you move to TPU! - 🙏🙏🙏🥺🥺🥺 - Call `model.fit()` - You did it!
22_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
23_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
23_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#quick-tour
.md
[[open-in-colab]]
23_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#quick-tour
.md
Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the [`pipeline`] for inference, load a pretrained model and preprocessor with an [AutoClass](./model_doc/auto), and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or [course](https://huggingface.co/course/chapter1/1) next for more in-depth explanations of the concepts introduced here.
23_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#quick-tour
.md
Before you begin, make sure you have all the necessary libraries installed:

```bash
!pip install transformers datasets evaluate accelerate
```

You'll also need to install your preferred machine learning framework:

<frameworkcontent>
<pt>

```bash
pip install torch
```

</pt>
<tf>

```bash
pip install tensorflow
```

</tf>
</frameworkcontent>
23_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
<Youtube id="tiZFewofSLM"/> The [`pipeline`] is the easiest and fastest way to use a pretrained model for inference. You can use the [`pipeline`] out-of-the-box for many tasks across different modalities, some of which are shown in the table below: <Tip> For a complete list of available tasks, check out the [pipeline API reference](./main_classes/pipelines). </Tip>
23_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
<Tip> For a complete list of available tasks, check out the [pipeline API reference](./main_classes/pipelines). </Tip> | **Task** | **Description** | **Modality** | **Pipeline identifier** |
23_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
|---|---|---|---|
| Text classification | assign a label to a given sequence of text | NLP | pipeline(task="sentiment-analysis") |
23_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
| Text generation | generate text given a prompt | NLP | pipeline(task="text-generation") |
| Summarization | generate a summary of a sequence of text or document | NLP | pipeline(task="summarization") |
23_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
| Image classification | assign a label to an image | Computer vision | pipeline(task="image-classification") |
| Image segmentation | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task="image-segmentation") |
23_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
| Object detection | predict the bounding boxes and classes of objects in an image | Computer vision | pipeline(task="object-detection") |
| Audio classification | assign a label to some audio data | Audio | pipeline(task="audio-classification") |
23_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
| Automatic speech recognition | transcribe speech into text | Audio | pipeline(task="automatic-speech-recognition") |
| Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task="vqa") |
23_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
| Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task="document-question-answering") |
| Image captioning | generate a caption for a given image | Multimodal | pipeline(task="image-to-text") |
23_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
Start by creating an instance of [`pipeline`] and specifying a task you want to use it for. In this guide, you'll use the [`pipeline`] for sentiment analysis as an example:

```py
>>> from transformers import pipeline
23_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
>>> classifier = pipeline("sentiment-analysis") ``` The [`pipeline`] downloads and caches a default [pretrained model](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. Now you can use the `classifier` on your target text: ```py >>> classifier("We are very happy to show you the 🤗 Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ```
23_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
>>> classifier("We are very happy to show you the 🤗 Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` If you have more than one input, pass your inputs as a list to the [`pipeline`] to return a list of dictionaries: ```py >>> results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998
23_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
... print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` The [`pipeline`] can also iterate over an entire dataset for any task you like. For this example, let's choose automatic speech recognition as our task: ```py >>> import torch >>> from transformers import pipeline
23_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
```

Load an audio dataset (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) for more details) you'd like to iterate over. For example, load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:

```py
>>> from datasets import load_dataset, Audio
23_2_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` You need to make sure the sampling rate of the dataset matches the sampling rate [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) was trained on: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` The audio files are automatically loaded and resampled when calling the `"audio"` column.
23_2_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
```

The audio files are automatically loaded and resampled when calling the `"audio"` column. Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:

```py
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
23_2_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I
23_2_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT']
23_2_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#pipeline
.md
```

For larger datasets where the inputs are big (like in speech or vision), you'll want to pass a generator instead of a list so that you don't load all the inputs into memory at once. Take a look at the [pipeline API reference](./main_classes/pipelines) for more information.
23_2_17
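As an illustrative sketch (reusing the `speech_recognizer` pipeline and the MInDS-14 `dataset` from above; not part of the original quick tour), a generator streams samples through the pipeline one at a time:

```py
>>> def audio_samples():
...     # Yield one audio dict ({"array": ..., "sampling_rate": ...}) at a time
...     # instead of building the whole list in memory first.
...     for sample in dataset:
...         yield sample["audio"]

>>> # The pipeline consumes the generator lazily and yields results one by one.
>>> for prediction in speech_recognizer(audio_samples()):
...     print(prediction["text"])
```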
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#use-another-model-and-tokenizer-in-the-pipeline
.md
The [`pipeline`] can accommodate any model from the [Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) finetuned for sentiment analysis you can use for French text: ```py
23_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#use-another-model-and-tokenizer-in-the-pipeline
.md
```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Use [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `AutoClass` in the next section): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
23_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#use-another-model-and-tokenizer-in-the-pipeline
.md
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

</pt>
<tf>
Use [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on a `TFAutoClass` in the next section):

```py
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
23_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#use-another-model-and-tokenizer-in-the-pipeline
.md
>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

</tf>
</frameworkcontent>

Specify the model and tokenizer in the [`pipeline`], and now you can apply the `classifier` on French text:

```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```
23_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#use-another-model-and-tokenizer-in-the-pipeline
.md
[{'label': '5 stars', 'score': 0.7273}] ``` If you can't find a model for your use-case, you'll need to finetune a pretrained model on your data. Take a look at our [finetuning tutorial](./training) to learn how. Finally, after you've finetuned your pretrained model, please consider [sharing](./model_sharing) the model with the community on the Hub to democratize machine learning for everyone! 🤗
23_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autoclass
.md
<Youtube id="AhChOFRegn4"/> Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`] you used above. An [AutoClass](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate `AutoClass` for your task and it's associated preprocessing class.
23_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autoclass
.md
Let's return to the example from the previous section and see how you can use the `AutoClass` to replicate the results of the [`pipeline`].
23_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autotokenizer
.md
A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the [tokenizer summary](./tokenizer_summary)). The most important thing to remember is you need to instantiate a tokenizer with the same model name to ensure you're using the same tokenization rules a model was pretrained with.
23_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autotokenizer
.md
Load a tokenizer with [`AutoTokenizer`]:

```py
>>> from transformers import AutoTokenizer
23_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autotokenizer
.md
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Pass your text to the tokenizer: ```py >>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
23_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autotokenizer
.md
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer returns a dictionary containing:

* [input_ids](./glossary#input-ids): numerical representations of your tokens.
* [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to.

A tokenizer can also accept a list of inputs, and pad and truncate the text to return a batch with uniform length:

<frameworkcontent>
<pt>

```py
23_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autotokenizer
.md
<frameworkcontent>
<pt>

```py
>>> pt_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="pt",
... )
```

</pt>
<tf>

```py
>>> tf_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
23_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#autotokenizer
.md
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="tf",
... )
```

</tf>
</frameworkcontent>

<Tip>

Check out the [preprocess](./preprocessing) tutorial for more details about tokenization, and how to use an [`AutoImageProcessor`], [`AutoFeatureExtractor`] and [`AutoProcessor`] to preprocess image, audio, and multimodal inputs.

</Tip>
23_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
<frameworkcontent> <pt> 🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`AutoModel`] for the task. For text (or sequence) classification, you should load [`AutoModelForSequenceClassification`].
23_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
By default, the weights are loaded in full precision (torch.float32), regardless of the data type the weights are actually stored in, such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file, which automatically picks the most memory-optimal data type.

```py
>>> from transformers import AutoModelForSequenceClassification
23_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name, torch_dtype="auto") ``` <Tip> See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] class. </Tip> Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`: ```py >>> pt_outputs = pt_model(**pt_batch) ```
23_6_2