source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md
https://huggingface.co/docs/transformers/en/multilingual/#mbart
.md
>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."
59_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md
https://huggingface.co/docs/transformers/en/multilingual/#mbart
.md
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") ``` Tokenize the text: ```py >>> encoded_fi = tokenizer(fi_text, return_tensors="pt") ``` MBart forces the target language id as the first generated token to translate to the target language. Set the `forced_bos_token_id` to `en` in the `generate` method to translate to English: ```py
59_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md
https://huggingface.co/docs/transformers/en/multilingual/#mbart
.md
```py >>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Don't interfere with the wizard's affairs, because they are subtle, will soon get angry." ``` If you are using the `facebook/mbart-large-50-many-to-one-mmt` checkpoint, you don't need to force the target language id as the first generated token; otherwise, the usage is the same.
59_8_4
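The MBart steps above are split across several chunks; here is a consolidated sketch of the same Finnish-to-English translation, using only the checkpoint, language codes, and calls shown above.

```py
# Consolidated sketch of the MBart example above (Finnish -> English).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

encoded_fi = tokenizer(fi_text, return_tensors="pt")
# MBart expects the target language id as the first generated token.
generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```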
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
60_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
60_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#debugging
.md
Training on multiple GPUs can be a tricky endeavor whether you're running into installation issues or communication problems between your GPUs. This debugging guide covers some issues you may run into and how to resolve them.
60_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#deepspeed-cuda-installation
.md
If you're using DeepSpeed, you've probably already installed it with the following command. ```bash pip install deepspeed ``` DeepSpeed compiles CUDA C++ code and it can be a potential source of errors when building PyTorch extensions that require CUDA. These errors depend on how CUDA is installed on your system, and this section focuses on PyTorch built with *CUDA 10.2*. <Tip>
60_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#deepspeed-cuda-installation
.md
<Tip> For any other installation issues, please [open an issue](https://github.com/microsoft/DeepSpeed/issues) with the DeepSpeed team. </Tip>
60_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#non-identical-cuda-toolkits
.md
PyTorch comes with its own CUDA toolkit, but to use DeepSpeed with PyTorch, you need to have an identical version of CUDA installed system-wide. For example, if you installed PyTorch with `cudatoolkit==10.2` in your Python environment, then you'll also need to have CUDA 10.2 installed system-wide. If you don't have CUDA installed system-wide, you should install it first.
60_3_0
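A quick way to verify the version match described above is to compare the CUDA version PyTorch was built with against the system-wide `nvcc`. This is a minimal sketch, not part of the guide above, and it assumes `nvcc` is on your `PATH`.

```py
# Minimal sketch: compare PyTorch's CUDA version with the system-wide toolkit.
import subprocess

import torch

print("PyTorch built with CUDA:", torch.version.cuda)
# The release reported by `nvcc --version` should match the version above.
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
```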
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#non-identical-cuda-toolkits
.md
The exact location may vary from system to system, but `/usr/local/cuda-10.2` is the most common location on many Unix systems. When CUDA is correctly set up and added to your `PATH` environment variable, you can find the installation location with the following command: ```bash which nvcc ```
60_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#multiple-cuda-toolkits
.md
You may also have more than one CUDA toolkit installed system-wide. ```bash /usr/local/cuda-10.2 /usr/local/cuda-11.0 ``` Typically, package installers set the paths to whichever version was installed last. If the package build fails because it can't find the right CUDA version (despite it being installed system-wide already), then you need to configure the `PATH` and `LD_LIBRARY_PATH` environment variables to point to the correct path.
60_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#multiple-cuda-toolkits
.md
Take a look at the contents of these environment variables first: ```bash echo $PATH echo $LD_LIBRARY_PATH ``` `PATH` lists the locations of the executables and `LD_LIBRARY_PATH` lists where to look for shared libraries. Earlier entries are prioritized over later ones, and `:` is used to separate multiple entries. To tell the build program where to find the specific CUDA toolkit you want, insert the correct path so it is listed first. This command prepends to, rather than overwrites, the existing values. ```bash
60_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#multiple-cuda-toolkits
.md
```bash # adjust the version and full path if needed export PATH=/usr/local/cuda-10.2/bin:$PATH export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH ``` In addition, you should also check that the directories you assign actually exist. The `lib64` sub-directory contains various CUDA `.so` objects (like `libcudart.so`), and while it is unlikely your system names them differently, you should check the actual names and change them accordingly.
60_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#older-cuda-versions
.md
Sometimes, older CUDA versions may refuse to build with newer compilers. For example, you may have `gcc-9`, but CUDA wants `gcc-7`. Usually, installing the latest CUDA toolkit enables support for the newer compiler.
60_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#older-cuda-versions
.md
You could also install an older version of the compiler in addition to the one you're currently using (or it may already be installed but isn't used by default, so the build system can't see it). To resolve this, you can create a symlink so the build system can see the older compiler. ```bash # adapt the path to your system sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++ ```
60_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#prebuild
.md
If you're still having issues with installing DeepSpeed or if you're building DeepSpeed at run time, you can try to prebuild the DeepSpeed modules before installing them. To make a local build for DeepSpeed: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \ --global-option="build_ext" --global-option="-j8" --no-cache -v \ --disable-pip-version-check 2>&1 | tee build.log ``` <Tip>
60_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#prebuild
.md
--disable-pip-version-check 2>&1 | tee build.log ``` <Tip> To use NVMe offload, add the `DS_BUILD_AIO=1` parameter to the build command and make sure you install the libaio-dev package system-wide. </Tip>
60_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#prebuild
.md
</Tip> Next, you'll have to specify your GPU's architecture by editing the `TORCH_CUDA_ARCH_LIST` variable (find a complete list of NVIDIA GPUs and their corresponding architectures on this [page](https://developer.nvidia.com/cuda-gpus)). To check which architectures your PyTorch build supports, run the following command: ```bash python -c "import torch; print(torch.cuda.get_arch_list())" ``` Find the architecture for a GPU with the following command: <hfoptions id="arch">
60_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#prebuild
.md
``` Find the architecture for a GPU with the following command: <hfoptions id="arch"> <hfoption id="same GPUs"> ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())" ``` </hfoption> <hfoption id="specific GPU"> To find the architecture for GPU `0`: ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; \ print(torch.cuda.get_device_properties(torch.device('cuda')))
60_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#prebuild
.md
```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; \ print(torch.cuda.get_device_properties(torch.device('cuda'))) "_CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82)" ``` This means your GPU architecture is `8.6`. </hfoption> </hfoptions> If you get `8, 6`, then you can set `TORCH_CUDA_ARCH_LIST="8.6"`. For multiple GPUs with different architectures, list them like `TORCH_CUDA_ARCH_LIST="6.1;8.6"`.
60_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#prebuild
.md
It is also possible to not specify `TORCH_CUDA_ARCH_LIST`, in which case the build program automatically queries the GPU architecture of the build machine. However, it may not match the GPU on the target machine, which is why it is better to explicitly specify the correct architecture. For training on multiple machines with the same setup, you'll need to make a binary wheel: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build
60_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#prebuild
.md
```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \ python setup.py build_ext -j8 bdist_wheel ``` This command generates a binary wheel that'll look something like `dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl`. Now you can install this wheel locally or on another machine. ```bash pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl ```
60_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#multi-gpu-network-issues-debug
.md
When training or running inference with `DistributedDataParallel` and multiple GPUs, if you run into issues with inter-communication between processes and/or nodes, you can use the following script to diagnose network problems. ```bash wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py ``` For example, to test how 2 GPUs interact, run: ```bash python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ```
60_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#multi-gpu-network-issues-debug
.md
```bash python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` If both processes can talk to each other and allocate GPU memory, each will print an OK status. For more GPUs or nodes, adjust the arguments in the script. You will find a lot more details inside the diagnostics script, including a recipe for how to run it in a SLURM environment. An additional level of debugging is to add the `NCCL_DEBUG=INFO` environment variable as follows: ```bash
60_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#multi-gpu-network-issues-debug
.md
An additional level of debugging is to add the `NCCL_DEBUG=INFO` environment variable as follows: ```bash NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` This will dump a lot of NCCL-related debug information, which you can then search for online if you find that some problems are reported. Or, if you're not sure how to interpret the output, you can share the log file in an Issue.
60_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
<Tip> This feature is currently available for PyTorch only. </Tip> <Tip> For multi-GPU training it requires DDP (`torch.distributed.launch`). </Tip> <Tip> This feature can be used with any `nn.Module`-based model. </Tip> If you start getting `loss=NaN` or the model exhibits some other abnormal behavior due to `inf` or `nan` in activations or weights, you need to discover where the first underflow or overflow happens and what led to it. Luckily
60_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
activations or weights, you need to discover where the first underflow or overflow happens and what led to it. Luckily, you can accomplish that easily by activating a special module that will do the detection automatically. If you're using [`Trainer`], you just need to add: ```bash --debug underflow_overflow ``` to the normal command line arguments, or pass `debug="underflow_overflow"` when creating the [`TrainingArguments`] object.
60_8_1
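For reference, a minimal sketch of the `TrainingArguments` variant mentioned above; the `output_dir` value is a placeholder, not something from the guide.

```py
# Sketch: enable underflow/overflow detection via TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",         # placeholder path
    debug="underflow_overflow",     # same effect as the --debug underflow_overflow CLI flag
)
```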
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
to the normal command line arguments, or pass `debug="underflow_overflow"` when creating the [`TrainingArguments`] object. If you're using your own training loop or another Trainer you can accomplish the same with: ```python from transformers.debug_utils import DebugUnderflowOverflow
60_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
debug_overflow = DebugUnderflowOverflow(model) ``` [`~debug_utils.DebugUnderflowOverflow`] inserts hooks into the model that, immediately after each forward call, test the input and output variables and the corresponding module's weights. As soon as `inf` or `nan` is detected in at least one element of the activations or weights, the program will assert and print a report like this (this was caught with `google/mt5-small` under fp16 mixed precision): ``` Detected inf/nan during batch_number=0
60_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
like this (this was caught with `google/mt5-small` under fp16 mixed precision): ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata encoder.block.1.layer.1.DenseReluDense.dropout Dropout 0.00e+00 2.57e+02 input[0] 0.00e+00 2.85e+02 output [...] encoder.block.2.layer.0 T5LayerSelfAttention 6.78e-04 3.15e+03 input[0] 2.65e-04 3.42e+03 output[0] None output[1] 2.25e-01 1.00e+04 output[2] encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight
60_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
None output[1] 2.25e-01 1.00e+04 output[2] encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.dropout Dropout 0.00e+00 8.76e+03 input[0]
60_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.dropout Dropout 0.00e+00 8.76e+03 input[0] 0.00e+00 9.74e+03 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ```
60_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` The example output has been trimmed in the middle for brevity. The second column shows the value of the absolute largest element, so if you have a closer look at the last few frames, the inputs and outputs were in the range of `1e4`. So when this training was done under fp16 mixed precision the very
60_8_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
the inputs and outputs were in the range of `1e4`. So when this training was done under fp16 mixed precision the very last step overflowed (since under `fp16` the largest number before `inf` is `64e3`). To avoid overflows under `fp16` the activations must remain way below `1e4`, because `1e4 * 1e4 = 1e8` so any matrix multiplication with large activations is going to lead to a numerical overflow condition.
60_8_8
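A small illustration of the fp16 limit discussed above (not part of the original guide): the largest finite fp16 value is 65504, so activations around `1e4` overflow to `inf` as soon as they are multiplied together.

```py
import torch

print(torch.finfo(torch.float16).max)         # 65504.0, fp16's largest finite value
x = torch.tensor([1e4], dtype=torch.float16)
print(x * x)                                  # tensor([inf], dtype=torch.float16)
```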
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
large activations is going to lead to a numerical overflow condition. At the very start of the trace you can discover at which batch number the problem occurred (here `Detected inf/nan during batch_number=0` means the problem occurred on the first batch). Each reported frame starts by declaring the fully qualified entry for the corresponding module this frame is reporting for. If we look just at this frame: ``` encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight
60_8_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
for. If we look just at this frame: ``` encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output ``` Here, `encoder.block.2.layer.1.layer_norm` indicates that it was a layer norm for the first layer, of the second block of the encoder. And the specific call of `forward` is `T5LayerNorm`. Let's look at the last few frames of that report: ``` Detected inf/nan during batch_number=0 Last 21 forward frames:
60_8_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
Let's look at the last few frames of that report: ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [...] encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0]
60_8_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` The last frame reports for `Dropout.forward` function with the first entry for the only input and the second for the
60_8_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
``` The last frame reports for the `Dropout.forward` function, with the first entry for the only input and the second for the only output. You can see that it was called from the `dropout` attribute inside the `DenseReluDense` class. We can see that it happened during the first layer, of the 2nd block, during the very first batch. Finally, the absolute largest input element was `6.27e+04` and the same for the output was `inf`.
60_8_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
input element was `6.27e+04` and the same for the output was `inf`. You can see here that `T5DenseGatedGeluDense.forward` resulted in output activations whose absolute max value was around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have `Dropout`, which renormalizes the remaining values after it zeroed some of the elements, which pushes the absolute max value to more than 64K, and we get an overflow (`inf`).
60_8_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
overflow (`inf`). As you can see, it's the previous frames that we need to look into, where the numbers start becoming too large for fp16. Let's match the report to the code from `models/t5/modeling_t5.py`: ```python class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
60_8_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"]
60_8_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states ``` Now it's easy to see the `dropout` call, and all the previous calls as well. Since the detection is happening in a forward hook, these reports are printed immediately after each `forward` returns.
60_8_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
Since the detection is happening in a forward hook, these reports are printed immediately after each `forward` returns. Going back to the full report, to act on it and fix the problem we need to go up a few frames to where the numbers started to grow, and most likely switch to `fp32` mode there so that the numbers don't overflow when multiplied or summed up. Of course, there might be other solutions. For example, we could turn off `amp` temporarily if it's
60_8_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
or summed up. Of course, there might be other solutions. For example, we could turn off `amp` temporarily if it's enabled, after moving the original `forward` into a helper wrapper, like so: ```python def _forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states
60_8_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
import torch
60_8_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
def forward(self, hidden_states): if torch.is_autocast_enabled(): with torch.cuda.amp.autocast(enabled=False): return self._forward(hidden_states) else: return self._forward(hidden_states) ``` Since the automatic detector only reports on inputs and outputs of full frames, once you know where to look, you may want to analyse the intermediary stages of any specific `forward` function as well. In such a case you can use the
60_8_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
want to analyse the intermediary stages of any specific `forward` function as well. In such a case, you can use the `detect_overflow` helper function to inject the detector where you want it, for example: ```python from transformers.debug_utils import detect_overflow
60_8_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
class T5LayerFF(nn.Module): [...]
60_8_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, "after layer_norm") forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, "after DenseReluDense") return hidden_states + self.dropout(forwarded_states) ``` You can see that we added 2 of these, and now we track whether `inf` or `nan` for `forwarded_states` was detected somewhere in between.
60_8_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
somewhere in between. Actually, the detector already reports these because each of the calls in the example above is an `nn.Module`, but if you had some local direct calculations, this is how you'd do that. Additionally, if you're instantiating the debugger in your own code, you can adjust the number of frames printed from its default, e.g.: ```python from transformers.debug_utils import DebugUnderflowOverflow
60_8_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#underflow-and-overflow-detection
.md
debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ```
60_8_26
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#specific-batch-absolute-min-and-max-value-tracing
.md
The same debugging class can be used for per-batch tracing with the underflow/overflow detection feature turned off. Let's say you want to watch the absolute min and max values for all the ingredients of each `forward` call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ```
60_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#specific-batch-absolute-min-and-max-value-tracing
.md
```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ``` And now full batches 1 and 3 will be traced using the same format as the underflow/overflow detector does. Batches are 0-indexed. This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area. Here is a sample truncated output for such a configuration: ``` *** Starting batch number=1 *** abs min abs max metadata shared Embedding
60_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#specific-batch-absolute-min-and-max-value-tracing
.md
``` *** Starting batch number=1 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.47e+04 input[0] 5.36e-05 7.92e+02 output [...] decoder.dropout Dropout 1.60e-07 2.27e+01 input[0] 0.00e+00 2.52e+01 output decoder T5Stack not a tensor output lm_head Linear 1.01e-06 7.92e+02 weight 0.00e+00 1.11e+00 input[0] 6.06e-02 8.39e+01 output T5ForConditionalGeneration not a tensor output
60_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#specific-batch-absolute-min-and-max-value-tracing
.md
*** Starting batch number=3 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.78e+04 input[0] 5.36e-05 7.92e+02 output [...] ``` Here you will get a huge number of frames dumped - as many as there were forward calls in your model - so it may or may not be what you want, but sometimes it can be easier to use for debugging purposes than a normal debugger. For example, if
60_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/debugging.md
https://huggingface.co/docs/transformers/en/debugging/#specific-batch-absolute-min-and-max-value-tracing
.md
not be what you want, but sometimes it can be easier to use for debugging purposes than a normal debugger. For example, if a problem starts happening at batch number 150, you can dump traces for batches 149 and 150 and compare where the numbers started to diverge. You can also specify the batch number after which to stop the training, with: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) ```
60_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
61_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
61_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#create-a-custom-architecture
.md
An [`AutoClass`](model_doc/auto) automatically infers the model architecture and downloads pretrained configuration and weights. Generally, we recommend using an `AutoClass` to produce checkpoint-agnostic code. But users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This could be particularly useful for anyone who is interested in studying, training or experimenting with a 🤗 Transformers model. In this guide, dive deeper into
61_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#create-a-custom-architecture
.md
anyone who is interested in studying, training or experimenting with a 🤗 Transformers model. In this guide, dive deeper into creating a custom model without an `AutoClass`. Learn how to:
61_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#create-a-custom-architecture
.md
- Load and customize a model configuration. - Create a model architecture. - Create a slow and fast tokenizer for text. - Create an image processor for vision tasks. - Create a feature extractor for audio tasks. - Create a processor for multimodal tasks.
61_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
A [configuration](main_classes/configuration) refers to a model's specific attributes. Each model configuration has different attributes; for instance, all NLP models have the `hidden_size`, `num_attention_heads`, `num_hidden_layers` and `vocab_size` attributes in common. These attributes specify the number of attention heads or hidden layers to construct a model with. Get a closer look at [DistilBERT](model_doc/distilbert) by accessing [`DistilBertConfig`] to inspect its attributes: ```py
61_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
Get a closer look at [DistilBERT](model_doc/distilbert) by accessing [`DistilBertConfig`] to inspect its attributes: ```py >>> from transformers import DistilBertConfig
61_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
>>> config = DistilBertConfig() >>> print(config) DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ```
61_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
"seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` [`DistilBertConfig`] displays all the default attributes used to build a base [`DistilBertModel`]. All attributes are customizable, creating space for experimentation. For example, you can customize a default model to: - Try a different activation function with the `activation` parameter.
61_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
- Try a different activation function with the `activation` parameter. - Use a higher dropout ratio for the attention probabilities with the `attention_dropout` parameter. ```py >>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { "activation": "relu", "attention_dropout": 0.4, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6,
61_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
"initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` Pretrained model attributes can be modified in the [`~PretrainedConfig.from_pretrained`] function: ```py >>> my_config = DistilBertConfig.from_pretrained("distilbert/distilbert-base-uncased", activation="relu", attention_dropout=0.4)
61_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
``` Once you are satisfied with your model configuration, you can save it with [`~PretrainedConfig.save_pretrained`]. Your configuration file is stored as a JSON file in the specified save directory: ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` To reuse the configuration file, load it with [`~PretrainedConfig.from_pretrained`]: ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") ``` <Tip>
61_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#configuration
.md
```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") ``` <Tip> You can also save your configuration file as a dictionary or even just the difference between your custom configuration attributes and the default configuration attributes! See the [configuration](main_classes/configuration) documentation for more details. </Tip>
61_2_7
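As a short illustration of the tip above, and assuming the `my_config` object created earlier, the configuration can be inspected as a full dictionary or as just the attributes that differ from the defaults.

```py
# Assumes `my_config` from the examples above.
full_dict = my_config.to_dict()       # every attribute, including defaults
diff_dict = my_config.to_diff_dict()  # only the attributes that differ from the defaults
print(diff_dict)
```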
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
The next step is to create a [model](main_classes/models). The model - also loosely referred to as the architecture - defines what each layer is doing and what operations are happening. Attributes like `num_hidden_layers` from the configuration are used to define the architecture. Every model shares the base class [`PreTrainedModel`] and a few common methods like resizing input embeddings and pruning self-attention heads. In addition, all models are also either a
61_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
a few common methods like resizing input embeddings and pruning self-attention heads. In addition, all models are also either a [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) or [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html) subclass. This means models are compatible with each of their respective framework's usage.
61_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
<frameworkcontent> <pt> Load your custom configuration attributes into the model: ```py >>> from transformers import DistilBertModel
61_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") >>> model = DistilBertModel(my_config) ``` This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful yet until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training.
61_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
Create a pretrained model with [`~PreTrainedModel.from_pretrained`]: ```py >>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased") ``` When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you'd like: ```py
61_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
```py >>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config) ``` </pt> <tf> Load your custom configuration attributes into the model: ```py >>> from transformers import TFDistilBertModel
61_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") >>> tf_model = TFDistilBertModel(my_config) ``` This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful yet until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training.
61_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
Create a pretrained model with [`~TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased") ``` When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you'd like: ```py
61_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model
.md
```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config) ``` </tf> </frameworkcontent>
61_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model-heads
.md
At this point, you have a base DistilBERT model which outputs the *hidden states*. The hidden states are passed as inputs to a model head to produce the final output. 🤗 Transformers provides a different model head for each task as long as a model supports the task (i.e., you can't use DistilBERT for a sequence-to-sequence task like translation). <frameworkcontent> <pt>
61_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model-heads
.md
<frameworkcontent> <pt> For example, [`DistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. ```py >>> from transformers import DistilBertForSequenceClassification
61_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model-heads
.md
>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`DistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. ```py >>> from transformers import DistilBertForQuestionAnswering
61_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model-heads
.md
>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") ``` </pt> <tf> For example, [`TFDistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. ```py >>> from transformers import TFDistilBertForSequenceClassification
61_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model-heads
.md
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`TFDistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. ```py >>> from transformers import TFDistilBertForQuestionAnswering
61_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#model-heads
.md
>>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") ``` </tf> </frameworkcontent>
61_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#tokenizer
.md
The last base class you need before using a model for textual data is a [tokenizer](main_classes/tokenizer) to convert raw text to tensors. There are two types of tokenizers you can use with 🤗 Transformers: - [`PreTrainedTokenizer`]: a Python implementation of a tokenizer.
61_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#tokenizer
.md
- [`PreTrainedTokenizer`]: a Python implementation of a tokenizer. - [`PreTrainedTokenizerFast`]: a tokenizer from our Rust-based [🤗 Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/) library. This tokenizer type is significantly faster - especially during batch tokenization - due to its Rust implementation. The fast tokenizer also offers additional methods like *offset mapping* which maps tokens to their original words or characters.
61_5_1
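A minimal sketch of the *offset mapping* feature mentioned above, reusing the DistilBERT checkpoint from the surrounding examples; the exact offsets shown in the comment are illustrative.

```py
from transformers import DistilBertTokenizerFast

fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased")
encoding = fast_tokenizer("Hello world", return_offsets_mapping=True)
# Each tuple is the character span of a token in the original text,
# e.g. [(0, 0), (0, 5), (6, 11), (0, 0)] for [CLS], "hello", "world", [SEP].
print(encoding["offset_mapping"])
```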
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#tokenizer
.md
Both tokenizers support common methods such as encoding and decoding, adding new tokens, and managing special tokens. <Tip warning={true}> Not every model supports a fast tokenizer. Take a look at this [table](index#supported-frameworks) to check if a model has fast tokenizer support. </Tip> If you trained your own tokenizer, you can create one from your *vocabulary* file: ```py >>> from transformers import DistilBertTokenizer
61_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#tokenizer
.md
>>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left") ``` It is important to remember that the vocabulary of a custom tokenizer will be different from the vocabulary generated by a pretrained model's tokenizer. You need to use a pretrained model's vocabulary if you are using a pretrained model, otherwise the inputs won't make sense. Create a tokenizer with a pretrained model's vocabulary with the [`DistilBertTokenizer`] class: ```py
61_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#tokenizer
.md
```py >>> from transformers import DistilBertTokenizer
61_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#tokenizer
.md
>>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` Create a fast tokenizer with the [`DistilBertTokenizerFast`] class: ```py >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased") ``` <Tip> By default, [`AutoTokenizer`] will try to load a fast tokenizer. You can disable this behavior by setting `use_fast=False` in `from_pretrained`. </Tip>
61_5_5
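A short illustration of the tip above, reusing the same DistilBERT checkpoint: pass `use_fast=False` to force the slow (Python) tokenizer.

```py
from transformers import AutoTokenizer

slow_auto_tokenizer = AutoTokenizer.from_pretrained(
    "distilbert/distilbert-base-uncased", use_fast=False
)
print(slow_auto_tokenizer.is_fast)  # False
```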
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#image-processor
.md
An image processor processes vision inputs. It inherits from the base [`~image_processing_utils.ImageProcessingMixin`] class. To use, create an image processor associated with the model you're using. For example, create a default [`ViTImageProcessor`] if you are using [ViT](model_doc/vit) for image classification: ```py >>> from transformers import ViTImageProcessor
61_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#image-processor
.md
>>> vit_extractor = ViTImageProcessor() >>> print(vit_extractor) ViTImageProcessor { "do_normalize": true, "do_resize": true, "image_processor_type": "ViTImageProcessor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` <Tip> If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default image processor parameters. </Tip>
61_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#image-processor
.md
</Tip> Modify any of the [`ViTImageProcessor`] parameters to create your custom image processor: ```py >>> from transformers import ViTImageProcessor
61_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#image-processor
.md
>>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTImageProcessor { "do_normalize": false, "do_resize": true, "image_processor_type": "ViTImageProcessor", "image_mean": [ 0.3, 0.3, 0.3 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": "PIL.Image.BOX", "size": 224 } ```
61_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#backbone
.md
<div style="text-align: center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Backbone.png"> </div>
61_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#backbone
.md
</div> Computer vision models consist of a backbone, neck, and head. The backbone extracts features from an input image, the neck combines and enhances the extracted features, and the head is used for the main task (e.g., object detection). Start by initializing a backbone in the model config and specify whether you want to load pretrained weights or load randomly initialized weights. Then you can pass the model config to the model head.
61_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#backbone
.md
For example, to load a [ResNet](../model_doc/resnet) backbone into a [MaskFormer](../model_doc/maskformer) model with an instance segmentation head: <hfoptions id="backbone"> <hfoption id="pretrained weights"> Set `use_pretrained_backbone=True` to load pretrained ResNet weights for the backbone. ```py from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation
61_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#backbone
.md
config = MaskFormerConfig(backbone="microsoft/resnet-50", use_pretrained_backbone=True) # backbone and neck config model = MaskFormerForInstanceSegmentation(config) # head ``` </hfoption> <hfoption id="random weights"> Set `use_pretrained_backbone=False` to randomly initialize a ResNet backbone. ```py from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation
61_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/create_a_model.md
https://huggingface.co/docs/transformers/en/create_a_model/#backbone
.md
config = MaskFormerConfig(backbone="microsoft/resnet-50", use_pretrained_backbone=False) # backbone and neck config model = MaskFormerForInstanceSegmentation(config) # head ``` You could also load the backbone config separately and then pass it to the model config. ```py from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig
61_7_4