source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md | ```
.
└── resnet_model
├── __init__.py
├── configuration_resnet.py
└── modeling_resnet.py
```
The `__init__.py` can be empty; it's just there so that Python detects that `resnet_model` can be used as a module.
<Tip warning={true}>
If copying modeling files from the library, you will need to replace all the relative imports at the top of the file
with imports from the `transformers` package.
</Tip>
Note that you can re-use (or subclass) an existing configuration/model. | 26_5_2 |
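For example, a minimal sketch of what subclassing an existing configuration could look like (the class name and the extra `pooling_mode` attribute here are hypothetical, not part of the tutorial):
```py
from transformers import BertConfig


class MyBertConfig(BertConfig):
    # Hypothetical example: reuse everything from BertConfig and only add one extra field.
    model_type = "my-bert"

    def __init__(self, pooling_mode="mean", **kwargs):
        self.pooling_mode = pooling_mode
        super().__init__(**kwargs)
```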
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md |
To share your model with the community, follow these steps: first import the ResNet model and config from the newly
created files:
```py
from resnet_model.configuration_resnet import ResnetConfig
from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification
``` | 26_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md |
Then you have to tell the library that you want to copy the code files of those objects when using the `save_pretrained`
method, and properly register them with a given Auto class (especially for models). Just run:
```py
ResnetConfig.register_for_auto_class()
ResnetModel.register_for_auto_class("AutoModel")
ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification")
``` | 26_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md |
Note that there is no need to specify an auto class for the configuration (there is only one auto class for them,
[`AutoConfig`]) but it's different for models. Your custom model could be suitable for many different tasks, so you
have to specify which one of the auto classes is the correct one for your model.
<Tip> | 26_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md |
Use `register_for_auto_class()` if you want the code files to be copied. If you instead prefer to use code on the Hub from another repo,
you don't need to call it. In cases where there's more than one auto class, you can modify the `config.json` directly using the
following structure:
```json
"auto_map": {
"AutoConfig": "<your-repo-name>--<config-name>",
"AutoModel": "<your-repo-name>--<config-name>", | 26_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md |
"AutoModelFor<Task>": "<your-repo-name>--<config-name>",
},
```
</Tip>
Next, let's create the config and models as we did before:
```py
import timm  # needed below to load the pretrained ResNet weights from timm
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d = ResnetModelForImageClassification(resnet50d_config) | 26_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md | pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```
Now to send the model to the Hub, make sure you are logged in. Either run in your terminal:
```bash
huggingface-cli login
```
or from a notebook:
```py
from huggingface_hub import notebook_login | 26_5_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md | notebook_login()
```
You can then push to your own namespace (or an organization you are a member of) like this:
```py
resnet50d.push_to_hub("custom-resnet50d")
```
In addition to the model weights and the configuration in JSON format, this also copied the modeling and
configuration `.py` files into the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result
in this [model repo](https://huggingface.co/sgugger/custom-resnet50d). | 26_5_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#sending-the-code-to-the-hub | .md |
See the [sharing tutorial](model_sharing) for more information on the push to Hub method. | 26_5_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#using-a-model-with-custom-code | .md | You can use any configuration, model or tokenizer with custom code files in its repository with the auto-classes and
the `from_pretrained` method. All files and code uploaded to the Hub are scanned for malware (refer to the [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) documentation for more information), but you should still
review the model code and author to avoid executing malicious code on your machine. Set `trust_remote_code=True` to use
a model with custom code:
```py | 26_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#using-a-model-with-custom-code | .md |
from transformers import AutoModelForImageClassification | 26_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#using-a-model-with-custom-code | .md | model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True)
```
It is also strongly encouraged to pass a commit hash as a `revision` to make sure the author of the model did not
update the code with malicious new lines (unless you fully trust the authors of the model).
```py
commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292"
model = AutoModelForImageClassification.from_pretrained( | 26_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/custom_models.md | https://huggingface.co/docs/transformers/en/custom_models/#using-a-model-with-custom-code | .md |
"sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash
)
```
Note that when browsing the commit history of the model repo on the Hub, there is a button to easily copy the commit
hash of any commit. | 26_6_3 |
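If you prefer to look up a commit hash programmatically, one option is `huggingface_hub` (a sketch assuming a recent version of the library; the repo is the example one from above):
```py
from huggingface_hub import list_repo_commits

# Most recent commit first; pin its hash as the `revision` argument of `from_pretrained`.
commits = list_repo_commits("sgugger/custom-resnet50d")
print(commits[0].commit_id)
```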
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 27_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/ | .md |
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 27_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#load-pretrained-instances-with-an-autoclass | .md | With so many different Transformer architectures, it can be challenging to create one for your checkpoint. As a part of 🤗 Transformers core philosophy to make the library easy, simple and flexible to use, an `AutoClass` automatically infers and loads the correct architecture from a given checkpoint. The `from_pretrained()` method lets you quickly load a pretrained model for any architecture so you don't have to devote time and resources to train a model from scratch. Producing this type of | 27_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#load-pretrained-instances-with-an-autoclass | .md | checkpoint-agnostic code means if your code works for one checkpoint, it will work with another checkpoint - as long as it was trained for a similar task - even if the architecture is different. | 27_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#load-pretrained-instances-with-an-autoclass | .md | <Tip>
Remember, architecture refers to the skeleton of the model and checkpoints are the weights for a given architecture. For example, [BERT](https://huggingface.co/google-bert/bert-base-uncased) is an architecture, while `google-bert/bert-base-uncased` is a checkpoint. Model is a general term that can mean either architecture or checkpoint.
</Tip>
In this tutorial, learn to:
* Load a pretrained tokenizer.
* Load a pretrained image processor
* Load a pretrained feature extractor. | 27_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#load-pretrained-instances-with-an-autoclass | .md |
* Load a pretrained processor.
* Load a pretrained model.
* Load a model as a backbone. | 27_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autotokenizer | .md | Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model.
Load a tokenizer with [`AutoTokenizer.from_pretrained`]:
```py
>>> from transformers import AutoTokenizer | 27_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autotokenizer | .md | >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
```
Then tokenize your input as shown below:
```py
>>> sequence = "In a hole in the ground there lived a hobbit."
>>> print(tokenizer(sequence))
{'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
``` | 27_2_1 |
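If you want to sanity-check the mapping, you can decode the input ids back to text (an illustrative snippet; the exact string depends on the tokenizer's special tokens and lowercasing):
```py
>>> encoded = tokenizer(sequence)
>>> tokenizer.decode(encoded["input_ids"])
'[CLS] in a hole in the ground there lived a hobbit. [SEP]'
```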
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autoimageprocessor | .md | For vision tasks, an image processor processes the image into the correct input format.
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
``` | 27_3_0 |
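As a rough illustration of what the image processor produces (the image URL is the COCO sample used elsewhere in the docs; the exact output shape depends on the checkpoint's preprocessing config):
```py
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = image_processor(image, return_tensors="pt")
>>> list(inputs["pixel_values"].shape)
[1, 3, 224, 224]
```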
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autobackbone | .md | <div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Swin%20Stages.png">
<figcaption class="mt-2 text-center text-sm text-gray-500">A Swin backbone with multiple stages for outputting a feature map.</figcaption>
</div> | 27_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autobackbone | .md |
The [`AutoBackbone`] lets you use pretrained models as backbones to get feature maps from different stages of the backbone. You should specify one of the following parameters in [`~PretrainedConfig.from_pretrained`]:
* `out_indices` is the index of the layer you'd like to get the feature map from
* `out_features` is the name of the layer you'd like to get the feature map from | 27_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autobackbone | .md |
These parameters can be used interchangeably, but if you use both, make sure they're aligned with each other! If you don't pass any of these parameters, the backbone returns the feature map from the last layer.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Swin%20Stage%201.png"> | 27_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autobackbone | .md |
<figcaption class="mt-2 text-center text-sm text-gray-500">A feature map from the first stage of the backbone. The patch partition refers to the model stem.</figcaption>
</div>
For example, in the above diagram, to return the feature map from the first stage of the Swin backbone, you can set `out_indices=(1,)`:
```py
>>> from transformers import AutoImageProcessor, AutoBackbone | 27_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autobackbone | .md |
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
>>> model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_indices=(1,)) | 27_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autobackbone | .md | >>> inputs = processor(image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> feature_maps = outputs.feature_maps
```
Now you can access the `feature_maps` object from the first stage of the backbone:
```py
>>> list(feature_maps[0].shape)
[1, 96, 56, 56]
``` | 27_4_5 |
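Equivalently, the same stage can usually be selected by name with `out_features` (stage names vary per model; this example assumes the Swin naming, which you can verify via `model.config.stage_names`):
```py
>>> model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_features=["stage1"])
>>> model.out_features
['stage1']
```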
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autofeatureextractor | .md | For audio tasks, a feature extractor processes the audio signal into the correct input format.
Load a feature extractor with [`AutoFeatureExtractor.from_pretrained`]:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
``` | 27_5_0 |
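To illustrate the expected input, here is a small sketch using a dummy one-second waveform at 16 kHz (real audio would come from a dataset or an audio file; the output key assumes a wav2vec2-style feature extractor):
```py
>>> import numpy as np

>>> dummy_audio = np.zeros(16000, dtype=np.float32)  # 1 second of silence at 16 kHz
>>> inputs = feature_extractor(dummy_audio, sampling_rate=16000, return_tensors="pt")
>>> list(inputs["input_values"].shape)
[1, 16000]
```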
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#autoprocessor | .md | Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the [LayoutLMV2](model_doc/layoutlmv2) model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them.
Load a processor with [`AutoProcessor.from_pretrained`]:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
``` | 27_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | <frameworkcontent>
<pt>
The `AutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`AutoModelForSequenceClassification.from_pretrained`].
> [!WARNING] | 27_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md |
> By default, the weights are loaded in full precision (torch.float32) regardless of the data type the weights are actually stored in, such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file, which automatically picks the most memory-optimal data type.
```py
>>> from transformers import AutoModelForSequenceClassification | 27_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased", torch_dtype="auto")
```
Easily reuse the same checkpoint to load an architecture for a different task:
```py
>>> from transformers import AutoModelForTokenClassification | 27_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | >>> model = AutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased", torch_dtype="auto")
```
<Tip warning={true}> | 27_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | For PyTorch models, the `from_pretrained()` method uses `torch.load()` which internally uses `pickle` and is known to be insecure. In general, never load a model that could have come from an untrusted source, or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are [scanned for malware](https://huggingface.co/docs/hub/security-malware) at each commit. See the [Hub documentation](https://huggingface.co/docs/hub/security) for | 27_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | best practices like [signed commit verification](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) with GPG. | 27_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the `from_tf` and `from_flax` kwargs for the `from_pretrained` method to circumvent this issue.
</Tip> | 27_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md |
Generally, we recommend using the `AutoTokenizer` class and the `AutoModelFor` class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
</pt>
<tf> | 27_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md |
Finally, the `TFAutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`TFAutoModelForSequenceClassification.from_pretrained`]:
```py
>>> from transformers import TFAutoModelForSequenceClassification | 27_7_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```
Easily reuse the same checkpoint to load an architecture for a different task:
```py
>>> from transformers import TFAutoModelForTokenClassification | 27_7_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/autoclass_tutorial.md | https://huggingface.co/docs/transformers/en/autoclass_tutorial/#automodel | .md | >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased")
```
Generally, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
</tf>
</frameworkcontent> | 27_7_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 28_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/ | .md | specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 28_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#cpu-inference | .md | With some optimizations, it is possible to efficiently run large model inference on a CPU. One of these optimization techniques involves compiling the PyTorch code into an intermediate format for high-performance environments like C++. The other technique fuses multiple operations into one kernel to reduce the overhead of running each operation separately. | 28_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#cpu-inference | .md | You'll learn how to use [BetterTransformer](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) for faster inference, and how to convert your PyTorch code to [TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html). If you're using an Intel CPU, you can also use [graph optimizations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features.html#graph-optimization) from [Intel Extension for | 28_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#cpu-inference | .md | PyTorch](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/index.html) to boost inference speed even more. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime or OpenVINO (if you're using an Intel CPU). | 28_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#bettertransformer | .md | BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are:
1. fusion, which combines multiple sequential operations into a single "kernel" to reduce the number of computation steps
2. skipping the inherent sparsity of padding tokens to avoid unnecessary computation with nested tensors | 28_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#bettertransformer | .md |
BetterTransformer also converts all attention operations to use the more memory-efficient [scaled dot product attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention).
<Tip> | 28_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#bettertransformer | .md |
BetterTransformer is not supported for all models. Check this [list](https://huggingface.co/docs/optimum/bettertransformer/overview#supported-models) to see if a model supports BetterTransformer.
</Tip>
Before you start, make sure you have 🤗 Optimum [installed](https://huggingface.co/docs/optimum/installation).
Enable BetterTransformer with the [`PreTrainedModel.to_bettertransformer`] method:
```py
from transformers import AutoModelForCausalLM | 28_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#bettertransformer | .md | model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", torch_dtype="auto")
model = model.to_bettertransformer()
``` | 28_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#torchscript | .md | TorchScript is an intermediate PyTorch model representation that can be run in production environments where performance is important. You can train a model in PyTorch and then export it to TorchScript to free the model from Python performance constraints. PyTorch [traces](https://pytorch.org/docs/stable/generated/torch.jit.trace.html) a model to return a [`ScriptFunction`] that is optimized with just-in-time compilation (JIT). Compared to the default eager mode, JIT mode in PyTorch typically yields better | 28_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#torchscript | .md | performance for inference using optimization techniques like operator fusion. | 28_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#torchscript | .md | For a gentle introduction to TorchScript, see the [Introduction to PyTorch TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html) tutorial.
With the [`Trainer`] class, you can enable JIT mode for CPU inference by setting the `--jit_mode_eval` flag:
```bash
python examples/pytorch/question-answering/run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \ | 28_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#torchscript | .md |
--no_cuda \
--jit_mode_eval
```
<Tip warning={true}>
For PyTorch >= 1.14.0, JIT-mode could benefit any model for prediction and evaluation since the dict input is supported in `jit.trace`. | 28_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#torchscript | .md | For PyTorch < 1.14.0, JIT-mode could benefit a model if its forward parameter order matches the tuple input order in `jit.trace`, as in a question-answering model. If the forward parameter order does not match the tuple input order in `jit.trace`, as in a text classification model, `jit.trace` fails; the exception is caught so the model falls back to eager mode, and a log message notifies the user.
</Tip> | 28_3_4 |
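Outside of [`Trainer`], tracing a model yourself looks roughly like the following sketch (the checkpoint and example inputs are illustrative, not from the original guide):
```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
# torchscript=True makes the model return tuples instead of ModelOutput objects, which tracing requires.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", torchscript=True
)
model.eval()

inputs = tokenizer("A sample sentence.", return_tensors="pt")
# Trace with a tuple of example tensors; the traced module then expects positional inputs.
traced_model = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))

with torch.no_grad():
    logits = traced_model(inputs["input_ids"], inputs["attention_mask"])[0]
```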
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#ipex-graph-optimization | .md | Intel® Extension for PyTorch (IPEX) provides further optimizations in JIT mode for Intel CPUs, and we recommend combining it with TorchScript for even faster performance. The IPEX [graph optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html) fuses operations like Multi-head attention, Concat Linear, Linear + Add, Linear + Gelu, Add + LayerNorm, and more. | 28_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#ipex-graph-optimization | .md | To take advantage of these graph optimizations, make sure you have IPEX [installed](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html):
```bash
pip install intel_extension_for_pytorch
```
Set the `--use_ipex` and `--jit_mode_eval` flags in the [`Trainer`] class to enable JIT mode with the graph optimizations:
```bash
python examples/pytorch/question-answering/run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \ | 28_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#ipex-graph-optimization | .md |
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
--use_ipex \
--jit_mode_eval
``` | 28_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#-optimum | .md | <Tip>
Learn more details about using ORT with 🤗 Optimum in the [Optimum Inference with ONNX Runtime](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models) guide. This section only provides a brief and simple example.
</Tip> | 28_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#-optimum | .md |
ONNX Runtime (ORT) is a model accelerator that runs inference on CPUs by default. ORT is supported by 🤗 Optimum which can be used in 🤗 Transformers, without making too many changes to your code. You only need to replace the 🤗 Transformers `AutoClass` with its equivalent [`~optimum.onnxruntime.ORTModel`] for the task you're solving, and load a checkpoint in the ONNX format. | 28_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#-optimum | .md | For example, if you're running inference on a question answering task, load the [optimum/roberta-base-squad2](https://huggingface.co/optimum/roberta-base-squad2) checkpoint which contains a `model.onnx` file:
```py
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering | 28_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#-optimum | .md | model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer) | 28_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_cpu.md | https://huggingface.co/docs/transformers/en/perf_infer_cpu/#-optimum | .md | question = "What's my name?"
context = "My name is Philipp and I live in Nuremberg."
pred = onnx_qa(question, context)
```
If you have an Intel CPU, take a look at 🤗 [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) which supports a variety of compression techniques (quantization, pruning, knowledge distillation) and tools for converting models to the [OpenVINO](https://huggingface.co/docs/optimum/intel/inference) format for higher performance inference. | 28_5_4 |
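As a quick sketch of what the OpenVINO path can look like (assumes `optimum[openvino]` is installed; the checkpoint is illustrative):
```py
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# export=True converts the PyTorch checkpoint to the OpenVINO IR format on the fly.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("CPU inference with OpenVINO is fast."))
```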
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/bertology.md | https://huggingface.co/docs/transformers/en/bertology/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 29_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/bertology.md | https://huggingface.co/docs/transformers/en/bertology/ | .md |
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 29_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/bertology.md | https://huggingface.co/docs/transformers/en/bertology/#bertology | .md | There is a growing field of study concerned with investigating the inner workings of large-scale transformers like BERT
(that some call "BERTology"). Some good examples of this field are:
- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick:
https://arxiv.org/abs/1905.05950
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650 | 29_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/bertology.md | https://huggingface.co/docs/transformers/en/bertology/#bertology | .md |
- What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D.
Manning: https://arxiv.org/abs/1906.04341
- CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://arxiv.org/abs/2210.04633 | 29_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/bertology.md | https://huggingface.co/docs/transformers/en/bertology/#bertology | .md | In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to
help people access the inner representations, mainly adapted from the great work of Paul Michel
(https://arxiv.org/abs/1905.10650):
- accessing all the hidden-states of BERT/GPT/GPT-2,
- accessing all the attention weights for each head of BERT/GPT/GPT-2,
- retrieving head output values and gradients to be able to compute head importance scores and prune heads as explained | 29_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/bertology.md | https://huggingface.co/docs/transformers/en/bertology/#bertology | .md |
in https://arxiv.org/abs/1905.10650.
To help you understand and use these features, we have added a specific example script: [bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py) which extracts information from and prunes a model pre-trained on
GLUE. | 29_1_3 |
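As a small sketch of how these internals are typically accessed (the checkpoint and the pruned head indices are illustrative):
```py
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModel.from_pretrained(
    "google-bert/bert-base-uncased", output_hidden_states=True, output_attentions=True
)

inputs = tokenizer("BERTology studies the inner workings of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(len(outputs.hidden_states))   # embedding output + one hidden state per layer
print(outputs.attentions[0].shape)  # (batch_size, num_heads, seq_len, seq_len)

# Heads can also be pruned, e.g. drop heads 0 and 2 of layer 0:
model.prune_heads({0: [0, 2]})
```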
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/ | .md | <!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | 30_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/ | .md |
See the License for the specific language governing permissions and
limitations under the License.
--> | 30_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#-transformers-notebooks | .md | Here you can find a list of the official notebooks provided by Hugging Face.
We would also like to list interesting content created by the community.
If you wrote some notebook(s) leveraging 🤗 Transformers and would like to be listed here, please open a
Pull Request so it can be included under the Community notebooks. | 30_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#documentation-notebooks | .md | You can open any page of the documentation as a notebook in Colab (there is a button directly on said pages) but they are also listed here if you need them:
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:| | 30_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#documentation-notebooks | .md | | [Quicktour of the library](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb) | A presentation of the various APIs in Transformers |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/en/transformers_doc/quicktour.ipynb)| | 30_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#documentation-notebooks | .md | | [Summary of the tasks](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb) | How to run the models of the Transformers library task by task |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)| | 30_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#documentation-notebooks | .md | | [Preprocessing data](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb) | How to use a tokenizer to preprocess your data |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)| | 30_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#documentation-notebooks | .md | | [Fine-tuning a pretrained model](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb) | How to use the Trainer to fine-tune a pretrained model |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)| | 30_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#documentation-notebooks | .md | | [Summary of the tokenizers](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb) | The differences between the tokenizers algorithm |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)| | 30_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#documentation-notebooks | .md | | [Multilingual models](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb) | How to use the multilingual models of the library |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)| | 30_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | Notebook | Description | | |
|:----------|:-------------|:-------------|------:| | 30_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| | 30_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb) | How to easily start using transformers |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)| | 30_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| | 30_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| | 30_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| | 30_3_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| | 30_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/notebooks.md | https://huggingface.co/docs/transformers/en/notebooks/#natural-language-processingpytorch-nlp | .md | | [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| | 30_3_14 |