| column | type | details |
|:----------|:--------------|:------------------|
| source | stringclasses | 470 values |
| url | stringlengths | 49–167 |
| file_type | stringclasses | 1 value |
| chunk | stringlengths | 1–512 |
| chunk_id | stringlengths | 5–9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
380_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```python
import torch
from transformers import OPTForCausalLM

model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="sdpa")
...
```
380_5_1
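As a quick sanity check of the snippet above, the SDPA-loaded model can be used for generation like any other causal language model. Below is a minimal sketch; the prompt, the `max_new_tokens` value, and the `cuda` device are illustrative assumptions rather than part of the original example.

```python
import torch
from transformers import AutoTokenizer, OPTForCausalLM

# Load the tokenizer and the model, explicitly requesting the SDPA attention implementation.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForCausalLM.from_pretrained(
    "facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="sdpa"
).to("cuda")

# Generate a short continuation for an illustrative prompt.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```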
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#using-scaled-dot-product-attention-sdpa
.md
model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="sdpa")
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (L40S-45GB, PyTorch 2.4.0, OS Debian GNU/Linux 11) using `float16` with [facebook/opt-350m](https://huggingface.co/facebook/opt-350m), we saw the following speedups during training and inference.
380_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| batch_size | seq_len | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) |
|--------------:|-----------:|:------------------------------|-----------------------------:|:---------------|:-----------------------|----------------------:|:------------------|
380_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 1 | 128 | 0.047 | 0.037 | 26.360 | 1474.611 | 1474.32 | 0.019 |
| 1 | 256 | 0.046 | 0.037 | 24.335 | 1498.541 | 1499.49 | -0.063 |
380_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 1 | 512 | 0.046 | 0.037 | 24.959 | 1973.544 | 1551.35 | 27.215 |
| 1 | 1024 | 0.062 | 0.038 | 65.135 | 4867.113 | 1698.35 | 186.578 |
380_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 1 | 2048 | 0.230 | 0.039 | 483.933 | 15662.224 | 2715.75 | 476.718 |
| 2 | 128 | 0.045 | 0.037 | 20.455 | 1498.164 | 1499.49 | -0.089 |
380_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 2 | 256 | 0.046 | 0.037 | 24.027 | 1569.367 | 1551.35 | 1.161 |
| 2 | 512 | 0.045 | 0.037 | 20.965 | 3257.074 | 1698.35 | 91.778 |
380_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 2 | 1024 | 0.122 | 0.038 | 225.958 | 9054.405 | 2715.75 | 233.403 |
| 2 | 2048 | 0.464 | 0.067 | 593.646 | 30572.058 | 4750.55 | 543.548 |
380_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 4 | 128 | 0.045 | 0.037 | 21.918 | 1549.448 | 1551.35 | -0.123 |
| 4 | 256 | 0.044 | 0.038 | 18.084 | 2451.768 | 1698.35 | 44.361 |
380_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 4 | 512 | 0.069 | 0.037 | 84.421 | 5833.180 | 2715.75 | 114.791 |
| 4 | 1024 | 0.262 | 0.062 | 319.475 | 17427.842 | 4750.55 | 266.860 |
380_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 4 | 2048 | OOM | 0.062 | Eager OOM | OOM | 4750.55 | Eager OOM |
| 8 | 128 | 0.044 | 0.037 | 18.436 | 2049.115 | 1697.78 | 20.694 |
380_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 8 | 256 | 0.048 | 0.036 | 32.887 | 4222.567 | 2715.75 | 55.484 |
| 8 | 512 | 0.153 | 0.06 | 154.862 | 10985.391 | 4750.55 | 131.245 |
380_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#training
.md
| 8 | 1024 | 0.526 | 0.122 | 330.697 | 34175.763 | 8821.18 | 287.428 |
| 8 | 2048 | OOM | 0.122 | Eager OOM | OOM | 8821.18 | Eager OOM |
380_6_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| batch_size | seq_len | Per token latency eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) |
|--------------:|-----------:|--------------------------------:|-------------------------------:|---------------:|------------------:|---------------:|-----------------:|
| 1 | 128 | 11.634 | 8.647 | 34.546 | 717.676 | 717.674 | 0 |
380_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 1 | 256 | 11.593 | 8.86 | 30.851 | 742.852 | 742.845 | 0.001 |
| 1 | 512 | 11.515 | 8.816 | 30.614 | 798.232 | 799.593 | -0.17 |
380_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 1 | 1024 | 11.556 | 8.915 | 29.628 | 917.265 | 895.538 | 2.426 |
| 2 | 128 | 12.724 | 11.002 | 15.659 | 762.434 | 762.431 | 0 |
380_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 2 | 256 | 12.704 | 11.063 | 14.83 | 816.809 | 816.733 | 0.009 |
| 2 | 512 | 12.757 | 10.947 | 16.535 | 917.383 | 918.339 | -0.104 |
380_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 2 | 1024 | 13.018 | 11.018 | 18.147 | 1162.65 | 1114.81 | 4.291 |
| 4 | 128 | 12.739 | 10.959 | 16.243 | 856.335 | 856.483 | -0.017 |
380_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 4 | 256 | 12.718 | 10.837 | 17.355 | 957.298 | 957.674 | -0.039 |
| 4 | 512 | 12.813 | 10.822 | 18.393 | 1158.44 | 1158.45 | -0.001 |
380_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 4 | 1024 | 13.416 | 11.06 | 21.301 | 1653.42 | 1557.19 | 6.18 |
| 8 | 128 | 12.763 | 10.891 | 17.193 | 1036.13 | 1036.51 | -0.036 |
380_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 8 | 256 | 12.89 | 11.104 | 16.085 | 1236.98 | 1236.87 | 0.01 |
| 8 | 512 | 13.327 | 10.939 | 21.836 | 1642.29 | 1641.78 | 0.031 |
380_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#inference
.md
| 8 | 1024 | 15.181 | 11.175 | 35.848 | 2634.98 | 2443.35 | 7.843 |
380_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
This is the configuration class to store the configuration of an [`OPTModel`]. It is used to instantiate an OPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the OPT [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
380_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50272): Vocabulary size of the OPT model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`OPTModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the layers and the pooler layer.
380_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of decoder layers. ffn_dim (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the decoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer decoder.
380_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
Number of attention heads for each attention layer in the Transformer decoder. activation_function (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Typically set this to something large
380_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). do_layer_norm_before (`bool`, *optional*, defaults to `True`): Whether to perform layer normalization before the attention block. word_embed_proj_dim (`int`, *optional*): `word_embed_proj_dim` can be set to down-project word embeddings, *e.g.* `opt-350m`. Defaults to `hidden_size`. dropout (`float`, *optional*, defaults to 0.1):
380_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
`hidden_size`. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. init_std (`float`, *optional*, defaults to 0.02):
380_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
details. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). enable_bias (`bool`, *optional*, defaults to `True`): Whether or not the linear layers in the attention blocks should use the bias term.
380_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
Whether or not the linear layers in the attention blocks should use the bias term. layer_norm_elementwise_affine (`bool`, *optional*, defaults to `True`): Whether or not the layer norms should have learnable parameters. Example:

```python
>>> from transformers import OPTConfig, OPTModel
380_8_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optconfig
.md
>>> # Initializing an OPT facebook/opt-large style configuration
>>> configuration = OPTConfig()

>>> # Initializing a model (with random weights) from the facebook/opt-large style configuration
>>> model = OPTModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

<frameworkcontent> <pt>
380_8_8
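The documented arguments above can also be overridden to define a smaller, randomly initialized architecture. The sketch below relies only on the `OPTConfig` arguments listed in this docstring; the specific sizes are arbitrary illustrative values.

```python
from transformers import OPTConfig, OPTModel

# A smaller-than-default OPT configuration; the argument names come from the
# OPTConfig docstring above, the values are purely illustrative.
configuration = OPTConfig(
    hidden_size=512,
    num_hidden_layers=6,
    ffn_dim=2048,
    num_attention_heads=8,
)

# Build a randomly initialized model from that configuration.
model = OPTModel(configuration)
print(model.config.hidden_size)  # 512
```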
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optmodel
.md
The bare OPT Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
380_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`OPTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
380_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
380_9_2
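A minimal sketch of a forward pass through the bare model, returning the raw hidden states; the checkpoint and prompt are illustrative choices.

```python
import torch
from transformers import AutoTokenizer, OPTModel

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTModel.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Raw hidden states of the last decoder layer, shape (batch_size, sequence_length, hidden_dim).
print(outputs.last_hidden_state.shape)
```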
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforcausallm
.md
No docstring available for OPTForCausalLM Methods: forward
380_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforsequenceclassification
.md
The OPT Model transformer with a sequence classification head on top (linear layer). [`OPTForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
380_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
380_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`OPTConfig`]):
380_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforsequenceclassification
.md
and behavior. Parameters: config ([`OPTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
380_11_3
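No fine-tuned OPT classification checkpoint is referenced in this section, so the sketch below starts from the base weights with a freshly initialized classification head; the `num_labels=2` choice and the example sentence are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, OPTForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
# The classification head on top of the base weights is randomly initialized here
# and would normally be fine-tuned before its predictions are meaningful.
model = OPTForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=2)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One score per label, computed from the last non-padding token as described above.
print(logits.shape)  # torch.Size([1, 2])
```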
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforquestionanswering
.md
The OPT Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
380_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`OPTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
380_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#optforquestionanswering
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
380_12_2
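A minimal sketch of the span-prediction interface described above. The span head is randomly initialized on top of the base OPT weights and would need fine-tuning on a QA dataset such as SQuAD before the predicted spans are meaningful; the question/context pair is an illustrative assumption.

```python
import torch
from transformers import AutoTokenizer, OPTForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForQuestionAnswering.from_pretrained("facebook/opt-350m")

question = "Which lab released OPT?"
context = "OPT was released by Meta AI as a family of open pre-trained transformer language models."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Highest-scoring start and end positions of the predicted answer span.
start_index = outputs.start_logits.argmax(-1)
end_index = outputs.end_logits.argmax(-1)
print(start_index, end_index)
```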
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#tfoptmodel
.md
No docstring available for TFOPTModel Methods: call
380_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#tfoptforcausallm
.md
No docstring available for TFOPTForCausalLM Methods: call </tf> <jax>
380_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#flaxoptmodel
.md
No docstring available for FlaxOPTModel Methods: __call__
380_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md
https://huggingface.co/docs/transformers/en/model_doc/opt/#flaxoptforcausallm
.md
No docstring available for FlaxOPTForCausalLM Methods: __call__ </jax> </frameworkcontent>
380_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
381_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
381_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/#overview
.md
T5v1.1 was released in the [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) repository by Colin Raffel et al. It's an improved version of the original T5 model. This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511).
381_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/#usage-tips
.md
One can directly plug in the weights of T5v1.1 into a T5 model, like so:

```python
>>> from transformers import T5ForConditionalGeneration
381_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/#usage-tips
.md
>>> model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
```

T5 Version 1.1 includes the following improvements compared to the original T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU. See [this paper](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
381_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/#usage-tips
.md
- Pre-trained on C4 only without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

Note: T5 Version 1.1 was only pre-trained on [C4](https://huggingface.co/datasets/c4) excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5
381_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/#usage-tips
.md
training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since T5v1.1 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.

Google has released the following variants:

- [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small)
- [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base)
381_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/t5v1.1.md
https://huggingface.co/docs/transformers/en/model_doc/t5v1.1/#usage-tips
.md
- [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base)
- [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large)
- [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl)
- [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl).

<Tip>

Refer to [T5's documentation page](t5) for all API reference, tips, code examples and notebooks.

</Tip>
381_2_4
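Since the checkpoints above are pre-trained only on C4, a typical workflow is a fine-tuning step that feeds the raw input and target directly, without a task prefix. The sketch below is a minimal single training-style forward pass; the input/target pair is an illustrative assumption.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

# No "summarize:"-style task prefix is needed for single-task fine-tuning.
inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="pt")
labels = tokenizer("Owning a dog is good for you.", return_tensors="pt").input_ids

# A forward pass with labels returns the cross-entropy loss used for fine-tuning.
loss = model(**inputs, labels=labels).loss
print(loss.item())
```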
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
382_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
382_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#overview
.md
The ViTMAE model was proposed in [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377v2) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. The paper shows that, by pre-training a Vision Transformer (ViT) to reconstruct pixel values for masked patches, one can get results after fine-tuning that outperform supervised pre-training. The abstract from the paper is the following:
382_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#overview
.md
fine-tuning that outperform supervised pre-training. The abstract from the paper is the following: *This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates
382_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#overview
.md
only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs
382_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#overview
.md
enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.*
382_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#overview
.md
tasks outperforms supervised pre-training and shows promising scaling behavior.* <img src="https://user-images.githubusercontent.com/11435359/146857310-f258c86c-fde6-48e8-9cee-badd2b21bd2c.png" alt="drawing" width="600"/> <small> MAE architecture. Taken from the <a href="https://arxiv.org/abs/2111.06377">original paper.</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of the model was contributed by [sayakpaul](https://github.com/sayakpaul) and
382_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#overview
.md
[ariG23498](https://github.com/ariG23498) (equal contribution). The original code can be found [here](https://github.com/facebookresearch/mae).
382_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#usage-tips
.md
- MAE (masked autoencoding) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is relatively simple: by masking a large portion (75%) of the image patches, the model must reconstruct raw pixel values. One can use [`ViTMAEForPreTraining`] for this purpose.
- After pre-training, one "throws away" the decoder used to reconstruct pixels, and one uses the encoder for fine-tuning/linear probing. This means that after
382_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#usage-tips
.md
fine-tuning, one can directly plug in the weights into a [`ViTForImageClassification`].
- One can use [`ViTImageProcessor`] to prepare images for the model. See the code examples for more info.
- Note that the encoder of MAE is only used to encode the visual patches. The encoded patches are then concatenated with mask tokens, which the decoder (which also
382_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#usage-tips
.md
consists of Transformer blocks) takes as input. Each mask token is a shared, learned vector that indicates the presence of a missing patch to be predicted. Fixed sin/cos position embeddings are added both to the input of the encoder and the decoder.
- For a visual understanding of how MAEs work you can check out this [post](https://keras.io/examples/vision/masked_image_modeling/).
382_2_2
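A minimal sketch of the pre-training forward pass described in these tips; the COCO cat image URL is the one commonly used in the Transformers docs and is an illustrative choice.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Reconstruction loss over the masked patches, per-patch pixel predictions,
# and the binary mask telling which patches were hidden from the encoder.
print(outputs.loss, outputs.logits.shape, outputs.mask.shape)
```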
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
382_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```python
import torch
from transformers import ViTMAEModel

model = ViTMAEModel.from_pretrained("facebook/vit-mae-base", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
382_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#using-scaled-dot-product-attention-sdpa
.md
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and the `facebook/vit-mae-base` model, we saw the following speedups during inference.

| Batch size | Average inference time (ms), eager mode | Average inference time (ms), SDPA mode | Speed up, SDPA / Eager (x) |
382_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#using-scaled-dot-product-attention-sdpa
.md
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 11 | 6 | 1.83 |
| 2 | 8 | 6 | 1.33 |
382_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#using-scaled-dot-product-attention-sdpa
.md
| 4 | 8 | 6 | 1.33 |
| 8 | 8 | 6 | 1.33 |
382_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMAE.

- [`ViTMAEForPreTraining`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining), allowing you to pre-train the model from scratch/further pre-train the model on custom data.
382_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#resources
.md
- A notebook that illustrates how to visualize reconstructed pixel values with [`ViTMAEForPreTraining`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViTMAE/ViT_MAE_visualization_demo.ipynb).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
382_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
This is the configuration class to store the configuration of a [`ViTMAEModel`]. It is used to instantiate a ViT MAE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViT [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
382_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
382_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
382_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
382_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch.
382_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. decoder_num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the decoder. decoder_hidden_size (`int`, *optional*, defaults to 512):
382_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
Number of attention heads for each attention layer in the decoder. decoder_hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the decoder. decoder_num_hidden_layers (`int`, *optional*, defaults to 8): Number of hidden layers in the decoder. decoder_intermediate_size (`int`, *optional*, defaults to 2048): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the decoder. mask_ratio (`float`, *optional*, defaults to 0.75):
382_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
mask_ratio (`float`, *optional*, defaults to 0.75): The ratio of the number of masked tokens in the input sequence. norm_pix_loss (`bool`, *optional*, defaults to `False`): Whether or not to train with normalized pixels (see Table 3 in the paper). Using normalized pixels improved representation quality in the experiments of the authors. Example:

```python
>>> from transformers import ViTMAEConfig, ViTMAEModel
382_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeconfig
.md
>>> # Initializing a ViT MAE vit-mae-base style configuration
>>> configuration = ViTMAEConfig()

>>> # Initializing a model (with random weights) from the vit-mae-base style configuration
>>> model = ViTMAEModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

<frameworkcontent> <pt>
382_5_8
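Beyond the defaults, the pre-training arguments documented above can be overridden; a minimal sketch, with illustrative values for `mask_ratio` and `norm_pix_loss`.

```python
from transformers import ViTMAEConfig, ViTMAEForPreTraining

# Override two of the documented pre-training arguments; the values are illustrative.
configuration = ViTMAEConfig(mask_ratio=0.6, norm_pix_loss=True)

# Build a randomly initialized pre-training model from that configuration.
model = ViTMAEForPreTraining(configuration)
print(model.config.mask_ratio, model.config.norm_pix_loss)
```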
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaemodel
.md
The bare ViTMAE Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ViTMAEConfig`]): Model configuration class with all the parameters of the model.
382_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaemodel
.md
behavior. Parameters: config ([`ViTMAEConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
382_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeforpretraining
.md
The ViTMAE Model transformer with the decoder on top for self-supervised pre-training. <Tip> Note that we provide a script to pre-train this model on custom data in our [examples directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
382_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeforpretraining
.md
</Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ViTMAEConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
382_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#vitmaeforpretraining
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
382_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#tfvitmaemodel
.md
No docstring available for TFViTMAEModel Methods: call
382_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_mae.md
https://huggingface.co/docs/transformers/en/model_doc/vit_mae/#tfvitmaeforpretraining
.md
No docstring available for TFViTMAEForPreTraining Methods: call </tf> </frameworkcontent>
382_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
383_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
383_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#overview
.md
The I-BERT model was proposed in [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It's a quantized version of RoBERTa running inference up to four times faster. The abstract from the paper is the following: *Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language
383_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#overview
.md
*Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot
383_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#overview
.md
previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for
383_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#overview
.md
the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to
383_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#overview
.md
RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced.*
383_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#overview
.md
been open-sourced.* This model was contributed by [kssteven](https://huggingface.co/kssteven). The original code can be found [here](https://github.com/kssteven418/I-BERT).
383_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#resources
.md
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
383_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig
.md
This is the configuration class to store the configuration of an [`IBertModel`]. It is used to instantiate an I-BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the IBERT [kssteven/ibert-roberta-base](https://huggingface.co/kssteven/ibert-roberta-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
383_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ibert.md
https://huggingface.co/docs/transformers/en/model_doc/ibert/#ibertconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the I-BERT model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`IBertModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer.
383_3_1
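A minimal sketch mirroring the other configuration examples in this dump: instantiating a default `IBertConfig` and building a randomly initialized [`IBertModel`] from it.

```python
from transformers import IBertConfig, IBertModel

# Default configuration, similar to the kssteven/ibert-roberta-base architecture.
configuration = IBertConfig()

# Build a randomly initialized model from that configuration.
model = IBertModel(configuration)

# Access the configuration back from the model.
print(model.config.vocab_size, model.config.hidden_size)  # 30522 768
```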