Dataset columns: source (string, 470 distinct values) · url (string, 49–167 chars) · file_type (string, 1 value) · chunk (string, 1–512 chars) · chunk_id (string, 5–9 chars)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1imageprocessor
.md
`preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the image. If the input size is smaller than `crop_size` along any edge, the image is padded with 0's and then center cropped. Can be overridden by the `do_center_crop` parameter in the `preprocess` method. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Desired output size when applying center-cropping. Only has an effect if `do_center_crop` is set to `True`.
122_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1imageprocessor
.md
Desired output size when applying center-cropping. Only has an effect if `do_center_crop` is set to `True`. Can be overridden by the `crop_size` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
122_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1imageprocessor
.md
parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize: Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
122_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1imageprocessor
.md
method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
122_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1imageprocessor
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. Methods: preprocess
122_6_6
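The `preprocess` arguments documented in this record can also be overridden per call. A minimal sketch, assuming the `google/mobilenet_v1_1.0_224` checkpoint and a COCO sample image (both assumptions, not stated in the chunk above):

```python
import requests
from PIL import Image
from transformers import MobileNetV1ImageProcessor

# Assumed checkpoint; any MobileNetV1 checkpoint with a preprocessor config works the same way.
image_processor = MobileNetV1ImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
image = Image.open(requests.get(url, stream=True).raw)

# Defaults: resize, center crop to 224x224, rescale by 1/255, normalize with ImageNet stats.
inputs = image_processor(images=image, return_tensors="pt")

# Per-call overrides mirror the constructor arguments described above.
inputs = image_processor(
    images=image,
    do_center_crop=True,
    crop_size={"height": 192, "width": 192},
    return_tensors="pt",
)
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 192, 192])
```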
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1model
.md
The bare MobileNetV1 model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MobileNetV1Config`]): Model configuration class with all the parameters of the model.
122_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1model
.md
behavior. Parameters: config ([`MobileNetV1Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
122_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1forimageclassification
.md
MobileNetV1 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MobileNetV1Config`]): Model configuration class with all the parameters of the model.
122_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md
https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/#mobilenetv1forimageclassification
.md
behavior. Parameters: config ([`MobileNetV1Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
122_8_1
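For context, a short inference sketch for `MobileNetV1ForImageClassification`, again assuming the `google/mobilenet_v1_1.0_224` checkpoint and a COCO sample image:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV1ForImageClassification

checkpoint = "google/mobilenet_v1_1.0_224"  # assumed checkpoint
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileNetV1ForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head predicts one of the ImageNet classes.
print(model.config.id2label[logits.argmax(-1).item()])
```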
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/
.md
<!--Copyright 2024 JetMoe team and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
123_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
123_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#overview
.md
**JetMoe-8B** is an 8B Mixture-of-Experts (MoE) language model developed by [Yikang Shen](https://scholar.google.com.hk/citations?user=qff5rRYAAAAJ) and [MyShell](https://myshell.ai/). The JetMoe project aims to provide LLaMA2-level performance from an efficient language model trained on a limited budget. To achieve this goal, JetMoe uses a sparsely activated architecture inspired by [ModuleFormer](https://arxiv.org/abs/2306.04640).
123_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#overview
.md
Each JetMoe block consists of two MoE layers: Mixture of Attention Heads and Mixture of MLP Experts. Given the input tokens, it activates a subset of its experts to process them. This sparse activation scheme enables JetMoe to achieve much better training throughput than dense models of a similar size. The training throughput of JetMoe-8B is around 100B tokens per day on a cluster of 96 H100 GPUs with a straightforward 3-way pipeline parallelism strategy.
123_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#overview
.md
This model was contributed by [Yikang Shen](https://huggingface.co/YikangS).
123_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
This is the configuration class to store the configuration of a [`JetMoeModel`]. It is used to instantiate a JetMoe model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the JetMoe-4B model ([jetmoe/jetmoe-8b](https://huggingface.co/jetmoe/jetmoe-8b)). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
123_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the JetMoe model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`JetMoeModel`] hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden representations.
123_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden representations. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_key_value_heads (`int`, *optional*, defaults to 16): Number of attention heads for each key and value in the Transformer encoder. kv_channels (`int`, *optional*, defaults to 128): Defines the number of channels for the key and value tensors. intermediate_size (`int`, *optional*, defaults to 5632):
123_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
Defines the number of channels for the key and value tensors. intermediate_size (`int`, *optional*, defaults to 5632): Dimension of the MLP representations. max_position_embeddings (`int`, *optional*, defaults to 4096): The maximum sequence length that this model might ever be used with. JetMoe's attention allows sequences of up to 4096 tokens. activation_function (`string`, *optional*, defaults to `"silu"`): Defines the activation function for MLP experts.
123_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
activation_function (`string`, *optional*, defaults to `"silu"`): Defines the activation function for MLP experts. num_local_experts (`int`, *optional*, defaults to 8): Defines the number of experts in the MoE and MoA. num_experts_per_tok (`int`, *optional*, defaults to 2): The number of experts to route each token to, for both MoE and MoA. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not the router logits should be returned by the model. Enabling this will also
123_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss. aux_loss_coef (`float`, *optional*, defaults to 0.01): The coefficient for the auxiliary loss. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. bos_token_id (`int`, *optional*, defaults to 1):
123_2_5
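A minimal sketch of the two router-related arguments described above (`output_router_logits` and `aux_loss_coef`), using a freshly constructed config; the values shown are illustrative:

```python
from transformers import JetMoeConfig

# Returning router logits also lets the model report the auxiliary load-balancing loss.
config = JetMoeConfig(
    output_router_logits=True,
    aux_loss_coef=0.01,  # docstring default, shown explicitly here
)
print(config.output_router_logits, config.aux_loss_coef)
```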
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
relevant if `config.is_decoder=True`. bos_token_id (`int`, *optional*, defaults to 1): The id of the "beginning-of-sequence" token. eos_token_id (`int`, *optional*, defaults to 2): The id of the "end-of-sequence" token. tie_word_embeddings (`bool`, *optional*, defaults to `True`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rms_norm_eps (`float`, *optional*, defaults to 1e-06):
123_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
The base period of the RoPE embeddings. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. initializer_range (`float`, *optional*, defaults to 0.01): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. ```python >>> from transformers import JetMoeModel, JetMoeConfig
123_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeconfig
.md
>>> # Initializing a JetMoe 4B style configuration >>> configuration = JetMoeConfig() >>> # Initializing a model from the JetMoe 4B style configuration >>> model = JetMoeModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
123_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoemodel
.md
The bare JetMoe Model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`JetMoeConfig`]): Model configuration class with all the parameters of the model.
123_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoemodel
.md
behavior. Parameters: config ([`JetMoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`JetMoeBlock`] Args: config: JetMoeConfig Methods: forward
123_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeforcausallm
.md
No docstring available for JetMoeForCausalLM Methods: forward
123_4_0
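Since no docstring is available for `JetMoeForCausalLM`, here is a hedged generation sketch, assuming the `jetmoe/jetmoe-8b` checkpoint referenced in the config docs:

```python
import torch
from transformers import AutoTokenizer, JetMoeForCausalLM

checkpoint = "jetmoe/jetmoe-8b"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# device_map="auto" requires `accelerate` to be installed.
model = JetMoeForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The JetMoe architecture is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```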
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeforsequenceclassification
.md
The JetMoe Model transformer with a sequence classification head on top (linear layer). [`JetMoeForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
123_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
123_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeforsequenceclassification
.md
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`JetMoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
123_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jetmoe.md
https://huggingface.co/docs/transformers/en/model_doc/jetmoe/#jetmoeforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
123_5_3
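A hedged sketch of the last-token classification behaviour described above, including the `pad_token_id` requirement; the checkpoint and label count are assumptions, and the classification head is untrained until fine-tuned:

```python
from transformers import AutoTokenizer, JetMoeForSequenceClassification

checkpoint = "jetmoe/jetmoe-8b"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = JetMoeForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# The model classifies on the last non-padding token, so padding must be identifiable.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
if model.config.pad_token_id is None:
    model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(["great model", "not my favourite"], padding=True, return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (batch_size, num_labels)
```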
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
124_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
124_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#overview
.md
The XLM-RoBERTa-XL model was proposed in [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. The abstract from the paper is the following:
124_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#overview
.md
*Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This
124_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#overview
.md
RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.*
124_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#overview
.md
This model was contributed by [Soonhwan-Kwon](https://github.com/Soonhwan-Kwon) and [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
124_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#usage-tips
.md
XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids.
124_2_0
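A short fill-mask sketch illustrating that no `lang` tensors are needed; the checkpoint name is taken from the config section below:

```python
from transformers import pipeline

# XLM-RoBERTa-XL infers the language from the input ids alone.
unmasker = pipeline("fill-mask", model="facebook/xlm-roberta-xl")
print(unmasker("Paris is the <mask> of France.")[0]["token_str"])
```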
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
124_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
This is the configuration class to store the configuration of an [`XLMRobertaXLModel`] or a [`TFXLMRobertaXLModel`]. It is used to instantiate an XLM_ROBERTA_XL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XLM_ROBERTA_XL [facebook/xlm-roberta-xl](https://huggingface.co/facebook/xlm-roberta-xl) architecture.
124_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
XLM_ROBERTA_XL [facebook/xlm-roberta-xl](https://huggingface.co/facebook/xlm-roberta-xl) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 250880): Vocabulary size of the XLM_ROBERTA_XL model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`XLMRobertaXLModel`].
124_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
by the `inputs_ids` passed when calling [`XLMRobertaXLModel`]. hidden_size (`int`, *optional*, defaults to 2560): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 36): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 10240):
124_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
intermediate_size (`int`, *optional*, defaults to 10240): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
124_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
`"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 514): The maximum sequence length that this model might ever be used with. Typically set this to something large
124_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 1): The vocabulary size of the `token_type_ids` passed when calling [`XLMRobertaXLModel`] or [`TFXLMRobertaXLModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
124_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
124_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). use_cache (`bool`, *optional*, defaults to `True`):
124_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. Examples: ```python >>> from transformers import XLMRobertaXLConfig, XLMRobertaXLModel
124_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlconfig
.md
>>> # Initializing an XLM_ROBERTA_XL facebook/xlm-roberta-xl style configuration >>> configuration = XLMRobertaXLConfig() >>> # Initializing a model (with random weights) from the facebook/xlm-roberta-xl style configuration >>> model = XLMRobertaXLModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
124_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlmodel
.md
The bare XLM-RoBERTa-XL Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
124_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
124_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlmodel
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is
124_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlmodel
.md
cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
124_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlmodel
.md
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
124_5_4
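A minimal sketch of the decoder setup described above; toy sizes are used so the randomly initialized model stays small, and real use would load pretrained weights with the same flags:

```python
from transformers import XLMRobertaXLConfig, XLMRobertaXLModel

# is_decoder enables causal masking; add_cross_attention inserts cross-attention layers.
config = XLMRobertaXLConfig(
    hidden_size=64, num_hidden_layers=2, num_attention_heads=4, intermediate_size=128,  # toy sizes
    is_decoder=True,
    add_cross_attention=True,
)
decoder = XLMRobertaXLModel(config)
# In a Seq2Seq setup, `encoder_hidden_states` is then passed to decoder.forward(...).
```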
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforcausallm
.md
XLM-RoBERTa-XL Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
124_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
124_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforcausallm
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
124_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlformaskedlm
.md
XLM-RoBERTa-XL Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to
124_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlformaskedlm
.md
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
124_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforsequenceclassification
.md
XLM-RoBERTa-XL Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
124_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
124_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforsequenceclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
124_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlformultiplechoice
.md
XLM-RoBERTa-XL Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
124_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
124_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlformultiplechoice
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
124_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlfortokenclassification
.md
XLM-RoBERTa-XL Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
124_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
124_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlfortokenclassification
.md
model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
124_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforquestionanswering
.md
XLM-RoBERTa-XL Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
124_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforquestionanswering
.md
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the
124_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-roberta-xl.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta-xl/#xlmrobertaxlforquestionanswering
.md
Parameters: config ([`XLMRobertaXLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
124_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
125_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
125_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#overview
.md
The BiT model was proposed in [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. BiT is a simple recipe for scaling up pre-training of [ResNet](resnet)-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning. The abstract from the paper is the following:
125_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#overview
.md
*Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs
125_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#overview
.md
selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct
125_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#overview
.md
BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.*
125_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#overview
.md
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/big_transfer).
125_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#usage-tips
.md
- BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by [group normalization](https://arxiv.org/abs/1803.08494), 2) [weight standardization](https://arxiv.org/abs/1903.10520) is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant impact on transfer learning.
125_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT. <PipelineTag pipeline="image-classification"/> - [`BitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
125_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#resources
.md
- See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
125_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
This is the configuration class to store the configuration of a [`BitModel`]. It is used to instantiate a BiT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BiT [google/bit-50](https://huggingface.co/google/bit-50) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
125_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_channels (`int`, *optional*, defaults to 3): The number of input channels. embedding_size (`int`, *optional*, defaults to 64): Dimensionality (hidden size) for the embedding layer. hidden_sizes (`List[int]`, *optional*, defaults to `[256, 512, 1024, 2048]`): Dimensionality (hidden size) at each stage.
125_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
hidden_sizes (`List[int]`, *optional*, defaults to `[256, 512, 1024, 2048]`): Dimensionality (hidden size) at each stage. depths (`List[int]`, *optional*, defaults to `[3, 4, 6, 3]`): Depth (number of layers) for each stage. layer_type (`str`, *optional*, defaults to `"preactivation"`): The layer to use, it can be either `"preactivation"` or `"bottleneck"`. hidden_act (`str`, *optional*, defaults to `"relu"`):
125_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
The layer to use, it can be either `"preactivation"` or `"bottleneck"`. hidden_act (`str`, *optional*, defaults to `"relu"`): The non-linear activation function in each block. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. global_padding (`str`, *optional*): Padding strategy to use for the convolutional layers. Can be either `"valid"`, `"same"`, or `None`. num_groups (`int`, *optional*, defaults to 32): Number of groups used for the `BitGroupNormActivation` layers.
125_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
num_groups (`int`, *optional*, defaults to 32): Number of groups used for the `BitGroupNormActivation` layers. drop_path_rate (`float`, *optional*, defaults to 0.0): The drop path rate for the stochastic depth. embedding_dynamic_padding (`bool`, *optional*, defaults to `False`): Whether or not to make use of dynamic padding for the embedding layer. output_stride (`int`, *optional*, defaults to 32): The output stride of the model. width_factor (`int`, *optional*, defaults to 1):
125_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
The output stride of the model. width_factor (`int`, *optional*, defaults to 1): The width factor for the model. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the
125_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the
125_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. Example: ```python >>> from transformers import BitConfig, BitModel
125_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitconfig
.md
>>> # Initializing a BiT bit-50 style configuration >>> configuration = BitConfig() >>> # Initializing a model (with random weights) from the bit-50 style configuration >>> model = BitModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
125_4_8
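The `out_features`/`out_indices` arguments documented above are used by the backbone API. A hedged sketch with random weights (`BitBackbone` is assumed to be the relevant class; `from_pretrained("google/bit-50")` would load real weights instead):

```python
import torch
from transformers import BitConfig, BitBackbone

config = BitConfig(out_features=["stage2", "stage4"])
backbone = BitBackbone(config)  # random weights, for the sketch only

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
for name, feature_map in zip(backbone.out_features, outputs.feature_maps):
    print(name, feature_map.shape)
```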
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitimageprocessor
.md
Constructs a BiT image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`): Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
125_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitimageprocessor
.md
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`):
125_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitimageprocessor
.md
do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the `preprocess` method. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
125_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitimageprocessor
.md
method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method. do_normalize: Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
125_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitimageprocessor
.md
method. do_normalize: Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `OPENAI_CLIP_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `OPENAI_CLIP_STD`):
125_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitimageprocessor
.md
image_std (`float` or `List[float]`, *optional*, defaults to `OPENAI_CLIP_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. Methods: preprocess
125_5_5
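A small sketch of the defaults described above, using a directly constructed processor and a synthetic image; loading a checkpoint's preprocessor config via `from_pretrained` is the usual path:

```python
import numpy as np
from transformers import BitImageProcessor

# Defaults per the docstring: resize, 224x224 center crop, 1/255 rescale, CLIP-style mean/std.
image_processor = BitImageProcessor()
dummy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # synthetic HxWxC image

inputs = image_processor(images=dummy, return_tensors="np")
print(inputs["pixel_values"].shape)  # (1, 3, 224, 224)
```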
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitmodel
.md
The bare BiT model outputting raw features without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BitConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
125_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
125_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitforimageclassification
.md
BiT Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BitConfig`]): Model configuration class with all the parameters of the model.
125_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bit.md
https://huggingface.co/docs/transformers/en/model_doc/bit/#bitforimageclassification
.md
behavior. Parameters: config ([`BitConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
125_7_1
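An end-to-end classification sketch for `BitForImageClassification`, assuming the `google/bit-50` checkpoint mentioned in the config docs and a COCO sample image:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, BitForImageClassification

checkpoint = "google/bit-50"  # assumed checkpoint
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = BitForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```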
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics.md
https://huggingface.co/docs/transformers/en/model_doc/idefics/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
126_0_0