Columns: source (string, 470 distinct values) · url (string, length 49–167) · file_type (string, 1 distinct value) · chunk (string, length 1–512) · chunk_id (string, length 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
.md
such as 512x512 or 640x640, after which they are normalized. - One additional thing to keep in mind is that one can initialize [`SegformerImageProcessor`] with `do_reduce_labels` set to `True` or `False`. In some datasets (like ADE20k), the 0 index is used in the annotated segmentation maps for background. However, ADE20k doesn't include the "background" class in its 150 labels. Therefore, `do_reduce_labels` is used to reduce all labels by 1, and to make sure no loss is computed for the
217_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
.md
Therefore, `do_reduce_labels` is used to reduce all labels by 1, and to make sure no loss is computed for the background class (i.e. it replaces 0 in the annotated maps by 255, which is the *ignore_index* of the loss function used by [`SegformerForSemanticSegmentation`]). However, other datasets use the 0 index as background class and include this class as part of all labels. In that case, `do_reduce_labels` should be set to `False`, as loss should also be computed for the background class.
217_2_7
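As a minimal sketch of the tip above (not part of the original docs), the two `do_reduce_labels` settings look like this:

```python
from transformers import SegformerImageProcessor

# ADE20k-style datasets: index 0 marks background and is not one of the 150 classes,
# so labels are shifted down by 1 and former 0s become 255 (the loss's ignore_index).
ade_processor = SegformerImageProcessor(do_reduce_labels=True)

# Datasets that treat background as a regular class with label 0: keep labels as-is
# so the loss is also computed for the background class.
other_processor = SegformerImageProcessor(do_reduce_labels=False)
```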
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
.md
`False`, as loss should also be computed for the background class. - Like most models, SegFormer comes in different sizes, the details of which can be found in the table below (taken from Table 7 of the [original paper](https://arxiv.org/abs/2105.15203)).

| **Model variant** | **Depths** | **Hidden sizes** | **Decoder hidden size** | **Params (M)** | **ImageNet-1k Top 1** |
| :---------------: | ------------- | ------------------- | :---------------------: | :------------: | :-------------------: |
217_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
.md
| :---------------: | ------------- | ------------------- | :---------------------: | :------------: | :-------------------: |
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
217_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
.md
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
217_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#usage-tips
.md
Note that MiT in the above table refers to the Mix Transformer encoder backbone introduced in SegFormer. For SegFormer's results on segmentation datasets such as ADE20k, refer to the [paper](https://arxiv.org/abs/2105.15203).
217_2_11
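To illustrate how the table's values map onto the configuration, here is a sketch (an assumption, not the recommended way to obtain pretrained weights) that builds an untrained MiT-b0-sized model from scratch:

```python
from transformers import SegformerConfig, SegformerForSemanticSegmentation

# MiT-b0 hyperparameters from the table above; weights are randomly initialized.
config = SegformerConfig(
    depths=[2, 2, 2, 2],
    hidden_sizes=[32, 64, 160, 256],
    decoder_hidden_size=256,
)
model = SegformerForSemanticSegmentation(config)
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```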
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SegFormer. <PipelineTag pipeline="image-classification"/> - [`SegformerForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
217_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#resources
.md
- [Image classification task guide](../tasks/image_classification) Semantic segmentation: - [`SegformerForSemanticSegmentation`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation). - A blog on fine-tuning SegFormer on a custom dataset can be found [here](https://huggingface.co/blog/fine-tune-segformer).
217_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#resources
.md
- A blog on fine-tuning SegFormer on a custom dataset can be found [here](https://huggingface.co/blog/fine-tune-segformer). - More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer). - [`TFSegformerForSemanticSegmentation`] is supported by this [example notebook](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb).
217_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#resources
.md
- [Semantic segmentation task guide](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
217_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
This is the configuration class to store the configuration of a [`SegformerModel`]. It is used to instantiate a SegFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SegFormer [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) architecture.
217_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
[nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_channels (`int`, *optional*, defaults to 3): The number of input channels. num_encoder_blocks (`int`, *optional*, defaults to 4):
217_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
The number of input channels. num_encoder_blocks (`int`, *optional*, defaults to 4): The number of encoder blocks (i.e. stages in the Mix Transformer encoder). depths (`List[int]`, *optional*, defaults to `[2, 2, 2, 2]`): The number of layers in each encoder block. sr_ratios (`List[int]`, *optional*, defaults to `[8, 4, 2, 1]`): Sequence reduction ratios in each encoder block. hidden_sizes (`List[int]`, *optional*, defaults to `[32, 64, 160, 256]`): Dimension of each of the encoder blocks.
217_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
hidden_sizes (`List[int]`, *optional*, defaults to `[32, 64, 160, 256]`): Dimension of each of the encoder blocks. patch_sizes (`List[int]`, *optional*, defaults to `[7, 3, 3, 3]`): Patch size before each encoder block. strides (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`): Stride before each encoder block. num_attention_heads (`List[int]`, *optional*, defaults to `[1, 2, 5, 8]`): Number of attention heads for each attention layer in each block of the Transformer encoder.
217_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
Number of attention heads for each attention layer in each block of the Transformer encoder. mlp_ratios (`List[int]`, *optional*, defaults to `[4, 4, 4, 4]`): Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the encoder blocks. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
217_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
`"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. classifier_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability before the classification head. initializer_range (`float`, *optional*, defaults to 0.02):
217_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
The dropout probability before the classification head. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. drop_path_rate (`float`, *optional*, defaults to 0.1): The dropout probability for stochastic depth, used in the blocks of the Transformer encoder. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers.
217_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. decoder_hidden_size (`int`, *optional*, defaults to 256): The dimension of the all-MLP decode head. semantic_loss_ignore_index (`int`, *optional*, defaults to 255): The index that is ignored by the loss function of the semantic segmentation model. Example: ```python >>> from transformers import SegformerModel, SegformerConfig
217_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerconfig
.md
```python
>>> # Initializing a SegFormer nvidia/segformer-b0-finetuned-ade-512-512 style configuration
>>> configuration = SegformerConfig()

>>> # Initializing a model from the nvidia/segformer-b0-finetuned-ade-512-512 style configuration
>>> model = SegformerModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
217_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerfeatureextractor
.md
No docstring available for SegformerFeatureExtractor Methods: __call__ - post_process_semantic_segmentation
217_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerimageprocessor
.md
Constructs a SegFormer image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `(size["height"], size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"height": 512, "width": 512}`): Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method.
217_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerimageprocessor
.md
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method.
217_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerimageprocessor
.md
parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
217_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerimageprocessor
.md
method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
217_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerimageprocessor
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_reduce_labels (`bool`, *optional*, defaults to `False`): Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The
217_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerimageprocessor
.md
used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255. Can be overridden by the `do_reduce_labels` parameter in the `preprocess` method. Methods: preprocess - post_process_semantic_segmentation <frameworkcontent> <pt>
217_6_5
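A minimal inference sketch tying `preprocess` and `post_process_semantic_segmentation` together; the image path is a placeholder, and the checkpoint is the one referenced elsewhere on this page:

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("scene.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Rescale the logits to the original image size and take the per-pixel argmax.
segmentation_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```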
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformermodel
.md
The bare SegFormer encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SegformerConfig`]): Model configuration class with all the parameters of the model.
217_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformermodel
.md
behavior. Parameters: config ([`SegformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
217_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerdecodehead
.md
No docstring available for SegformerDecodeHead Methods: forward
217_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerforimageclassification
.md
SegFormer Model transformer with an image classification head on top (a linear layer on top of the final hidden states) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SegformerConfig`]): Model configuration class with all the parameters of the model.
217_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerforimageclassification
.md
behavior. Parameters: config ([`SegformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
217_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerforsemanticsegmentation
.md
SegFormer Model transformer with an all-MLP decode head on top e.g. for ADE20k, CityScapes. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SegformerConfig`]): Model configuration class with all the parameters of the model.
217_10_0
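As a sketch of how the `labels` argument and the 255 *ignore_index* mentioned in the usage tips fit together, the toy forward/backward pass below uses random tensors; the 150-class setting is an assumption matching ADE20k:

```python
import torch
from transformers import SegformerConfig, SegformerForSemanticSegmentation

config = SegformerConfig(num_labels=150)  # ADE20k-sized label space (assumption)
model = SegformerForSemanticSegmentation(config)

pixel_values = torch.randn(1, 3, 512, 512)
labels = torch.randint(0, 150, (1, 512, 512))  # pixels set to 255 would be ignored by the loss

outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()
```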
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#segformerforsemanticsegmentation
.md
behavior. Parameters: config ([`SegformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
217_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#tfsegformerdecodehead
.md
No docstring available for TFSegformerDecodeHead Methods: call
217_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#tfsegformermodel
.md
No docstring available for TFSegformerModel Methods: call
217_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#tfsegformerforimageclassification
.md
No docstring available for TFSegformerForImageClassification Methods: call
217_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/segformer.md
https://huggingface.co/docs/transformers/en/model_doc/segformer/#tfsegformerforsemanticsegmentation
.md
No docstring available for TFSegformerForSemanticSegmentation Methods: call </tf> </frameworkcontent>
217_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
218_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
218_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/#overview
.md
The XLS-R model was proposed in [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. The abstract from the paper is the following: *This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0.
218_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/#overview
.md
*This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation
218_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/#overview
.md
of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107
218_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/#overview
.md
VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.*
218_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/#overview
.md
pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.* Relevant checkpoints can be found under https://huggingface.co/models?other=xls_r. The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).
218_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xls_r.md
https://huggingface.co/docs/transformers/en/model_doc/xls_r/#usage-tips
.md
- XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - The XLS-R model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. <Tip> XLS-R's architecture is based on the Wav2Vec2 model; refer to [Wav2Vec2's documentation page](wav2vec2) for the API reference. </Tip>
218_2_0
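A hedged sketch of the decoding flow described above; the checkpoint name is a placeholder for any XLS-R model fine-tuned with CTC, and the silent waveform only stands in for real 16 kHz audio:

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Placeholder: substitute a real CTC fine-tuned XLS-R checkpoint here.
checkpoint = "your-org/xls-r-finetuned-with-ctc"
processor = AutoProcessor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

waveform = np.zeros(16_000, dtype=np.float32)  # one second of silence at 16 kHz
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```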
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
219_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
219_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/#overview
.md
XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R). It was introduced in the [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa. From the abstract of the XLM-V paper:
219_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/#overview
.md
From the abstract of the XLM-V paper: *Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by
219_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/#overview
.md
In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V,
219_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/#overview
.md
more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).*
219_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/#overview
.md
named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).* This model was contributed by [stefan-it](https://huggingface.co/stefan-it), including detailed experiments with XLM-V on downstream tasks. The experiments repository can be found [here](https://github.com/stefan-it/xlm-v-experiments).
219_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/#usage-tips
.md
- XLM-V is compatible with the XLM-RoBERTa model architecture; only the model weights from the [`fairseq`](https://github.com/facebookresearch/fairseq) library had to be converted. - The `XLMTokenizer` implementation is used to load the vocab and perform tokenization. An XLM-V (base size) model is available under the [`facebook/xlm-v-base`](https://huggingface.co/facebook/xlm-v-base) identifier. <Tip>
219_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm-v.md
https://huggingface.co/docs/transformers/en/model_doc/xlm-v/#usage-tips
.md
<Tip> XLM-V's architecture is the same as XLM-RoBERTa's; refer to the [XLM-RoBERTa documentation](xlm-roberta) for API reference and examples. </Tip>
219_2_1
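A minimal sketch of loading the checkpoint mentioned above through the XLM-RoBERTa architecture; the example sentence and masked-language-modeling head are illustrative choices:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-v-base")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-v-base")

# Fill-mask style usage with the XLM-RoBERTa-compatible <mask> token.
inputs = tokenizer("Paris is the <mask> of France.", return_tensors="pt")
outputs = model(**inputs)
```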
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
220_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
220_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#overview
.md
The RemBERT model was proposed in [Rethinking Embedding Coupling in Pre-trained Language Models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder. The abstract from the paper is the following: *We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to
220_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#overview
.md
pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that
220_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#overview
.md
standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage
220_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#overview
.md
output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.*
220_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#usage-tips
.md
For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is also similar to the ALBERT one rather than the BERT one.
220_2_0
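A short sketch of loading the public checkpoint referenced below via the standard `from_pretrained` flow; the example sentence is illustrative:

```python
from transformers import AutoTokenizer, RemBertModel

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertModel.from_pretrained("google/rembert")

inputs = tokenizer("RemBERT decouples input and output embeddings.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```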
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
220_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
This is the configuration class to store the configuration of a [`RemBertModel`]. It is used to instantiate a RemBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RemBERT [google/rembert](https://huggingface.co/google/rembert) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
220_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 250300): Vocabulary size of the RemBERT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`RemBertModel`] or [`TFRemBertModel`].
220_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
`inputs_ids` passed when calling [`RemBertModel`] or [`TFRemBertModel`]. hidden_size (`int`, *optional*, defaults to 1152): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 18):
220_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 18): Number of attention heads for each attention layer in the Transformer encoder. input_embedding_size (`int`, *optional*, defaults to 256): Dimensionality of the input embeddings. output_embedding_size (`int`, *optional*, defaults to 1664): Dimensionality of the output embeddings. intermediate_size (`int`, *optional*, defaults to 4608):
220_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
Dimensionality of the output embeddings. intermediate_size (`int`, *optional*, defaults to 4608): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0):
220_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
`"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0): The dropout ratio for the attention probabilities. classifier_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the classifier layer when fine-tuning. max_position_embeddings (`int`, *optional*, defaults to 512):
220_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
The dropout ratio for the classifier layer when fine-tuning. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`RemBertModel`] or [`TFRemBertModel`]. initializer_range (`float`, *optional*, defaults to 0.02):
220_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`):
220_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. Example: ```python >>> from transformers import RemBertModel, RemBertConfig
220_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertconfig
.md
```python
>>> # Initializing a RemBERT rembert style configuration
>>> configuration = RemBertConfig()

>>> # Initializing a model from the rembert style configuration
>>> model = RemBertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
220_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizer
.md
Construct a RemBERT tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer.
220_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizer
.md
contains the vocabulary necessary to instantiate a tokenizer. bos_token (`str`, *optional*, defaults to `"[CLS]"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. <Tip>
220_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizer
.md
</Tip> eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`):
220_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizer
.md
token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths.
220_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizer
.md
The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`):
220_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizer
.md
mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
220_5_5
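A hedged sketch of how the special tokens documented above end up in an encoded sequence pair; the checkpoint is the public one used elsewhere on this page:

```python
from transformers import RemBertTokenizer

tokenizer = RemBertTokenizer.from_pretrained("google/rembert")

# A sequence pair is built as [CLS] A [SEP] B [SEP].
encoding = tokenizer("first sequence", "second sequence")
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
print(encoding["token_type_ids"])
```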
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
Construct a "fast" RemBert tokenizer (backed by HuggingFace's *tokenizers* library). Based on [Unigram](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models). This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
220_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. remove_space (`bool`, *optional*, defaults to `True`): Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
220_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (`bool`, *optional*, defaults to `False`): Whether or not to keep accents when tokenizing. bos_token (`str`, *optional*, defaults to `"[CLS]"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of
220_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
<Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"[SEP]"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`):
220_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
that is used for the end of sequence. The token used is the `sep_token`. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
220_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`):
220_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
220_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#remberttokenizerfast
.md
modeling. This is the token which the model will try to predict. Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary <frameworkcontent> <pt>
220_6_7
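And a corresponding sketch for the fast tokenizer, explicitly passing the casing and accent options documented above (the values shown are illustrative choices, not necessarily the checkpoint's defaults):

```python
from transformers import RemBertTokenizerFast

tokenizer = RemBertTokenizerFast.from_pretrained(
    "google/rembert", do_lower_case=False, keep_accents=True
)
print(tokenizer.tokenize("Déjà vu"))
```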
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertmodel
.md
The bare RemBERT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model.
220_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertmodel
.md
behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
220_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertmodel
.md
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
220_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertmodel
.md
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
220_7_3
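A sketch of the decoder-style configuration described above; the model below is randomly initialized and only illustrates the two flags:

```python
from transformers import RemBertConfig, RemBertModel

# Decoder with cross-attention, per the note above; weights are not pretrained.
config = RemBertConfig(is_decoder=True, add_cross_attention=True)
model = RemBertModel(config)
```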
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertforcausallm
.md
RemBERT Model with a `language modeling` head on top for CLM fine-tuning. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
220_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
220_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertformaskedlm
.md
RemBERT Model with a `language modeling` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
220_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertformaskedlm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
220_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertforsequenceclassification
.md
RemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model.
220_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertforsequenceclassification
.md
behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
220_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertformultiplechoice
.md
RemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model.
220_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertformultiplechoice
.md
behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
220_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertfortokenclassification
.md
RemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model.
220_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rembert.md
https://huggingface.co/docs/transformers/en/model_doc/rembert/#rembertfortokenclassification
.md
behavior. Parameters: config ([`RemBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
220_12_1