source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources | .md | - A blog post introducing [GPT-J-6B: 6B JAX-Based Transformer](https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/). 🌎
- A notebook for [GPT-J-6B Inference Demo](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb). 🌎
- Another notebook demonstrating [Inference with GPT-J-6B](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/GPT-J-6B/Inference_with_GPT_J_6B.ipynb). | 414_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources | .md | - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. | 414_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources | .md | - [`GPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). | 414_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources | .md | - [`TFGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). | 414_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources | .md | - [`FlaxGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb).
**Documentation resources**
- [Text classification task guide](../tasks/sequence_classification)
- [Question answering task guide](../tasks/question_answering) | 414_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#resources | .md | - [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling) | 414_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | This is the configuration class to store the configuration of a [`GPTJModel`]. It is used to instantiate a GPT-J
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the GPT-J
[EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) architecture. Configuration objects inherit from | 414_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) architecture. Configuration objects inherit from
[`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`]
for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50400):
Vocabulary size of the GPT-J model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`GPTJModel`]. | 414_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | `inputs_ids` passed when calling [`GPTJModel`].
n_positions (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (`int`, *optional*, defaults to 4096):
Dimensionality of the embeddings and hidden states.
n_layer (`int`, *optional*, defaults to 28):
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 16): | 414_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
rotary_dim (`int`, *optional*, defaults to 64):
Number of dimensions in the embedding that Rotary Position Embedding is applied to.
n_inner (`int`, *optional*, defaults to None):
Dimensionality of the inner feed-forward layers. `None` will set it to 4 times `n_embd`.
activation_function (`str`, *optional*, defaults to `"gelu_new"`): | 414_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | activation_function (`str`, *optional*, defaults to `"gelu_new"`):
Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
resid_pdrop (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the embeddings.
attn_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention. | 414_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | The dropout ratio for the embeddings.
attn_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
The epsilon to use in the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (`bool`, *optional*, defaults to `True`): | 414_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
>>> from transformers import GPTJModel, GPTJConfig | 414_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjconfig | .md | >>> # Initializing a GPT-J 6B configuration
>>> configuration = GPTJConfig()
>>> # Initializing a model from the configuration
>>> model = GPTJModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all
<frameworkcontent>
<pt> | 414_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjmodel | .md | The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`GPTJConfig`]): Model configuration class with all the parameters of the model. | 414_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjmodel | .md | behavior.
Parameters:
config ([`GPTJConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 414_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforcausallm | .md | The GPT-J Model transformer with a language modeling head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`GPTJConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 414_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforcausallm | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 414_7_1 |
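A minimal text-generation sketch with the causal-LM head (not taken from the original docstring; the prompt and generation settings are arbitrary, and loading the full 6B checkpoint requires a sizeable amount of memory):

```python
>>> from transformers import AutoTokenizer, GPTJForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

>>> # encode a prompt and let the model continue it
>>> inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```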
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforsequenceclassification | .md | The GPT-J Model transformer with a sequence classification head on top (linear layer).
[`GPTJForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT, GPT-2, GPT-Neo) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If | 414_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforsequenceclassification | .md | `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use | 414_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforsequenceclassification | .md | This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`GPTJConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 414_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforsequenceclassification | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 414_8_3 |
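A hedged sketch of the last-token classification behaviour described above. The two-label head below is randomly initialized (the labels are illustrative only), and a padding token is registered so the model can locate the last non-padding position in each row:

```python
>>> import torch
>>> from transformers import AutoTokenizer, GPTJForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> model = GPTJForSequenceClassification.from_pretrained("EleutherAI/gpt-j-6B", num_labels=2)

>>> # GPT-J has no padding token by default; reuse the EOS token so the model
>>> # can find the last non-padding token in each padded row
>>> tokenizer.pad_token = tokenizer.eos_token
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> texts = ["I loved this movie.", "This was a waste of time, sadly."]
>>> inputs = tokenizer(texts, padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predictions = logits.argmax(dim=-1)
```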
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforquestionanswering | .md | The GPT-J Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters: | 414_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#gptjforquestionanswering | .md | behavior.
Parameters:
config ([`GPTJConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf> | 414_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#tfgptjmodel | .md | No docstring available for TFGPTJModel
Methods: call | 414_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#tfgptjforcausallm | .md | No docstring available for TFGPTJForCausalLM
Methods: call | 414_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#tfgptjforsequenceclassification | .md | No docstring available for TFGPTJForSequenceClassification
Methods: call | 414_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#tfgptjforquestionanswering | .md | No docstring available for TFGPTJForQuestionAnswering
Methods: call
</tf>
<jax> | 414_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#flaxgptjmodel | .md | No docstring available for FlaxGPTJModel
Methods: __call__ | 414_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptj.md | https://huggingface.co/docs/transformers/en/model_doc/gptj/#flaxgptjforcausallm | .md | No docstring available for FlaxGPTJForCausalLM
Methods: __call__
</jax>
</frameworkcontent> | 414_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 415_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 415_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#overview | .md | The MobileViT model was proposed in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers.
The abstract from the paper is the following: | 415_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#overview | .md | *Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision trans-formers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a | 415_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#overview | .md | heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and | 415_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#overview | .md | with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar | 415_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#overview | .md | number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.* | 415_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#overview | .md | This model was contributed by [matthijs](https://huggingface.co/Matthijs). The TensorFlow version of the model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code and weights can be found [here](https://github.com/apple/ml-cvnets). | 415_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#usage-tips | .md | - MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow [this tutorial](https://keras.io/examples/vision/mobilevit) for a lightweight introduction.
- One can use [`MobileViTImageProcessor`] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). | 415_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#usage-tips | .md | - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
- The segmentation model uses a [DeepLabV3](https://arxiv.org/abs/1706.05587) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/). | 415_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#usage-tips | .md | - As the name suggests MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with [TensorFlow Lite](https://www.tensorflow.org/lite).
You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic segmentation) to generate a
TensorFlow Lite model:
```py
from transformers import TFMobileViTForImageClassification
import tensorflow as tf | 415_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#usage-tips | .md | model_ckpt = "apple/mobilevit-xx-small"
model = TFMobileViTForImageClassification.from_pretrained(model_ckpt) | 415_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#usage-tips | .md | converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
tflite_filename = model_ckpt.split("/")[-1] + ".tflite"
with open(tflite_filename, "wb") as f:
f.write(tflite_model)
```
The resulting model will be just **about one MB**, making it a good fit for mobile applications where resources and network | 415_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#usage-tips | .md | ```
The resulting model will be just **about one MB**, making it a good fit for mobile applications where resources and network
bandwidth can be constrained. | 415_2_5 |
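To sanity-check the exported file, the converted model can be loaded back with the TensorFlow Lite interpreter. This is a minimal sketch, not part of the original guide; the random array merely stands in for a preprocessed image, and the `.tflite` filename assumes the conversion snippet above was run with the `apple/mobilevit-xx-small` checkpoint:

```py
import numpy as np
import tensorflow as tf

# load the converted model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path="mobilevit-xx-small.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# dummy input with whatever shape the exported model expects
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)
```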
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileViT.
<PipelineTag pipeline="image-classification"/>
- [`MobileViTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). | 415_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#resources | .md | - See also: [Image classification task guide](../tasks/image_classification)
**Semantic segmentation**
- [Semantic segmentation task guide](../tasks/semantic_segmentation)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. | 415_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | This is the configuration class to store the configuration of a [`MobileViTModel`]. It is used to instantiate a
MobileViT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MobileViT
[apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 415_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 256):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 2):
The size (resolution) of each patch.
hidden_sizes (`List[int]`, *optional*, defaults to `[144, 192, 240]`): | 415_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | The size (resolution) of each patch.
hidden_sizes (`List[int]`, *optional*, defaults to `[144, 192, 240]`):
Dimensionality (hidden size) of the Transformer encoders at each stage.
neck_hidden_sizes (`List[int]`, *optional*, defaults to `[16, 32, 64, 96, 128, 160, 640]`):
The number of channels for the feature maps of the backbone.
num_attention_heads (`int`, *optional*, defaults to 4):
Number of attention heads for each attention layer in the Transformer encoder. | 415_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | Number of attention heads for each attention layer in the Transformer encoder.
mlp_ratio (`float`, *optional*, defaults to 2.0):
The ratio of the number of channels in the output of the MLP to the number of channels in the input.
expand_ratio (`float`, *optional*, defaults to 4.0):
Expansion factor for the MobileNetv2 layers.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the Transformer encoder and convolution layers. | 415_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
conv_kernel_size (`int`, *optional*, defaults to 3):
The size of the convolutional kernel in the MobileViT layer.
output_stride (`int`, *optional*, defaults to 32):
The ratio of the spatial resolution of the output to the resolution of the input image.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the Transformer encoder. | 415_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | The dropout probability for all fully connected layers in the Transformer encoder.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
classifier_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for attached classifiers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | 415_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
aspp_out_channels (`int`, *optional*, defaults to 256):
Number of output channels used in the ASPP layer for semantic segmentation.
atrous_rates (`List[int]`, *optional*, defaults to `[6, 12, 18]`): | 415_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | atrous_rates (`List[int]`, *optional*, defaults to `[6, 12, 18]`):
Dilation (atrous) factors used in the ASPP layer for semantic segmentation.
aspp_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the ASPP layer for semantic segmentation.
semantic_loss_ignore_index (`int`, *optional*, defaults to 255):
The index that is ignored by the loss function of the semantic segmentation model.
Example:
```python
>>> from transformers import MobileViTConfig, MobileViTModel | 415_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitconfig | .md | >>> # Initializing a mobilevit-small style configuration
>>> configuration = MobileViTConfig()
>>> # Initializing a model from the mobilevit-small style configuration
>>> model = MobileViTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 415_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitfeatureextractor | .md | No docstring available for MobileViTFeatureExtractor
Methods: __call__
- post_process_semantic_segmentation | 415_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitimageprocessor | .md | Constructs a MobileViT image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the
`do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`):
Controls the size of the output image after resizing. Can be overridden by the `size` parameter in the
`preprocess` method. | 415_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitimageprocessor | .md | Controls the size of the output image after resizing. Can be overridden by the `size` parameter in the
`preprocess` method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
Defines the resampling filter to use if resizing the image. Can be overridden by the `resample` parameter
in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` | 415_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitimageprocessor | .md | Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to crop the input at the center. If the input size is smaller than `crop_size` along any edge, the | 415_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitimageprocessor | .md | Whether to crop the input at the center. If the input size is smaller than `crop_size` along any edge, the
image is padded with 0's and then center cropped. Can be overridden by the `do_center_crop` parameter in
the `preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 256, "width": 256}`):
Desired output size `(size["height"], size["width"])` when applying center-cropping. Can be overridden by
the `crop_size` parameter in the `preprocess` method. | 415_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitimageprocessor | .md | the `crop_size` parameter in the `preprocess` method.
do_flip_channel_order (`bool`, *optional*, defaults to `True`):
Whether to flip the color channels from RGB to BGR. Can be overridden by the `do_flip_channel_order`
parameter in the `preprocess` method.
Methods: preprocess
- post_process_semantic_segmentation
<frameworkcontent>
<pt> | 415_6_4 |
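A short sketch (not from the original file) tying `preprocess` and `post_process_semantic_segmentation` together; the COCO image URL and the `apple/deeplabv3-mobilevit-small` checkpoint are illustrative choices:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import MobileViTImageProcessor, MobileViTForSemanticSegmentation

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = MobileViTImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
>>> model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")

>>> # resizing, center-cropping and the RGB -> BGR flip all happen here
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # upscale the low-resolution logits back to the original image size
>>> segmentation_map = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )[0]
```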
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitmodel | .md | The bare MobileViT model outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MobileViTConfig`]): Model configuration class with all the parameters of the model. | 415_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitmodel | .md | behavior.
Parameters:
config ([`MobileViTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 415_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitforimageclassification | .md | MobileViT model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MobileViTConfig`]): Model configuration class with all the parameters of the model. | 415_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitforimageclassification | .md | behavior.
Parameters:
config ([`MobileViTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 415_8_1 |
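A minimal inference sketch (not part of the original docstring); the COCO image URL is just an example input:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import MobileViTImageProcessor, MobileViTForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = MobileViTImageProcessor.from_pretrained("apple/mobilevit-small")
>>> model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")

>>> # the image processor handles resizing, cropping and the channel-order flip
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_idx = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_class_idx])
```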
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitforsemanticsegmentation | .md | MobileViT model with a semantic segmentation head on top, e.g. for Pascal VOC.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MobileViTConfig`]): Model configuration class with all the parameters of the model. | 415_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#mobilevitforsemanticsegmentation | .md | behavior.
Parameters:
config ([`MobileViTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf> | 415_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#tfmobilevitmodel | .md | No docstring available for TFMobileViTModel
Methods: call | 415_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#tfmobilevitforimageclassification | .md | No docstring available for TFMobileViTForImageClassification
Methods: call | 415_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevit.md | https://huggingface.co/docs/transformers/en/model_doc/mobilevit/#tfmobilevitforsemanticsegmentation | .md | No docstring available for TFMobileViTForSemanticSegmentation
Methods: call
</tf>
</frameworkcontent> | 415_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 416_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 416_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlm | .md | <div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=xlm">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlm-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/xlm-mlm-en-2048">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div> | 416_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#overview | .md | The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by
Guillaume Lample, Alexis Conneau. It's a transformer pretrained using one of the following objectives:
- a causal language modeling (CLM) objective (next token prediction),
- a masked language modeling (MLM) objective (BERT-like), or
- a Translation Language Modeling (TLM) objective (extension of BERT's MLM to multiple language inputs)
The abstract from the paper is the following: | 416_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#overview | .md | The abstract from the paper is the following:
*Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding.
In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We
propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual | 416_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#overview | .md | propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual
data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain
state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our
approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we | 416_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#overview | .md | approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we
obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised
machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the
previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.* | 416_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#overview | .md | previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.*
This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/facebookresearch/XLM/). | 416_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#usage-tips | .md | - XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to
select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).
- XLM has multilingual checkpoints which leverage a specific `lang` parameter. Check out the [multi-lingual](../multilingual) page for more information; a short usage sketch follows these tips. | 416_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#usage-tips | .md | - A transformer model trained on several languages. There are three different types of training for this model and the library provides checkpoints for all of them:
* Causal language modeling (CLM) which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages. | 416_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#usage-tips | .md | * Masked language modeling (MLM) which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages, with dynamic masking of the tokens. | 416_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#usage-tips | .md | * A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both the surrounding context in language 1 and the context given by language 2. | 416_3_3 |
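Building on the multilingual `lang` tip above, here is a hedged sketch of feeding a `langs` tensor alongside the input IDs; the `FacebookAI/xlm-clm-enfr-1024` checkpoint and the prompt are illustrative choices:

```python
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel

>>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024")

>>> # look up the language ID and repeat it for every input position
>>> language_id = tokenizer.lang2id["en"]
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])
>>> langs = torch.full_like(input_ids, language_id)

>>> outputs = model(input_ids, langs=langs)
>>> next_token_logits = outputs.logits[:, -1, :]
```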
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#resources | .md | - [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice) | 416_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | This is the configuration class to store the configuration of a [`XLMModel`] or a [`TFXLMModel`]. It is used to
instantiate an XLM model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
[FacebookAI/xlm-mlm-en-2048](https://huggingface.co/FacebookAI/xlm-mlm-en-2048) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 416_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30145):
Vocabulary size of the XLM model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`XLMModel`] or [`TFXLMModel`].
emb_dim (`int`, *optional*, defaults to 2048):
Dimensionality of the encoder layers and the pooler layer. | 416_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | emb_dim (`int`, *optional*, defaults to 2048):
Dimensionality of the encoder layers and the pooler layer.
n_layer (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | 416_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the attention mechanism
gelu_activation (`bool`, *optional*, defaults to `True`):
Whether or not to use *gelu* for the activations instead of *relu*.
sinusoidal_embeddings (`bool`, *optional*, defaults to `False`):
Whether or not to use sinusoidal positional embeddings instead of absolute positional embeddings. | 416_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | Whether or not to use sinusoidal positional embeddings instead of absolute positional embeddings.
causal (`bool`, *optional*, defaults to `False`):
Whether or not the model should behave in a causal manner. Causal models use a triangular attention mask in
order to only attend to the left-side context instead of a bidirectional context.
asm (`bool`, *optional*, defaults to `False`):
Whether or not to use an adaptive log softmax projection layer instead of a linear layer for the prediction
layer. | 416_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | Whether or not to use an adaptive log softmax projection layer instead of a linear layer for the prediction
layer.
n_langs (`int`, *optional*, defaults to 1):
The number of languages the model handles. Set to 1 for monolingual models.
use_lang_emb (`bool`, *optional*, defaults to `True`):
Whether to use language embeddings. Some models use additional language embeddings, see [the multilingual
models page](http://huggingface.co/transformers/multilingual.html#xlm-language-embeddings) for information | 416_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | models page](http://huggingface.co/transformers/multilingual.html#xlm-language-embeddings) for information
on how to use them.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
embed_init_std (`float`, *optional*, defaults to 2048^-0.5):
The standard deviation of the truncated_normal_initializer for initializing the embedding matrices. | 416_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | The standard deviation of the truncated_normal_initializer for initializing the embedding matrices.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices except the
embedding matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
bos_index (`int`, *optional*, defaults to 0):
The index of the beginning of sentence token in the vocabulary. | 416_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | bos_index (`int`, *optional*, defaults to 0):
The index of the beginning of sentence token in the vocabulary.
eos_index (`int`, *optional*, defaults to 1):
The index of the end of sentence token in the vocabulary.
pad_index (`int`, *optional*, defaults to 2):
The index of the padding token in the vocabulary.
unk_index (`int`, *optional*, defaults to 3):
The index of the unknown token in the vocabulary.
mask_index (`int`, *optional*, defaults to 5):
The index of the masking token in the vocabulary. | 416_5_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | mask_index (`int`, *optional*, defaults to 5):
The index of the masking token in the vocabulary.
is_encoder (`bool`, *optional*, defaults to `True`):
Whether or not the initialized model should be a transformer encoder or decoder as seen in Vaswani et al.
summary_type (`string`, *optional*, defaults to "first"):
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Has to be one of the following options: | 416_5_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | Has to be one of the following options:
- `"last"`: Take the last token hidden state (like XLNet).
- `"first"`: Take the first token hidden state (like BERT).
- `"mean"`: Take the mean of all tokens hidden states.
- `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2).
- `"attn"`: Not implemented now, use multi-head attention.
summary_use_proj (`bool`, *optional*, defaults to `True`): | 416_5_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | - `"attn"`: Not implemented now, use multi-head attention.
summary_use_proj (`bool`, *optional*, defaults to `True`):
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Whether or not to add a projection after the vector extraction.
summary_activation (`str`, *optional*):
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. | 416_5_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Pass `"tanh"` for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (`bool`, *optional*, defaults to `True`):
Used in the sequence classification and multiple choice models.
Whether the projection outputs should have `config.num_labels` or `config.hidden_size` classes.
summary_first_dropout (`float`, *optional*, defaults to 0.1): | 416_5_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | summary_first_dropout (`float`, *optional*, defaults to 0.1):
Used in the sequence classification and multiple choice models.
The dropout ratio to be used after the projection and activation.
start_n_top (`int`, *optional*, defaults to 5):
Used in the SQuAD evaluation script.
end_n_top (`int`, *optional*, defaults to 5):
Used in the SQuAD evaluation script.
mask_token_id (`int`, *optional*, defaults to 0):
Model agnostic parameter to identify masked tokens when generating text in an MLM context. | 416_5_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | Model agnostic parameter to identify masked tokens when generating text in an MLM context.
lang_id (`int`, *optional*, defaults to 1):
The ID of the language used by the model. This parameter is used when generating text in a given language.
Examples:
```python
>>> from transformers import XLMConfig, XLMModel | 416_5_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmconfig | .md | >>> # Initializing a XLM configuration
>>> configuration = XLMConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = XLMModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 416_5_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlm.md | https://huggingface.co/docs/transformers/en/model_doc/xlm/#xlmtokenizer | .md | Construct an XLM tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:
- Moses preprocessing and tokenization for most supported languages.
- Language specific tokenization for Chinese (Jieba), Japanese (KyTea) and Thai (PyThaiNLP).
- Optionally lowercases and normalizes all input text.
- The arguments `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like
"__classify__") to a vocabulary. | 416_6_0 |