source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptconfig | .md | >>> # Initializing a GPT configuration
>>> configuration = OpenAIGPTConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = OpenAIGPTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 410_5_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigpttokenizer | .md | Construct a GPT Tokenizer. Based on Byte-Pair-Encoding with the following peculiarities:
- lowercases all inputs,
- uses the `SpaCy` tokenizer and `ftfy` for pre-BPE tokenization if they are installed, falling back to BERT's
`BasicTokenizer` if not.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`): | 410_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigpttokenizer | .md | Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
Methods: save_vocabulary | 410_6_1 |
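As a quick, hedged illustration of the lowercasing behaviour described above (the Hub checkpoint name `openai-community/openai-gpt` is an assumption; older setups may know it as `openai-gpt`, and `ftfy`/`SpaCy` are optional extras):

```python
from transformers import OpenAIGPTTokenizer

# Checkpoint name assumed; install ftfy + spacy for the original pre-BPE pipeline,
# otherwise the tokenizer falls back to BERT's BasicTokenizer (a warning is emitted).
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-community/openai-gpt")

encoding = tokenizer("Hello World")  # inputs are lowercased before BPE is applied
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
print(tokenizer.decode(encoding["input_ids"]))  # "hello world"
```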
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigpttokenizerfast | .md | Construct a "fast" GPT Tokenizer (backed by HuggingFace's *tokenizers* library). Based on Byte-Pair-Encoding with
the following peculiarities:
- lowercases all inputs
- uses BERT's BasicTokenizer for pre-BPE tokenization
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file. | 410_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigpttokenizerfast | .md | Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. | 410_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openai-specific-outputs | .md | models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput
Base class for outputs of models predicting if two sentences are consecutive or not.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss.
mc_loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `mc_labels` is provided):
Multiple choice classification loss.
logits (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, config.vocab_size)`): | 410_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openai-specific-outputs | .md | logits (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (`torch.FloatTensor` of shape `(batch_size, num_choices)`):
Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). | 410_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openai-specific-outputs | .md | Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs. | 410_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openai-specific-outputs | .md | Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads. | 410_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openai-specific-outputs | .md | Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
[[autodoc]] models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput:
modeling_tf_openai requires the TensorFlow library but it was not found in your environment.
However, we were able to find a PyTorch installation. PyTorch classes do not begin
with "TF", but are otherwise identically named to our TF classes.
If you want to use PyTorch, please use those classes instead! | 410_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openai-specific-outputs | .md | If you want to use PyTorch, please use those classes instead!
If you really do want to use TensorFlow, please follow the instructions on the
installation page https://www.tensorflow.org/install that match your environment.
<frameworkcontent>
<pt> | 410_8_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptmodel | .md | The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 410_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`OpenAIGPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 410_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptmodel | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 410_9_2 |
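A minimal sketch of running the bare model to obtain the raw hidden states (checkpoint name assumed, as above):

```python
import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTModel

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-community/openai-gpt")
model = OpenAIGPTModel.from_pretrained("openai-community/openai-gpt")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# raw hidden states, no task-specific head on top
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```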
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptlmheadmodel | .md | OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 410_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptlmheadmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`OpenAIGPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 410_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptlmheadmodel | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 410_10_2 |
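A minimal greedy-generation sketch with the language-modeling head (checkpoint name assumed, as above):

```python
import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-community/openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-community/openai-gpt")

inputs = tokenizer("The little girl opened the door and", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated[0]))

# the same head also returns a language-modeling loss when `labels` are provided
print(model(**inputs, labels=inputs["input_ids"]).loss)
```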
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptdoubleheadsmodel | .md | OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top e.g. for
RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the
input embeddings, while the classification head takes as input the hidden state at a specified classification token index in the
input sequence.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the | 410_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptdoubleheadsmodel | .md | input sequence.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 410_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptdoubleheadsmodel | .md | and behavior.
Parameters:
config ([`OpenAIGPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 410_11_2 |
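A sketch of the multiple-choice usage, closely following the pattern documented for this class: a `[CLS]` token is appended to each choice and its position is passed via `mc_token_ids` (the checkpoint name and the added special token are assumptions):

```python
import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTDoubleHeadsModel

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-community/openai-gpt")
model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-community/openai-gpt")

# add a classification token whose final hidden state feeds the multiple-choice head
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
input_ids = torch.tensor([tokenizer.encode(c) for c in choices]).unsqueeze(0)  # (1, num_choices, seq_len)
mc_token_ids = torch.tensor([input_ids.size(-1) - 1] * 2).unsqueeze(0)         # position of [CLS] in each choice

outputs = model(input_ids, mc_token_ids=mc_token_ids)
print(outputs.logits.shape)     # (1, num_choices, seq_len, vocab_size) -- language modeling scores
print(outputs.mc_logits.shape)  # (1, num_choices) -- one score per choice
```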
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptforsequenceclassification | .md | The Original OpenAI GPT Model transformer with a sequence classification head on top (linear layer).
[`OpenAIGPTForSequenceClassification`] uses the last token in order to do the classification, as other causal
models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the
last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding | 410_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptforsequenceclassification | .md | last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding
token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since
it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take
the last value in each row of the batch).
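A minimal sketch of the last-token classification described above; the sequence classification head is freshly initialized here and `num_labels=2` is an arbitrary placeholder:

```python
import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTForSequenceClassification

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-community/openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-community/openai-gpt", num_labels=2)

# a single unpadded sequence, so no pad_token_id is needed to locate the last token
inputs = tokenizer("This movie was surprisingly good", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # read off the hidden state of the last token
print(logits.shape)  # (1, 2)
```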
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the | 410_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptforsequenceclassification | .md | This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 410_12_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#openaigptforsequenceclassification | .md | and behavior.
Parameters:
config ([`OpenAIGPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf> | 410_12_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#tfopenaigptmodel | .md | No docstring available for TFOpenAIGPTModel
Methods: call | 410_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#tfopenaigptlmheadmodel | .md | No docstring available for TFOpenAIGPTLMHeadModel
Methods: call | 410_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#tfopenaigptdoubleheadsmodel | .md | No docstring available for TFOpenAIGPTDoubleHeadsModel
Methods: call | 410_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/openai-gpt.md | https://huggingface.co/docs/transformers/en/model_doc/openai-gpt/#tfopenaigptforsequenceclassification | .md | No docstring available for TFOpenAIGPTForSequenceClassification
Methods: call
</tf>
</frameworkcontent> | 410_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 411_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 411_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#overview | .md | The LeViT model was proposed in [LeViT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. LeViT improves the [Vision Transformer (ViT)](vit) in performance and efficiency by a few architectural differences such as activation maps with decreasing resolutions in Transformers and the introduction of an attention bias to integrate positional information. | 411_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#overview | .md | The abstract from the paper is the following:
*We design a family of image classification architectures that optimize the trade-off between accuracy
and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures,
which are competitive on highly parallel processing hardware. We revisit principles from the extensive
literature on convolutional neural networks to apply them to transformers, in particular activation maps | 411_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#overview | .md | literature on convolutional neural networks to apply them to transformers, in particular activation maps
with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information
in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification.
We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of | 411_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#overview | .md | We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of
application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable
to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect
to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU.* | 411_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#overview | .md | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/levit_architecture.png"
alt="drawing" width="600"/>
<small> LeViT Architecture. Taken from the <a href="https://arxiv.org/abs/2104.01136">original paper</a>.</small>
This model was contributed by [anugunj](https://huggingface.co/anugunj). The original code can be found [here](https://github.com/facebookresearch/LeViT). | 411_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#usage-tips | .md | - Compared to ViT, LeViT models use an additional distillation head to effectively learn from a teacher (which, in the LeViT paper, is a ResNet-like model). The distillation head is learned through backpropagation under supervision of a ResNet-like model. They also draw inspiration from convolutional neural networks to use activation maps with decreasing resolutions to increase efficiency. | 411_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#usage-tips | .md | - There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
of the final hidden state and not using the distillation head, or (2) by placing both a prediction head and distillation
head on top of the final hidden state. In that case, the prediction head is trained using regular cross-entropy between
the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation | 411_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#usage-tips | .md | (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time,
one takes the average prediction between both heads as final prediction. (2) is also called "fine-tuning with distillation",
because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds
to [`LevitForImageClassification`] and (2) corresponds to [`LevitForImageClassificationWithTeacher`]. | 411_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#usage-tips | .md | to [`LevitForImageClassification`] and (2) corresponds to [`LevitForImageClassificationWithTeacher`].
- All released checkpoints were pre-trained and fine-tuned on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k)
(also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes) only. No external data was used. This is in
contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training. | 411_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#usage-tips | .md | contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training.
- The authors of LeViT released 5 trained LeViT models, which you can directly plug into [`LevitModel`] or [`LevitForImageClassification`].
Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
(while only using ImageNet-1k for pre-training). The 5 variants available are (all trained on images of size 224x224): | 411_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#usage-tips | .md | (while only using ImageNet-1k for pre-training). The 5 variants available are (all trained on images of size 224x224):
*facebook/levit-128S*, *facebook/levit-128*, *facebook/levit-192*, *facebook/levit-256* and
*facebook/levit-384*. Note that one should use [`LevitImageProcessor`] in order to
prepare images for the model.
- [`LevitForImageClassificationWithTeacher`] currently supports only inference and not training or fine-tuning. | 411_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#usage-tips | .md | - [`LevitForImageClassificationWithTeacher`] currently supports only inference and not training or fine-tuning.
- You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer)
(you can just replace [`ViTFeatureExtractor`] by [`LevitImageProcessor`] and [`ViTForImageClassification`] by [`LevitForImageClassification`] or [`LevitForImageClassificationWithTeacher`]). | 411_2_6 |
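Below is a hedged sketch of option (1) from the tips above: a fresh prediction head on top of a distilled LeViT backbone, trained with regular cross-entropy. The dummy batch and `num_labels=10` are placeholders; real images would be prepared with [`LevitImageProcessor`].

```python
import torch
from transformers import LevitForImageClassification

# option (1): replace the 1000-class ImageNet head with a randomly initialized one
model = LevitForImageClassification.from_pretrained(
    "facebook/levit-128S",
    num_labels=10,                 # placeholder for your own label set
    ignore_mismatched_sizes=True,  # the old classification head is discarded
)
model.train()

# one illustrative training step with a dummy batch (use LevitImageProcessor on real images)
pixel_values = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 10, (4,))
outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()  # regular cross-entropy between head predictions and labels
```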
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LeViT.
<PipelineTag pipeline="image-classification"/>
- [`LevitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). | 411_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#resources | .md | - See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. | 411_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitconfig | .md | This is the configuration class to store the configuration of a [`LevitModel`]. It is used to instantiate a LeViT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LeViT
[facebook/levit-128S](https://huggingface.co/facebook/levit-128S) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 411_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`int`, *optional*, defaults to 224):
The size of the input image.
num_channels (`int`, *optional*, defaults to 3):
Number of channels in the input image.
kernel_size (`int`, *optional*, defaults to 3):
The kernel size for the initial convolution layers of patch embedding.
stride (`int`, *optional*, defaults to 2): | 411_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitconfig | .md | The kernel size for the initial convolution layers of patch embedding.
stride (`int`, *optional*, defaults to 2):
The stride size for the initial convolution layers of patch embedding.
padding (`int`, *optional*, defaults to 1):
The padding size for the initial convolution layers of patch embedding.
patch_size (`int`, *optional*, defaults to 16):
The patch size for embeddings.
hidden_sizes (`List[int]`, *optional*, defaults to `[128, 256, 384]`):
Dimension of each of the encoder blocks. | 411_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitconfig | .md | hidden_sizes (`List[int]`, *optional*, defaults to `[128, 256, 384]`):
Dimension of each of the encoder blocks.
num_attention_heads (`List[int]`, *optional*, defaults to `[4, 8, 12]`):
Number of attention heads for each attention layer in each block of the Transformer encoder.
depths (`List[int]`, *optional*, defaults to `[4, 4, 4]`):
The number of layers in each encoder block.
key_dim (`List[int]`, *optional*, defaults to `[16, 16, 16]`):
The size of key in each of the encoder blocks. | 411_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitconfig | .md | key_dim (`List[int]`, *optional*, defaults to `[16, 16, 16]`):
The size of key in each of the encoder blocks.
drop_path_rate (`float`, *optional*, defaults to 0):
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
mlp_ratios (`List[int]`, *optional*, defaults to `[2, 2, 2]`):
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
attention_ratios (`List[int]`, *optional*, defaults to `[2, 2, 2]`): | 411_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitconfig | .md | encoder blocks.
attention_ratios (`List[int]`, *optional*, defaults to `[2, 2, 2]`):
Ratio of the size of the output dimension compared to input dimension of attention layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
Example:
```python
>>> from transformers import LevitConfig, LevitModel | 411_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitconfig | .md | >>> # Initializing a LeViT levit-128S style configuration
>>> configuration = LevitConfig()
>>> # Initializing a model (with random weights) from the levit-128S style configuration
>>> model = LevitModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 411_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitfeatureextractor | .md | No docstring available for LevitFeatureExtractor
Methods: __call__ | 411_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitimageprocessor | .md | Constructs a LeViT image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the shortest edge of the input to `int(256/224 * size)`. Can be overridden by the
`do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`):
Size of the output image after resizing. If size is a dict with keys "width" and "height", the image will | 411_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitimageprocessor | .md | Size of the output image after resizing. If size is a dict with keys "width" and "height", the image will
be resized to `(size["height"], size["width"])`. If size is a dict with key "shortest_edge", the shortest
edge value `c` is rescaled to `int(c * (256/224))`. The smaller edge of the image will be matched to this
value, i.e., if height > width, then the image will be rescaled to `(size["shortest_edge"] * height / width, | 411_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitimageprocessor | .md | value, i.e., if height > width, then the image will be rescaled to `(size["shortest_edge"] * height / width,
size["shortest_edge"])`. Can be overridden by the `size` parameter in the `preprocess` method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the
`preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`): | 411_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitimageprocessor | .md | `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether or not to center crop the input to `(crop_size["height"], crop_size["width"])`. Can be overridden
by the `do_center_crop` parameter in the `preprocess` method.
crop_size (`Dict`, *optional*, defaults to `{"height": 224, "width": 224}`):
Desired image size after `center_crop`. Can be overridden by the `crop_size` parameter in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`): | 411_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitimageprocessor | .md | method.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`): | 411_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitimageprocessor | .md | `preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. | 411_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitimageprocessor | .md | channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
Methods: preprocess | 411_6_6 |
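A minimal sketch of the resize-then-crop pipeline described above; the grey dummy image stands in for any PIL image or NumPy array:

```python
from PIL import Image
from transformers import LevitImageProcessor

processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
image = Image.new("RGB", (640, 480), color=(128, 128, 128))  # placeholder image

# shortest edge resized to int(224 * 256 / 224) = 256, center-cropped to 224x224,
# rescaled by 1/255 and normalized with the ImageNet mean/std
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```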
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitmodel | .md | The bare Levit model outputting raw features without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`LevitConfig`]): Model configuration class with all the parameters of the model. | 411_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitmodel | .md | behavior.
Parameters:
config ([`LevitConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 411_7_1 |
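A minimal feature-extraction sketch with the bare model, using the `facebook/levit-128S` checkpoint listed in the usage tips:

```python
import torch
from PIL import Image
from transformers import LevitImageProcessor, LevitModel

processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitModel.from_pretrained("facebook/levit-128S")

image = Image.new("RGB", (224, 224))  # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size) of the final stage
print(outputs.pooler_output.shape)      # (batch, hidden_size), mean-pooled features
```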
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitforimageclassification | .md | Levit Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`LevitConfig`]): Model configuration class with all the parameters of the model. | 411_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitforimageclassification | .md | behavior.
Parameters:
config ([`LevitConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 411_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitforimageclassificationwithteacher | .md | LeViT Model transformer with image classification heads on top (a linear layer on top of the final hidden state and
a linear layer on top of the final hidden state of the distillation token), e.g. for ImageNet.
Warning: this model only supports inference. Fine-tuning with distillation (i.e. with a teacher) is not yet
supported.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it | 411_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitforimageclassificationwithteacher | .md | This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`LevitConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 411_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/levit.md | https://huggingface.co/docs/transformers/en/model_doc/levit/#levitforimageclassificationwithteacher | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 411_9_2 |
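A minimal inference sketch; it assumes the output exposes the averaged `logits` alongside the per-head `cls_logits` and `distillation_logits`, and uses the COCO demo image common in the Transformers docs (URL is an assumption):

```python
import torch
import requests
from PIL import Image
from transformers import LevitImageProcessor, LevitForImageClassificationWithTeacher

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed demo image
image = Image.open(requests.get(url, stream=True).raw)

processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitForImageClassificationWithTeacher.from_pretrained("facebook/levit-128S")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# `logits` averages the classification and distillation heads; per-head scores are also returned
print(outputs.cls_logits.shape, outputs.distillation_logits.shape)
print(model.config.id2label[outputs.logits.argmax(-1).item()])
```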
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 412_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 412_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#overview | .md | The MobileNet model was proposed in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
The abstract from the paper is the following: | 412_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#overview | .md | *In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.* | 412_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#overview | .md | *The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this | 412_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#overview | .md | important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between | 412_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#overview | .md | our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.* | 412_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#overview | .md | This model was contributed by [matthijs](https://huggingface.co/Matthijs). The original code and weights can be found [here for the main model](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet) and [here for DeepLabV3+](https://github.com/tensorflow/models/tree/master/research/deeplab). | 412_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#usage-tips | .md | - The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.0\_224**, where **1.0** is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and **224** is the resolution of the input images the model was trained on.
- Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32. | 412_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#usage-tips | .md | - One can use [`MobileNetV2ImageProcessor`] to prepare images for the model.
- The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0). | 412_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#usage-tips | .md | - The segmentation model uses a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/).
- The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [`MobileNetV2Config`] with `tf_padding = False`. | 412_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#usage-tips | .md | Unsupported features:
- The [`MobileNetV2Model`] outputs a globally pooled version of the last hidden state. In the original model it is possible to use an average pooling layer with a fixed 7x7 window and stride 1 instead of global pooling. For inputs that are larger than the recommended image size, this gives a pooled output that is larger than 1x1. The Hugging Face implementation does not support this. | 412_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#usage-tips | .md | - The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights.
- It's common to extract the output from the expansion layers at indices 10 and 13, as well as the output from the final 1x1 convolution layer, for downstream purposes. Using `output_hidden_states=True` returns the output from all intermediate layers. There is currently no way to limit this to specific layers. | 412_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#usage-tips | .md | - The DeepLabV3+ segmentation head does not use the final convolution layer from the backbone, but this layer gets computed anyway. There is currently no way to tell [`MobileNetV2Model`] up to which layer it should run. | 412_2_5 |
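A hedged sketch of pulling intermediate feature maps with `output_hidden_states=True`, as described in the tips above (the dummy image is a placeholder):

```python
import torch
from PIL import Image
from transformers import MobileNetV2ImageProcessor, MobileNetV2Model

processor = MobileNetV2ImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
model = MobileNetV2Model.from_pretrained("google/mobilenet_v2_1.0_224")

image = Image.new("RGB", (224, 224))  # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# every intermediate feature map is returned; pick the ones you need downstream
for i, fmap in enumerate(outputs.hidden_states):
    print(i, tuple(fmap.shape))        # (batch, channels, height, width)
print(outputs.pooler_output.shape)     # globally pooled features from the last hidden state
```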
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV2.
<PipelineTag pipeline="image-classification"/>
- [`MobileNetV2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). | 412_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#resources | .md | - See also: [Image classification task guide](../tasks/image_classification)
**Semantic segmentation**
- [Semantic segmentation task guide](../tasks/semantic_segmentation)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. | 412_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | This is the configuration class to store the configuration of a [`MobileNetV2Model`]. It is used to instantiate a
MobileNetV2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MobileNetV2
[google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) architecture. | 412_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
depth_multiplier (`float`, *optional*, defaults to 1.0): | 412_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | The size (resolution) of each image.
depth_multiplier (`float`, *optional*, defaults to 1.0):
Shrinks or expands the number of channels in each layer. Default is 1.0, which starts the network with 32
channels. This is sometimes also called "alpha" or "width multiplier".
depth_divisible_by (`int`, *optional*, defaults to 8):
The number of channels in each layer will always be a multiple of this number.
min_depth (`int`, *optional*, defaults to 8):
All layers will have at least this many channels. | 412_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | min_depth (`int`, *optional*, defaults to 8):
All layers will have at least this many channels.
expand_ratio (`float`, *optional*, defaults to 6.0):
The number of output channels of the first layer in each block is input channels times expansion ratio.
output_stride (`int`, *optional*, defaults to 32):
The ratio between the spatial resolution of the input and output feature maps. By default the model reduces | 412_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | The ratio between the spatial resolution of the input and output feature maps. By default the model reduces
the input dimensions by a factor of 32. If `output_stride` is 8 or 16, the model uses dilated convolutions
on the depthwise layers instead of regular convolutions, so that the feature maps never become more than 8x
or 16x smaller than the input image.
first_layer_is_expansion (`bool`, *optional*, defaults to `True`): | 412_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | or 16x smaller than the input image.
first_layer_is_expansion (`bool`, *optional*, defaults to `True`):
True if the very first convolution layer is also the expansion layer for the first expansion block.
finegrained_output (`bool`, *optional*, defaults to `True`):
If true, the number of output channels in the final convolution layer will stay large (1280) even if
`depth_multiplier` is less than 1.
hidden_act (`str` or `function`, *optional*, defaults to `"relu6"`): | 412_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | `depth_multiplier` is less than 1.
hidden_act (`str` or `function`, *optional*, defaults to `"relu6"`):
The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
tf_padding (`bool`, *optional*, defaults to `True`):
Whether to use TensorFlow padding rules on the convolution layers.
classifier_dropout_prob (`float`, *optional*, defaults to 0.8):
The dropout ratio for attached classifiers.
initializer_range (`float`, *optional*, defaults to 0.02): | 412_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | The dropout ratio for attached classifiers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 0.001):
The epsilon used by the layer normalization layers.
semantic_loss_ignore_index (`int`, *optional*, defaults to 255):
The index that is ignored by the loss function of the semantic segmentation model.
Example:
```python | 412_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | The index that is ignored by the loss function of the semantic segmentation model.
Example:
```python
>>> from transformers import MobileNetV2Config, MobileNetV2Model | 412_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2config | .md | >>> # Initializing a "mobilenet_v2_1.0_224" style configuration
>>> configuration = MobileNetV2Config()
>>> # Initializing a model from the "mobilenet_v2_1.0_224" style configuration
>>> model = MobileNetV2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 412_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2featureextractor | .md | No docstring available for MobileNetV2FeatureExtractor
Methods: preprocess
- post_process_semantic_segmentation | 412_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2imageprocessor | .md | Constructs a MobileNetV2 image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 256}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with | 412_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2imageprocessor | .md | Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the
`preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`): | 412_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2imageprocessor | .md | `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image. If the input size is smaller than `crop_size` along any edge, the image
is padded with 0's and then center cropped. Can be overridden by the `do_center_crop` parameter in the
`preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`):
Desired output size when applying center-cropping. Only has an effect if `do_center_crop` is set to `True`. | 412_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2imageprocessor | .md | Desired output size when applying center-cropping. Only has an effect if `do_center_crop` is set to `True`.
Can be overridden by the `crop_size` parameter in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): | 412_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2imageprocessor | .md | parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): | 412_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2imageprocessor | .md | method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the | 412_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v2.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v2/#mobilenetv2imageprocessor | .md | Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
Methods: preprocess
- post_process_semantic_segmentation | 412_6_6 |
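A hedged end-to-end segmentation sketch using `post_process_semantic_segmentation`; the DeepLabV3+ checkpoint name `google/deeplabv3_mobilenet_v2_1.0_513` and the COCO demo image URL are assumptions:

```python
import torch
import requests
from PIL import Image
from transformers import MobileNetV2ImageProcessor, MobileNetV2ForSemanticSegmentation

checkpoint = "google/deeplabv3_mobilenet_v2_1.0_513"  # assumed PASCAL VOC checkpoint
processor = MobileNetV2ImageProcessor.from_pretrained(checkpoint)
model = MobileNetV2ForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed demo image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits come out at a reduced resolution; post-processing upsamples them to the target size
segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(segmentation.shape)  # (height, width) map of per-pixel class indices
```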