source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#overview | .md | This model was contributed by [fcakyon](https://huggingface.co/fcakyon).
The original code can be found [here](https://github.com/facebookresearch/TimeSformer). | 102_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#usage-tips | .md | There are many pretrained variants. Select your pretrained model based on the dataset it was trained on. Moreover,
the number of input frames per clip varies across model variants, so take this parameter into account when selecting your pretrained model. | 102_2_0 |
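As a quick illustration of the tip above (not part of the original chunk), you can inspect a checkpoint's configuration to see how many frames per clip and what resolution it expects before building your frame-sampling pipeline; the checkpoint name below is just one example choice.
```python
>>> from transformers import TimesformerConfig

>>> # Check the expected clip length and input resolution for a given checkpoint.
>>> config = TimesformerConfig.from_pretrained("facebook/timesformer-base-finetuned-k400")
>>> config.num_frames, config.image_size
```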
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#resources | .md | - [Video classification task guide](../tasks/video_classification) | 102_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | This is the configuration class to store the configuration of a [`TimesformerModel`]. It is used to instantiate a
TimeSformer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the TimeSformer
[facebook/timesformer-base-finetuned-k600](https://huggingface.co/facebook/timesformer-base-finetuned-k600)
architecture. | 102_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | [facebook/timesformer-base-finetuned-k600](https://huggingface.co/facebook/timesformer-base-finetuned-k600)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch. | 102_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
num_frames (`int`, *optional*, defaults to 8):
The number of frames in each video.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder. | 102_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): | 102_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities. | 102_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values. | 102_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
attention_type (`str`, *optional*, defaults to `"divided_space_time"`):
The attention type to use. Must be one of `"divided_space_time"`, `"space_only"`, `"joint_space_time"`.
drop_path_rate (`float`, *optional*, defaults to 0):
The dropout ratio for stochastic depth.
Example:
```python
>>> from transformers import TimesformerConfig, TimesformerModel | 102_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerconfig | .md | >>> # Initializing a TimeSformer timesformer-base style configuration
>>> configuration = TimesformerConfig()
>>> # Initializing a model from the configuration
>>> model = TimesformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 102_4_7 |
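A minimal sketch (not from the original docs) of overriding some of the arguments documented above when instantiating a configuration; the chosen values are illustrative only.
```python
>>> from transformers import TimesformerConfig, TimesformerModel

>>> # Customize the architecture, e.g. longer clips and a different attention scheme.
>>> custom_config = TimesformerConfig(num_frames=16, attention_type="joint_space_time")
>>> model = TimesformerModel(custom_config)  # randomly initialized weights
```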
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformermodel | .md | The bare TimeSformer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`TimesformerConfig`]): Model configuration class with all the parameters of the model. | 102_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformermodel | .md | behavior.
Parameters:
config ([`TimesformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 102_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerforvideoclassification | .md | TimeSformer Model transformer with a video classification head on top (a linear layer on top of the final hidden state
of the [CLS] token), e.g. for Kinetics-400.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`TimesformerConfig`]): Model configuration class with all the parameters of the model. | 102_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timesformer.md | https://huggingface.co/docs/transformers/en/model_doc/timesformer/#timesformerforvideoclassification | .md | behavior.
Parameters:
config ([`TimesformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 102_6_1 |
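A hedged inference sketch for the video-classification head (not taken from the chunk above): the random array stands in for real sampled frames, and the Kinetics-400 checkpoint name is an assumption.
```python
>>> import numpy as np
>>> import torch
>>> from transformers import AutoImageProcessor, TimesformerForVideoClassification

>>> # 8 frames of shape (channels, height, width) standing in for a real sampled clip
>>> video = list(np.random.randn(8, 3, 224, 224))

>>> processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
>>> model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")

>>> inputs = processor(video, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> print(model.config.id2label[logits.argmax(-1).item()])
```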
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 103_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 103_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's
BERT model released in 2018 and Facebook's RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in
RoBERTa.
The abstract from the paper is the following: | 103_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | RoBERTa.
The abstract from the paper is the following:
*Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the | 103_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to | 103_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% | 103_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.*
The following information is visible directly on the [original implementation | 103_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | The following information is visible directly on the [original implementation
repository](https://github.com/microsoft/DeBERTa). DeBERTa v2 is the second version of the DeBERTa model. It includes
the 1.5B model used for the SuperGLUE single-model submission, which achieves 89.9 versus the human baseline of 89.8. You can
find more details about this submission in the authors'
[blog](https://www.microsoft.com/en-us/research/blog/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark/) | 103_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | New in v2:
- **Vocabulary** In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data.
Instead of a GPT2-based tokenizer, the tokenizer is now a
[sentencepiece-based](https://github.com/google/sentencepiece) tokenizer.
- **nGiE (nGram Induced Input Encoding)** The DeBERTa-v2 model uses an additional convolution layer alongside the first
transformer layer to better learn the local dependency of input tokens. | 103_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | transformer layer to better learn the local dependency of input tokens.
- **Sharing position projection matrix with content projection matrix in attention layer** Based on previous
experiments, this can save parameters without affecting the performance.
- **Apply bucket to encode relative positions** The DeBERTa-v2 model uses a log bucket to encode relative positions,
similar to T5.
- **900M model & 1.5B model** Two additional model sizes are available: 900M and 1.5B, which significantly improve the | 103_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#overview | .md | - **900M model & 1.5B model** Two additional model sizes are available: 900M and 1.5B, which significantly improve the
performance of downstream tasks.
This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). The TF 2.0 implementation of this model was
contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/DeBERTa). | 103_1_8 |
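To make the v2 tokenizer change above concrete, here is a small sketch (an assumption, not part of the original text) that loads the SentencePiece-based tokenizer and inspects its vocabulary.
```python
>>> from transformers import AutoTokenizer

>>> # The v2 checkpoints resolve to the SentencePiece-based DebertaV2 tokenizer classes.
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> print(type(tokenizer).__name__, tokenizer.vocab_size)
>>> print(tokenizer.tokenize("DeBERTa-v2 uses a 128K SentencePiece vocabulary."))
```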
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#resources | .md | - [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice) | 103_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | This is the configuration class to store the configuration of a [`DebertaV2Model`]. It is used to instantiate a
DeBERTa-v2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the DeBERTa
[microsoft/deberta-v2-xlarge](https://huggingface.co/microsoft/deberta-v2-xlarge) architecture. | 103_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | [microsoft/deberta-v2-xlarge](https://huggingface.co/microsoft/deberta-v2-xlarge) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Arguments:
vocab_size (`int`, *optional*, defaults to 128100):
Vocabulary size of the DeBERTa-v2 model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`DebertaV2Model`]. | 103_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | the `inputs_ids` passed when calling [`DebertaV2Model`].
hidden_size (`int`, *optional*, defaults to 1536):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 24):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 6144): | 103_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | intermediate_size (`int`, *optional*, defaults to 6144):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"`, `"gelu"`, `"tanh"`, `"gelu_fast"`, `"mish"`, `"linear"`, `"sigmoid"` and `"gelu_new"`
are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1): | 103_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048). | 103_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 0):
The vocabulary size of the `token_type_ids` passed when calling [`DebertaV2Model`] or [`TFDebertaV2Model`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-7):
The epsilon used by the layer normalization layers. | 103_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | layer_norm_eps (`float`, *optional*, defaults to 1e-7):
The epsilon used by the layer normalization layers.
relative_attention (`bool`, *optional*, defaults to `True`):
Whether to use relative position encoding.
max_relative_positions (`int`, *optional*, defaults to -1):
The range of relative positions `[-max_position_embeddings, max_position_embeddings]`. Use the same value
as `max_position_embeddings`.
pad_token_id (`int`, *optional*, defaults to 0):
The value used to pad input_ids. | 103_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | as `max_position_embeddings`.
pad_token_id (`int`, *optional*, defaults to 0):
The value used to pad input_ids.
position_biased_input (`bool`, *optional*, defaults to `True`):
Whether to add absolute position embeddings to the content embeddings.
pos_att_type (`List[str]`, *optional*):
The type of relative position attention. It can be a combination of `"p2c"` and `"c2p"`, e.g. `["p2c"]`,
`["c2p"]`, `["p2c", "c2p"]`.
layer_norm_eps (`float`, *optional*, defaults to 1e-12): | 103_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | `["c2p"]`, `["p2c", "c2p"]`.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
legacy (`bool`, *optional*, defaults to `True`):
Whether or not the model should use the legacy `LegacyDebertaOnlyMLMHead`, which does not work properly
for mask infilling tasks.
Example:
```python
>>> from transformers import DebertaV2Config, DebertaV2Model | 103_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2config | .md | >>> # Initializing a DeBERTa-v2 microsoft/deberta-v2-xlarge style configuration
>>> configuration = DebertaV2Config()
>>> # Initializing a model (with random weights) from the microsoft/deberta-v2-xlarge style configuration
>>> model = DebertaV2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 103_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | Constructs a DeBERTa-v2 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to lowercase the input when tokenizing.
bos_token (`string`, *optional*, defaults to `"[CLS]"`): | 103_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | Whether or not to lowercase the input when tokenizing.
bos_token (`string`, *optional*, defaults to `"[CLS]"`):
The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
eos_token (`string`, *optional*, defaults to `"[SEP]"`): | 103_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | sequence. The token used is the `cls_token`.
eos_token (`string`, *optional*, defaults to `"[SEP]"`):
The end of sequence token. When building a sequence using special tokens, this is not the token that is
used for the end of sequence. The token used is the `sep_token`.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`): | 103_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths. | 103_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`): | 103_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set: | 103_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) | 103_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizer | .md | - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary | 103_4_7 |
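A brief sketch (assumed checkpoint, not from the original chunk) showing how the special tokens described above are laid out for a sequence pair.
```python
>>> from transformers import DebertaV2Tokenizer

>>> tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> encoded = tokenizer("How are you?", "I am fine.")
>>> # Expected layout: [CLS] sequence A [SEP] sequence B [SEP]
>>> print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```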
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | Constructs a DeBERTa-v2 fast tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to lowercase the input when tokenizing.
bos_token (`string`, *optional*, defaults to `"[CLS]"`): | 103_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | Whether or not to lowercase the input when tokenizing.
bos_token (`string`, *optional*, defaults to `"[CLS]"`):
The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
eos_token (`string`, *optional*, defaults to `"[SEP]"`): | 103_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | sequence. The token used is the `cls_token`.
eos_token (`string`, *optional*, defaults to `"[SEP]"`):
The end of sequence token. When building a sequence using special tokens, this is not the token that is
used for the end of sequence. The token used is the `sep_token`.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`): | 103_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths. | 103_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`): | 103_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set: | 103_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) | 103_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2tokenizerfast | .md | - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Methods: build_inputs_with_special_tokens
- create_token_type_ids_from_sequences
<frameworkcontent>
<pt> | 103_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2model | .md | The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled
Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. | 103_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2model | .md | improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model. | 103_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2model | .md | and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 103_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2pretrainedmodel | .md | An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
Methods: forward | 103_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2formaskedlm | .md | DeBERTa Model with a `language modeling` head on top.
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled
Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. | 103_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2formaskedlm | .md | improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model. | 103_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2formaskedlm | .md | and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 103_8_2 |
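A hedged sketch of the masked-LM forward pass (the checkpoint choice is an assumption); note the `legacy` config remark earlier, so mask-infilling quality of public base checkpoints may be limited.
```python
>>> import torch
>>> from transformers import AutoTokenizer, DebertaV2ForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v2-xlarge")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Pick the most likely token at the mask position.
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
```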
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2forsequenceclassification | .md | DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled
Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two | 103_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2forsequenceclassification | .md | on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 103_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2forsequenceclassification | .md | and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 103_9_2 |
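A minimal sketch of the sequence-classification head (the checkpoint name and `num_labels` are assumptions); loading a base checkpoint leaves the head randomly initialized, so the prediction is illustrative until the model is fine-tuned.
```python
>>> import torch
>>> from transformers import AutoTokenizer, DebertaV2ForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge", num_labels=2)

>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax(-1).item()
```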
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2fortokenclassification | .md | DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled
Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two | 103_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2fortokenclassification | .md | on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 103_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2fortokenclassification | .md | and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 103_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2forquestionanswering | .md | DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled
Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built | 103_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2forquestionanswering | .md | Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 103_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2forquestionanswering | .md | This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 103_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2forquestionanswering | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 103_11_3 |
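A hedged span-extraction sketch (checkpoint and example text are assumptions); with a base checkpoint the QA head is randomly initialized, so the extracted span is illustrative only.
```python
>>> import torch
>>> from transformers import AutoTokenizer, DebertaV2ForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForQuestionAnswering.from_pretrained("microsoft/deberta-v2-xlarge")

>>> question, context = "Who proposed DeBERTa?", "DeBERTa was proposed by researchers at Microsoft."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Decode the highest-scoring start/end span.
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])
```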
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2formultiplechoice | .md | DeBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled
Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two | 103_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2formultiplechoice | .md | on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 103_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#debertav2formultiplechoice | .md | and behavior.
Parameters:
config ([`DebertaV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf> | 103_12_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#tfdebertav2model | .md | No docstring available for TFDebertaV2Model
Methods: call | 103_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#tfdebertav2pretrainedmodel | .md | No docstring available for TFDebertaV2PreTrainedModel
Methods: call | 103_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#tfdebertav2formaskedlm | .md | No docstring available for TFDebertaV2ForMaskedLM
Methods: call | 103_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#tfdebertav2forsequenceclassification | .md | No docstring available for TFDebertaV2ForSequenceClassification
Methods: call | 103_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#tfdebertav2fortokenclassification | .md | No docstring available for TFDebertaV2ForTokenClassification
Methods: call | 103_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#tfdebertav2forquestionanswering | .md | No docstring available for TFDebertaV2ForQuestionAnswering
Methods: call | 103_18_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deberta-v2.md | https://huggingface.co/docs/transformers/en/model_doc/deberta-v2/#tfdebertav2formultiplechoice | .md | No docstring available for TFDebertaV2ForMultipleChoice
Methods: call
</tf>
</frameworkcontent> | 103_19_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 104_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 104_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#overview | .md | Helper class to enable loading timm models to be used with the transformers library and its autoclasses.
```python
>>> import torch
>>> from PIL import Image
>>> from urllib.request import urlopen
>>> from transformers import AutoModelForImageClassification, AutoImageProcessor
>>> # Load image
>>> image = Image.open(urlopen(
... 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
... )) | 104_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#overview | .md | >>> # Load model and image processor
>>> checkpoint = "timm/resnet50.a1_in1k"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForImageClassification.from_pretrained(checkpoint).eval()
>>> # Preprocess image
>>> inputs = image_processor(image)
>>> # Forward pass
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # Get top 5 predictions
>>> top5_probabilities, top5_class_indices = torch.topk(logits.softmax(dim=1) * 100, k=5)
``` | 104_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#timmwrapperconfig | .md | This is the configuration class to store the configuration for a timm backbone [`TimmWrapper`].
It is used to instantiate a timm model according to the specified arguments, defining the model.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
The config loads ImageNet label descriptions and stores them in the `id2label` attribute; the `label2id` attribute for default | 104_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#timmwrapperconfig | .md | The config loads ImageNet label descriptions and stores them in the `id2label` attribute; the `label2id` attribute for default
ImageNet models is set to `None` due to occlusions in the label descriptions.
Args:
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
do_pooling (`bool`, *optional*, defaults to `True`):
Whether to do pooling for the last_hidden_state in `TimmWrapperModel` or not.
Example:
```python | 104_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#timmwrapperconfig | .md | Whether to do pooling for the last_hidden_state in `TimmWrapperModel` or not.
Example:
```python
>>> from transformers import TimmWrapperModel | 104_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#timmwrapperconfig | .md | >>> # Initializing a timm model
>>> model = TimmWrapperModel.from_pretrained("timm/resnet18.a1_in1k")
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 104_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#timmwrapperimageprocessor | .md | Wrapper class for timm models to be used within transformers.
Args:
pretrained_cfg (`Dict[str, Any]`):
The configuration of the pretrained model used to resolve evaluation and
training transforms.
architecture (`Optional[str]`, *optional*):
Name of the architecture of the model.
Methods: preprocess | 104_3_0 |
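A short sketch (checkpoint name assumed) of how the processor resolved for a timm checkpoint applies the transforms taken from its `pretrained_cfg`.
```python
>>> from urllib.request import urlopen
>>> from PIL import Image
>>> from transformers import AutoImageProcessor

>>> # AutoImageProcessor resolves to a TimmWrapperImageProcessor for timm checkpoints.
>>> image_processor = AutoImageProcessor.from_pretrained("timm/resnet18.a1_in1k")
>>> image = Image.open(urlopen(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
... ))
>>> inputs = image_processor(image)
>>> print(inputs["pixel_values"].shape)
```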
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#timmwrappermodel | .md | Wrapper class for timm models to be used in transformers.
Methods: forward | 104_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/timm_wrapper.md | https://huggingface.co/docs/transformers/en/model_doc/timm_wrapper/#timmwrapperforimageclassification | .md | Wrapper class for timm models to be used in transformers for image classification.
Methods: forward | 104_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 105_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 105_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#overview | .md | The Chinese-CLIP model was proposed in [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. | 105_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#overview | .md | Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing cross-modal retrieval and also serving as a vision backbone for vision tasks like zero-shot image classification, open-domain object detection, etc. The original Chinese-CLIP code is released [at this link](https://github.com/OFA-Sys/Chinese-CLIP).
The abstract from the paper is the following: | 105_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#overview | .md | *The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, | 105_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#overview | .md | models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive | 105_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#overview | .md | on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). Our codes, pretrained models, and demos have been released.* | 105_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#overview | .md | The Chinese-CLIP model was contributed by [OFA-Sys](https://huggingface.co/OFA-Sys). | 105_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#usage-example | .md | The code snippet below shows how to compute image & text features and similarities:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import ChineseCLIPProcessor, ChineseCLIPModel
>>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") | 105_2_0 |
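The snippet above is cut off at the chunk boundary; what follows is a hedged sketch of how such a similarity computation typically continues with this model/processor pair. The image URL and candidate captions are illustrative assumptions, not taken from the original document.
```python
>>> # Continue from the model/processor loaded above (illustrative inputs).
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["一只猫", "一只狗", "一辆车"]

>>> inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity scores as probabilities
```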