source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertmodel
|
.md
|
The bare Lxmert Model transformer outputting raw hidden-states without any specific head on top.
The LXMERT model was proposed in [LXMERT: Learning Cross-Modality Encoder Representations from
Transformers](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. It's a vision and language transformer
model, pretrained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
|
308_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertmodel
|
.md
|
model, pretrained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region-of-interest feature regression, cross-entropy loss
for question answering, attribute prediction, and object tag prediction.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
308_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertmodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
308_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertmodel
|
.md
|
and behavior.
Parameters:
config ([`LxmertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
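As a quick sketch (not part of the original docstring), the forward pass below feeds the model randomly generated region features in place of the Faster R-CNN outputs LXMERT normally consumes; the number of regions is an arbitrary assumption.
```python
import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("What is on the table?", return_tensors="pt")
num_boxes = 36  # hypothetical number of detected regions
visual_feats = torch.randn(1, num_boxes, 2048)  # stand-in RoI features (normally from a Faster R-CNN)
visual_pos = torch.rand(1, num_boxes, 4)        # stand-in normalized bounding-box coordinates

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
print(outputs.language_output.shape, outputs.vision_output.shape)
```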
|
308_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforpretraining
|
.md
|
Lxmert Model with a specified pretraining head on top.
The LXMERT model was proposed in [LXMERT: Learning Cross-Modality Encoder Representations from
Transformers](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. It's a vision and language transformer
model, pretrained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region-of-interest feature regression, cross-entropy loss
|
308_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforpretraining
|
.md
|
Genome, using a combination of masked language modeling, region-of-interest feature regression, cross-entropy loss
for question answering, attribute prediction, and object tag prediction.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
|
308_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforpretraining
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LxmertConfig`]): Model configuration class with all the parameters of the model.
|
308_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforpretraining
|
.md
|
and behavior.
Parameters:
config ([`LxmertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
308_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforquestionanswering
|
.md
|
Lxmert Model with a visual-answering head on top for downstream QA tasks.
The LXMERT model was proposed in [LXMERT: Learning Cross-Modality Encoder Representations from
Transformers](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. It's a vision and language transformer
model, pretrained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region-of-interest feature regression, cross-entropy loss
|
308_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforquestionanswering
|
.md
|
Genome, using a combination of masked language modeling, region-of-interest feature regression, cross-entropy loss
for question answering, attribute prediction, and object tag prediction.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
|
308_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LxmertConfig`]): Model configuration class with all the parameters of the model.
|
308_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#lxmertforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`LxmertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
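A hedged sketch of the QA head in use; the VQA-finetuned checkpoint name and the placeholder visual inputs are assumptions, not part of the original docstring.
```python
import torch
from transformers import LxmertTokenizer, LxmertForQuestionAnswering

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForQuestionAnswering.from_pretrained("unc-nlp/lxmert-vqa-uncased")  # assumed checkpoint

inputs = tokenizer("What color is the cat?", return_tensors="pt")
visual_feats = torch.randn(1, 36, 2048)  # placeholder region features
visual_pos = torch.rand(1, 36, 4)        # placeholder box coordinates

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
print(outputs.question_answering_score.shape)  # (batch_size, num_qa_labels)
```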
</pt>
<tf>
|
308_10_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#tflxmertmodel
|
.md
|
No docstring available for TFLxmertModel
Methods: call
|
308_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lxmert.md
|
https://huggingface.co/docs/transformers/en/model_doc/lxmert/#tflxmertforpretraining
|
.md
|
No docstring available for TFLxmertForPreTraining
Methods: call
</tf>
</frameworkcontent>
|
308_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
309_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
309_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#overview
|
.md
|
The DPT model was proposed in [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
DPT is a model that leverages the [Vision Transformer (ViT)](vit) as a backbone for dense prediction tasks like semantic segmentation and depth estimation.
The abstract from the paper is the following:
|
309_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#overview
|
.md
|
*We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive
|
309_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#overview
|
.md
|
The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we
|
309_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#overview
|
.md
|
on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the
|
309_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#overview
|
.md
|
that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art.*
|
309_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg"
alt="drawing" width="600"/>
<small> DPT architecture. Taken from the <a href="https://arxiv.org/abs/2103.13413" target="_blank">original paper</a>. </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/isl-org/DPT).
|
309_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#usage-tips
|
.md
|
DPT is compatible with the [`AutoBackbone`] class. This allows you to use the DPT framework with various computer vision backbones available in the library, such as [`VitDetBackbone`] or [`Dinov2Backbone`]. One can create it as follows:
```python
from transformers import Dinov2Config, DPTConfig, DPTForDepthEstimation
```
|
309_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#usage-tips
|
.md
|
```python
# initialize with a Transformer-based backbone such as DINOv2
# in that case, we also specify `reshape_hidden_states=False` to get feature maps of shape (batch_size, num_channels, height, width)
backbone_config = Dinov2Config.from_pretrained("facebook/dinov2-base", out_features=["stage1", "stage2", "stage3", "stage4"], reshape_hidden_states=False)
config = DPTConfig(backbone_config=backbone_config)
model = DPTForDepthEstimation(config=config)
```
|
309_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT.
- Demo notebooks for [`DPTForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT).
- [Semantic segmentation task guide](../tasks/semantic_segmentation)
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
|
309_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#resources
|
.md
|
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
309_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
This is the configuration class to store the configuration of a [`DPTModel`]. It is used to instantiate a DPT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DPT
[Intel/dpt-large](https://huggingface.co/Intel/dpt-large) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
309_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
309_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
309_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
|
309_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
image_size (`int`, *optional*, defaults to 384):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
|
309_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
is_hybrid (`bool`, *optional*, defaults to `False`):
Whether to use a hybrid backbone. Useful in the context of loading DPT-Hybrid models.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
|
309_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
backbone_out_indices (`List[int]`, *optional*, defaults to `[2, 5, 8, 11]`):
Indices of the intermediate hidden states to use from the backbone.
readout_type (`str`, *optional*, defaults to `"project"`):
The readout type to use when processing the readout token (CLS token) of the intermediate hidden states of
the ViT backbone. Can be one of [`"ignore"`, `"add"`, `"project"`].
|
309_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
the ViT backbone. Can be one of [`"ignore"`, `"add"`, `"project"`].
- "ignore" simply ignores the CLS token.
- "add" passes the information from the CLS token to all other tokens by adding the representations.
- "project" passes information to the other tokens by concatenating the readout to all other tokens before
projecting the representation to the original feature dimension D using a linear layer followed by a GELU
non-linearity.
|
309_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
projecting the representation to the original feature dimension D using a linear layer followed by a GELU
non-linearity.
reassemble_factors (`List[float]`, *optional*, defaults to `[4, 2, 1, 0.5]`):
The up/downsampling factors of the reassemble layers.
neck_hidden_sizes (`List[int]`, *optional*, defaults to `[96, 192, 384, 768]`):
The hidden sizes to project to for the feature maps of the backbone.
fusion_hidden_size (`int`, *optional*, defaults to 256):
The number of channels before fusion.
|
309_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
fusion_hidden_size (`int`, *optional*, defaults to 256):
The number of channels before fusion.
head_in_index (`int`, *optional*, defaults to -1):
The index of the features to use in the heads.
use_batch_norm_in_fusion_residual (`bool`, *optional*, defaults to `False`):
Whether to use batch normalization in the pre-activate residual units of the fusion blocks.
use_bias_in_fusion_residual (`bool`, *optional*, defaults to `True`):
Whether to use bias in the pre-activate residual units of the fusion blocks.
|
309_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
Whether to use bias in the pre-activate residual units of the fusion blocks.
add_projection (`bool`, *optional*, defaults to `False`):
Whether to add a projection layer before the depth estimation head.
use_auxiliary_head (`bool`, *optional*, defaults to `True`):
Whether to use an auxiliary head during training.
auxiliary_loss_weight (`float`, *optional*, defaults to 0.4):
Weight of the cross-entropy loss of the auxiliary head.
semantic_loss_ignore_index (`int`, *optional*, defaults to 255):
|
309_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
Weight of the cross-entropy loss of the auxiliary head.
semantic_loss_ignore_index (`int`, *optional*, defaults to 255):
The index that is ignored by the loss function of the semantic segmentation model.
semantic_classifier_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the semantic classification head.
backbone_featmap_shape (`List[int]`, *optional*, defaults to `[1, 1024, 24, 24]`):
Used only for the `hybrid` embedding type. The shape of the feature maps of the backbone.
|
309_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
Used only for the `hybrid` embedding type. The shape of the feature maps of the backbone.
neck_ignore_stages (`List[int]`, *optional*, defaults to `[0, 1]`):
Used only for the `hybrid` embedding type. The stages of the readout layers to ignore.
backbone_config (`Union[Dict[str, Any], PretrainedConfig]`, *optional*):
The configuration of the backbone model. Only used in case `is_hybrid` is `True` or in case you want to
leverage the [`AutoBackbone`] API.
backbone (`str`, *optional*):
|
309_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
leverage the [`AutoBackbone`] API.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
|
309_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
Example:
|
309_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
Example:
```python
>>> from transformers import DPTModel, DPTConfig
```
|
309_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptconfig
|
.md
|
```python
>>> # Initializing a DPT dpt-large style configuration
>>> configuration = DPTConfig()
>>> # Initializing a model from the dpt-large style configuration
>>> model = DPTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
309_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptfeatureextractor
|
.md
|
No docstring available for DPTFeatureExtractor
Methods: __call__
- post_process_semantic_segmentation
|
309_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptimageprocessor
|
.md
|
Constructs a DPT image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions. Can be overridden by `do_resize` in `preprocess`.
size (`Dict[str, int]`, *optional*, defaults to `{"height": 384, "width": 384}`):
Size of the image after resizing. Can be overridden by `size` in `preprocess`.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
|
309_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptimageprocessor
|
.md
|
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Defines the resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`.
keep_aspect_ratio (`bool`, *optional*, defaults to `False`):
If `True`, the image is resized to the largest possible size such that the aspect ratio is preserved. Can
be overridden by `keep_aspect_ratio` in `preprocess`.
ensure_multiple_of (`int`, *optional*, defaults to 1):
|
309_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptimageprocessor
|
.md
|
be overridden by `keep_aspect_ratio` in `preprocess`.
ensure_multiple_of (`int`, *optional*, defaults to 1):
If `do_resize` is `True`, the image is resized to a size that is a multiple of this value. Can be overridden
by `ensure_multiple_of` in `preprocess`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
`preprocess`.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
|
309_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptimageprocessor
|
.md
|
`preprocess`.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in `preprocess`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
|
309_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptimageprocessor
|
.md
|
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
|
309_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptimageprocessor
|
.md
|
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `False`):
Whether to apply center padding. This was introduced in the DINOv2 paper, which uses the model in
combination with DPT.
size_divisor (`int`, *optional*):
|
309_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptimageprocessor
|
.md
|
combination with DPT.
size_divisor (`int`, *optional*):
If `do_pad` is `True`, pads the image dimensions to be divisible by this value. This was introduced in the
DINOv2 paper, which uses the model in combination with DPT.
Methods: preprocess
- post_process_semantic_segmentation
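To make the resizing options above concrete, here is a minimal sketch (the parameter combination and the dummy input are assumptions for illustration):
```python
import numpy as np
from PIL import Image
from transformers import DPTImageProcessor

processor = DPTImageProcessor(
    size={"height": 384, "width": 384},
    keep_aspect_ratio=True,  # preserve the aspect ratio while resizing
    ensure_multiple_of=32,   # snap both output dimensions to a multiple of 32
)
image = Image.fromarray(np.zeros((480, 640, 3), dtype=np.uint8))  # dummy 640x480 image
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (1, 3, height, width), both divisible by 32
```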
|
309_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptmodel
|
.md
|
The bare DPT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`DPTConfig`]): Model configuration class with all the parameters of the model.
|
309_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptmodel
|
.md
|
behavior.
Parameters:
config ([`DPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
309_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptfordepthestimation
|
.md
|
DPT Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`DPTConfig`]): Model configuration class with all the parameters of the model.
|
309_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptfordepthestimation
|
.md
|
behavior.
Parameters:
config ([`DPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
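A minimal inference sketch, assuming the [Intel/dpt-large](https://huggingface.co/Intel/dpt-large) checkpoint and a COCO test image:
```python
import requests
import torch
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # (batch_size, height, width)
print(predicted_depth.shape)
```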
|
309_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptforsemanticsegmentation
|
.md
|
DPT Model with a semantic segmentation head on top e.g. for ADE20k, Cityscapes.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`DPTConfig`]): Model configuration class with all the parameters of the model.
|
309_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dpt/#dptforsemanticsegmentation
|
.md
|
behavior.
Parameters:
config ([`DPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
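A corresponding segmentation sketch, assuming the ADE20k-finetuned [Intel/dpt-large-ade](https://huggingface.co/Intel/dpt-large-ade) checkpoint:
```python
import requests
import torch
from PIL import Image
from transformers import DPTImageProcessor, DPTForSemanticSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large-ade")
model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# upsample the logits and take the per-pixel argmax at the original resolution
segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(segmentation.shape)  # (height, width) map of class indices
```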
|
309_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
310_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
310_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#overview
|
.md
|
The CANINE model was proposed in [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's
among the first papers that train a Transformer without using an explicit tokenization step (such as Byte Pair
Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character level.
|
310_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#overview
|
.md
|
Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character level.
Training at a character-level inevitably comes with a longer sequence length, which CANINE solves with an efficient
downsampling strategy, before applying a deep Transformer encoder.
The abstract from the paper is the following:
*Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models
|
310_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#overview
|
.md
|
*Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models
still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword
lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all
languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE,
|
310_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#overview
|
.md
|
languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE,
a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a
pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias.
To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input
|
310_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#overview
|
.md
|
To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input
sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by
2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.*
|
310_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#overview
|
.md
|
2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.*
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/language/tree/master/language/canine).
|
310_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#usage-tips
|
.md
|
- CANINE uses no less than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single
layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize
the character embeddings, using local attention. Next, after downsampling, a "deep" encoder is applied. Finally,
after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and
downsampling can be found in the paper.
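As a back-of-the-envelope check (a sketch, not from the original docs), the deep encoder sees the character sequence shortened by `downsampling_rate`:
```python
from transformers import CanineConfig

config = CanineConfig()  # downsampling_rate defaults to 4
char_seq_len = 2048      # CANINE's default max character sequence length
print(char_seq_len // config.downsampling_rate)  # 512 positions reach the deep encoder
```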
|
310_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#usage-tips
|
.md
|
downsampling can be found in the paper.
- CANINE uses a max sequence length of 2048 characters by default. One can use [`CanineTokenizer`]
to prepare text for the model.
- Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token
(which has a predefined Unicode code point). For token classification tasks however, the downsampled sequence of
tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The
|
310_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#usage-tips
|
.md
|
tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The
details for this can be found in the paper.
Model checkpoints:
- [google/canine-c](https://huggingface.co/google/canine-c): Pre-trained with autoregressive character loss,
12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).
- [google/canine-s](https://huggingface.co/google/canine-s): Pre-trained with subword loss, 12-layer,
768-hidden, 12-heads, 121M parameters (size ~500 MB).
|
310_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#usage-example
|
.md
|
CANINE works on raw characters, so it can be used **without a tokenizer**:
```python
>>> from transformers import CanineModel
>>> import torch
>>> model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss
>>> text = "hello world"
>>> # use Python's built-in ord() function to turn each character into its unicode code point id
>>> input_ids = torch.tensor([[ord(char) for char in text]])
```
|
310_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#usage-example
|
.md
|
```python
>>> outputs = model(input_ids)  # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
```
For batched inference and training, however, it is recommended to use the tokenizer (to pad/truncate all
sequences to the same length):
```python
>>> from transformers import CanineTokenizer, CanineModel
>>> model = CanineModel.from_pretrained("google/canine-c")
>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
```
|
310_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#usage-example
|
.md
|
```python
>>> model = CanineModel.from_pretrained("google/canine-c")
>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
>>> inputs = ["Life is like a box of chocolates.", "You never know what you're gonna get."]
>>> encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
>>> outputs = model(**encoding) # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
```
|
310_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Multiple choice task guide](../tasks/multiple_choice)
|
310_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
This is the configuration class to store the configuration of a [`CanineModel`]. It is used to instantiate a
CANINE model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the CANINE
[google/canine-s](https://huggingface.co/google/canine-s) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
310_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the deep Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
310_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
Number of hidden layers in the deep Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoders.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoders.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
310_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoders, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 16384):
|
310_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 16384):
The maximum sequence length that this model might ever be used with.
type_vocab_size (`int`, *optional*, defaults to 16):
The vocabulary size of the `token_type_ids` passed when calling [`CanineModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
310_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 0):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 57344):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 57345):
End of stream token id.
downsampling_rate (`int`, *optional*, defaults to 4):
|
310_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
End of stream token id.
downsampling_rate (`int`, *optional*, defaults to 4):
The rate at which to downsample the original character sequence length before applying the deep Transformer
encoder.
upsampling_kernel_size (`int`, *optional*, defaults to 4):
The kernel size (i.e. the number of characters in each window) of the convolutional projection layer when
projecting back from `hidden_size`*2 to `hidden_size`.
num_hash_functions (`int`, *optional*, defaults to 8):
|
310_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
projecting back from `hidden_size`*2 to `hidden_size`.
num_hash_functions (`int`, *optional*, defaults to 8):
The number of hash functions to use. Each hash function has its own embedding matrix.
num_hash_buckets (`int`, *optional*, defaults to 16384):
The number of hash buckets to use.
local_transformer_stride (`int`, *optional*, defaults to 128):
The stride of the local attention of the first shallow Transformer encoder. Defaults to 128 for good
TPU/XLA memory alignment.
Example:
|
310_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
TPU/XLA memory alignment.
Example:
```python
>>> from transformers import CanineConfig, CanineModel
```
|
310_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineconfig
|
.md
|
```python
>>> # Initializing a CANINE google/canine-s style configuration
>>> configuration = CanineConfig()
>>> # Initializing a model (with random weights) from the google/canine-s style configuration
>>> model = CanineModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
310_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#caninetokenizer
|
.md
|
Construct a CANINE tokenizer (i.e. a character splitter). It turns text into a sequence of characters, and then
converts each character into its Unicode code point.
[`CanineTokenizer`] inherits from [`PreTrainedTokenizer`].
Refer to superclass [`PreTrainedTokenizer`] for usage examples and documentation concerning parameters.
Args:
model_max_length (`int`, *optional*, defaults to 2048):
The maximum sentence length the model accepts.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
|
310_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#caninetokenizer
|
.md
|
The maximum sentence length the model accepts.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
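A small sketch of the tokenizer at work; the special-token code points mentioned in the comment come from the config defaults listed earlier and are illustrative:
```python
from transformers import CanineTokenizer

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
encoding = tokenizer("hello")
# Each character becomes its Unicode code point, wrapped in special tokens
# (CLS/SEP live in the private-use area: 57344 and 57345 per the config defaults).
print(encoding["input_ids"])
```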
|
310_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canine-specific-outputs
|
.md
|
models.canine.modeling_canine.CanineModelOutputWithPooling
Output type of [`CanineModel`]. Based on [`~modeling_outputs.BaseModelOutputWithPooling`], but with slightly
different `hidden_states` and `attentions`, as these also include the hidden states and attentions of the shallow
Transformer encoders.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model (i.e. the output of the final
|
310_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canine-specific-outputs
|
.md
|
Sequence of hidden-states at the output of the last layer of the model (i.e. the output of the final
shallow Transformer encoder).
pooler_output (`torch.FloatTensor` of shape `(batch_size, hidden_size)`):
Hidden-state of the first token of the sequence (classification token) at the last layer of the deep
Transformer encoder, further processed by a Linear layer and a Tanh activation function. The Linear layer
|
310_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canine-specific-outputs
|
.md
|
Transformer encoder, further processed by a Linear layer and a Tanh activation function. The Linear layer
weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the input to each encoder + one for the output of each layer of each
|
310_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canine-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the input to each encoder + one for the output of each layer of each
encoder) of shape `(batch_size, sequence_length, hidden_size)` and `(batch_size, sequence_length //
config.downsampling_rate, hidden_size)`. Hidden-states of the model at the output of each layer plus the
initial input to each Transformer encoder. The hidden states of the shallow encoders have length
`sequence_length`, but the hidden states of the deep encoder have length `sequence_length //
|
310_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canine-specific-outputs
|
.md
|
`sequence_length`, but the hidden states of the deep encoder have length `sequence_length //
config.downsampling_rate`.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of the 3 Transformer encoders of shape `(batch_size,
num_heads, sequence_length, sequence_length)` and `(batch_size, num_heads, sequence_length //
|
310_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canine-specific-outputs
|
.md
|
num_heads, sequence_length, sequence_length)` and `(batch_size, num_heads, sequence_length //
config.downsampling_rate, sequence_length // config.downsampling_rate)`. Attention weights after the
attention softmax, used to compute the weighted average in the self-attention heads.
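A short sketch that surfaces these output fields (assuming the google/canine-s checkpoint):
```python
import torch
from transformers import CanineModel

model = CanineModel.from_pretrained("google/canine-s")
input_ids = torch.tensor([[ord(c) for c in "hello world"]])

outputs = model(input_ids, output_hidden_states=True, output_attentions=True)
print(outputs.last_hidden_state.shape)  # character-resolution states: (1, 11, hidden_size)
print(len(outputs.hidden_states))       # states from the shallow and deep encoders combined
print(len(outputs.attentions))          # attention maps from all three encoders
```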
|
310_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#caninemodel
|
.md
|
The bare CANINE Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
|
310_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#caninemodel
|
.md
|
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
310_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineforsequenceclassification
|
.md
|
CANINE Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
|
310_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineforsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
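A hedged fine-tuning-style sketch; the classification head on top of the pretrained encoder is freshly initialized, so the logits below are meaningless until the model is trained:
```python
import torch
from transformers import CanineTokenizer, CanineForSequenceClassification

tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=2)

encoding = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
print(logits.shape)  # (1, 2); head is untrained, so the values are random
```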
|
310_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineformultiplechoice
|
.md
|
CANINE Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
|
310_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineformultiplechoice
|
.md
|
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
310_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#caninefortokenclassification
|
.md
|
CANINE Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
|
310_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#caninefortokenclassification
|
.md
|
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
310_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineforquestionanswering
|
.md
|
CANINE Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
310_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/canine.md
|
https://huggingface.co/docs/transformers/en/model_doc/canine/#canineforquestionanswering
|
.md
|
behavior.
Parameters:
config ([`CanineConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
310_12_1
|