source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpprocessor
|
.md
|
Constructs a CLVP processor which wraps a CLVP Feature Extractor and a CLVP Tokenizer into a single processor.
[`ClvpProcessor`] offers all the functionalities of [`ClvpFeatureExtractor`] and [`ClvpTokenizer`]. See the
[`~ClvpProcessor.__call__`], [`~ClvpProcessor.decode`] and [`~ClvpProcessor.batch_decode`] for more information.
Args:
feature_extractor (`ClvpFeatureExtractor`):
An instance of [`ClvpFeatureExtractor`]. The feature extractor is a required input.
tokenizer (`ClvpTokenizer`):
|
330_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpprocessor
|
.md
|
An instance of [`ClvpFeatureExtractor`]. The feature extractor is a required input.
tokenizer (`ClvpTokenizer`):
An instance of [`ClvpTokenizer`]. The tokenizer is a required input.
Methods: __call__
- decode
- batch_decode
|
330_9_1
|
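A minimal usage sketch of the processor described above. The checkpoint name `susnato/clvp_dev` and the dummy waveform are assumptions for illustration; the keyword names mirror the feature extractor (`raw_speech`, `sampling_rate`) and tokenizer (`text`) and should be checked against the API reference.
```python
import numpy as np
from transformers import ClvpProcessor

# Assumed checkpoint and dummy 1-second audio clip, for illustration only.
processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
speech = np.random.randn(22050).astype(np.float32)

inputs = processor(
    raw_speech=speech,        # forwarded to ClvpFeatureExtractor
    sampling_rate=22050,
    text="This is an example sentence.",  # forwarded to ClvpTokenizer
    return_tensors="pt",
)
print(inputs.keys())  # e.g. input_ids, attention_mask, input_features
```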
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpmodelforconditionalgeneration
|
.md
|
The composite CLVP model with a text encoder, a speech encoder and a speech decoder model. The speech decoder model generates the speech_ids from the text, and the text encoder and speech encoder work together to filter out the best speech_ids.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
330_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpmodelforconditionalgeneration
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ClvpConfig`]): Model configuration class with all the parameters of the model.
|
330_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpmodelforconditionalgeneration
|
.md
|
and behavior.
Parameters:
config ([`ClvpConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- generate
- get_text_features
- get_speech_features
|
330_10_2
|
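A rough sketch of the composite model's listed methods (`generate`, `get_text_features`, `get_speech_features`). The checkpoint name, dummy audio and keyword argument names are assumptions taken from the processor output convention, not a verified recipe.
```python
import numpy as np
import torch
from transformers import ClvpModelForConditionalGeneration, ClvpProcessor

# Assumed checkpoint; dummy audio used purely for shape illustration.
processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")

speech = np.random.randn(22050).astype(np.float32)
inputs = processor(raw_speech=speech, sampling_rate=22050,
                   text="Hello, world!", return_tensors="pt")

with torch.no_grad():
    # Projected embeddings from the text encoder.
    text_embeds = model.get_text_features(input_ids=inputs["input_ids"])
    # Projected embeddings from the speech encoder (argument names assumed).
    speech_embeds = model.get_speech_features(
        input_ids=inputs["input_ids"], input_features=inputs["input_features"]
    )
    # Full pipeline: the decoder generates candidate speech_ids,
    # then the two encoders are used to rank them.
    outputs = model.generate(**inputs)

print(text_embeds.shape, speech_embeds.shape)
```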
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpforcausallm
|
.md
|
The CLVP decoder model with a language modelling head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
330_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ClvpConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
330_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
330_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpmodel
|
.md
|
The bare Clvp decoder model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
330_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ClvpConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
330_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
330_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoder
|
.md
|
Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
[`ClvpEncoderLayer`].
Args:
config: ClvpConfig
|
330_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoder
|
.md
|
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`ClvpDecoderLayer`]
|
330_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
331_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
331_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#overview
|
.md
|
The Recurrent Gemma model was proposed in [RecurrentGemma: Moving Past Transformers for Efficient Open Language Models](https://storage.googleapis.com/deepmind-media/gemma/recurrentgemma-report.pdf) by the Griffin, RLHF and Gemma Teams of Google.
The abstract from the paper is the following:
|
331_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#overview
|
.md
|
*We introduce RecurrentGemma, an open language model which uses Google’s novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.*
Tips:
|
331_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#overview
|
.md
|
Tips:
- The original checkpoints can be converted using the conversion script [`src/transformers/models/recurrent_gemma/convert_recurrent_gemma_weights_to_hf.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/recurrent_gemma/convert_recurrent_gemma_to_hf.py).
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/google-deepmind/recurrentgemma).
|
331_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
This is the configuration class to store the configuration of a [`RecurrentGemmaModel`]. It is used to instantiate a RecurrentGemma
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the RecurrentGemma-2B.
e.g. [google/recurrentgemma-2b](https://huggingface.co/google/recurrentgemma-2b)
|
331_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
e.g. [google/recurrentgemma-2b](https://huggingface.co/google/recurrentgemma-2b)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_hidden_layers (`int`, *optional*, defaults to 26):
The number of hidden layers in the model.
vocab_size (`int`, *optional*, defaults to 256000):
Vocabulary size of the RecurrentGemma model. Defines the number of
|
331_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
vocab_size (`int`, *optional*, defaults to 256000):
Vocabulary size of the RecurrentGemma model. Defines the number of
different tokens that can be represented by the
`input_ids` passed when calling [`RecurrentGemmaModel`]
hidden_size (`int`, *optional*, defaults to 2560):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 7680):
Dimension of the MLP representations.
num_attention_heads (`int`, *optional*, defaults to 10):
|
331_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
Dimension of the MLP representations.
num_attention_heads (`int`, *optional*, defaults to 10):
The number of heads for the attention block and the number of
heads/blocks for the block-diagonal layers used in the RG-LRU gates.
This number must divide `hidden_size` and `lru_width`.
lru_width (`int` or `None`, *optional*):
Dimension of the hidden representations of the RG-LRU. If `None`
this will be set to `hidden_size`.
Whether to scale the output of the embeddings by `sqrt(hidden_size)`.
|
331_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
this will be set to `hidden_size`.
Whether to scale the output of the embeddings by `sqrt(hidden_size)`.
attention_window_size (`int`, *optional*, defaults to 2048):
The size of the attention window used in the attention block.
conv1d_width (`int`, *optional*, defaults to 4):
The kernel size of conv1d layers used in the recurrent blocks.
logits_soft_cap (`float`, *optional*, defaults to 30.0):
|
331_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
The kernel size of conv1d layers used in the recurrent blocks.
logits_soft_cap (`float`, *optional*, defaults to 30.0):
The value at which the logits are soft-capped after the transformer and LM-head computation in the causal LM architecture.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether the model should return the last key/values
attentions (not used by all models). Only
|
331_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
Whether the model should return the last key/values
attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*, defaults to 0):
Padding token id.
eos_token_id (`int`, *optional*, defaults to 1):
End of stream token id.
bos_token_id (`int`, *optional*, defaults to 2):
Beginning of stream token id.
hidden_activation (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
|
331_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
Beginning of stream token id.
hidden_activation (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
The hidden activation used in the recurrent block as well as the MLP layer of the decoder layers.
partial_rotary_factor (`float`, *optional*, defaults to 0.5):
The partial rotary factor used in the initialization of the rotary embeddings.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
|
331_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
block_types (`List[str]`, *optional*, defaults to `('recurrent', 'recurrent', 'attention')`):
List of alternating blocks that will be repeated to initialize the `temporal_block` layer.
attention_dropout (`float`, *optional*, defaults to 0.0): dropout value to use after the attention softmax.
num_key_value_heads (`int`, *optional*, defaults to 16): Number of key-value heads used to implement Grouped Query Attention (GQA).
|
331_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
num_key_value_heads (`int`, *optional*, defaults to 16): Number of key-value heads used to implement Grouped Query Attention (GQA).
attention_bias (`bool`, *optional*, defaults to `False`): Whether the query, key and value linear projections of the attention layers should have a bias.
w_init_variance_scale (`float`, *optional*, defaults to 0.01): The variance scale used for weight initialization.
```python
>>> from transformers import RecurrentGemmaModel, RecurrentGemmaConfig
|
331_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaconfig
|
.md
|
>>> # Initializing a RecurrentGemma recurrentgemma-2b style configuration
>>> configuration = RecurrentGemmaConfig()
>>> # Initializing a model from the recurrentgemma-2b style configuration
>>> model = RecurrentGemmaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
331_2_10
|
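Beyond the default configuration shown above, a custom configuration can be built from the parameters documented in this section. The sketch below uses arbitrary, illustrative values only.
```python
from transformers import RecurrentGemmaConfig, RecurrentGemmaModel

# A smaller, custom configuration built only from parameters documented above.
# Values are arbitrary; num_attention_heads must divide hidden_size and lru_width.
config = RecurrentGemmaConfig(
    num_hidden_layers=6,
    hidden_size=512,
    intermediate_size=1536,
    num_attention_heads=8,
    attention_window_size=1024,
    block_types=("recurrent", "recurrent", "attention"),
)

model = RecurrentGemmaModel(config)  # randomly initialized weights
print(sum(p.numel() for p in model.parameters()))
```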
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmamodel
|
.md
|
The bare RecurrentGemma Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
331_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RecurrentGemmaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
331_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmamodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`RecurrentGemmaDecoderLayer`]
Args:
config: RecurrentGemmaConfig
Methods: forward
|
331_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/recurrent_gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/recurrent_gemma/#recurrentgemmaforcausallm
|
.md
|
No docstring available for RecurrentGemmaForCausalLM
Methods: forward
|
331_4_0
|
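A minimal text-generation sketch with the causal LM head, assuming the [google/recurrentgemma-2b](https://huggingface.co/google/recurrentgemma-2b) checkpoint referenced above is accessible (the repository may be gated on the Hub).
```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaForCausalLM.from_pretrained(
    "google/recurrentgemma-2b", torch_dtype=torch.bfloat16
)

inputs = tokenizer("The Griffin architecture combines", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```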
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
332_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
332_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#overview
|
.md
|
The DETR model was proposed in [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR
consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for
object detection. It greatly simplifies much of the complexity of models like Faster R-CNN and Mask R-CNN, which use
|
332_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#overview
|
.md
|
object detection. It greatly simplifies much of the complexity of models like Faster R-CNN and Mask R-CNN, which use
things like region proposals, non-maximum suppression procedure and anchor generation. Moreover, DETR can also be
naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs.
The abstract from the paper is the following:
|
332_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#overview
|
.md
|
The abstract from the paper is the following:
*We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the
detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression
procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the
|
332_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#overview
|
.md
|
procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the
new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via
bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries,
DETR reasons about the relations of the objects and the global image context to directly output the final set of
|
332_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#overview
|
.md
|
DETR reasons about the relations of the objects and the global image context to directly output the final set of
predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many
other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and
highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily
|
332_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#overview
|
.md
|
highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily
generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive
baselines.*
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/detr).
|
332_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
Here's a TLDR explaining how [`~transformers.DetrForObjectDetection`] works:
First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use
ResNet-50/ResNet-101). Let's assume we also add a batch dimension. This means that the input to the backbone is a
tensor of shape `(batch_size, 3, height, width)`, assuming the image has 3 color channels (RGB). The CNN backbone
|
332_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
tensor of shape `(batch_size, 3, height, width)`, assuming the image has 3 color channels (RGB). The CNN backbone
outputs a new lower-resolution feature map, typically of shape `(batch_size, 2048, height/32, width/32)`. This is
then projected to match the hidden dimension of the Transformer of DETR, which is `256` by default, using a
`nn.Conv2d` layer. So now, we have a tensor of shape `(batch_size, 256, height/32, width/32)`. Next, the
|
332_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
`nn.Conv2d` layer. So now, we have a tensor of shape `(batch_size, 256, height/32, width/32)`. Next, the
feature map is flattened and transposed to obtain a tensor of shape `(batch_size, seq_len, d_model)` =
`(batch_size, width/32*height/32, 256)`. So a difference with NLP models is that the sequence length is actually
longer than usual, but with a smaller `d_model` (which in NLP is typically 768 or higher).
|
332_2_2
|
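To make the shape bookkeeping above concrete, here is a tensor-only sketch of the projection, flattening and transposition steps. It uses plain PyTorch with illustrative image dimensions and does not touch any DETR weights.
```python
import torch
import torch.nn as nn

batch_size, height, width = 1, 800, 1066

# Backbone output: (batch_size, 2048, height/32, width/32)
features = torch.randn(batch_size, 2048, height // 32, width // 32)

# 1x1 convolution projecting 2048 channels down to d_model = 256
projection = nn.Conv2d(2048, 256, kernel_size=1)
projected = projection(features)            # (1, 256, 25, 33)

# Flatten spatial dimensions and transpose to (batch_size, seq_len, d_model)
seq = projected.flatten(2).transpose(1, 2)  # (1, 25*33, 256) = (1, 825, 256)
print(projected.shape, seq.shape)
```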
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
longer than usual, but with a smaller `d_model` (which in NLP is typically 768 or higher).
Next, this is sent through the encoder, outputting `encoder_hidden_states` of the same shape (you can consider
these as image features). Next, so-called **object queries** are sent through the decoder. This is a tensor of shape
`(batch_size, num_queries, d_model)`, with `num_queries` typically set to 100 and initialized with zeros.
|
332_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
`(batch_size, num_queries, d_model)`, with `num_queries` typically set to 100 and initialized with zeros.
These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to
the encoder, they are added to the input of each attention layer. Each object query will look for a particular object
in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers
|
332_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers
to output `decoder_hidden_states` of the same shape: `(batch_size, num_queries, d_model)`. Next, two heads
are added on top for object detection: a linear layer for classifying each object query into one of the objects or "no
object", and a MLP to predict bounding boxes for each query.
|
332_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
object", and a MLP to predict bounding boxes for each query.
The model is trained using a **bipartite matching loss**: so what we actually do is compare the predicted classes +
bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N
(so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as
|
332_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
(so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as
bounding box). The [Hungarian matching algorithm](https://en.wikipedia.org/wiki/Hungarian_algorithm) is used to find
an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for
the classes) and a linear combination of the L1 and [generalized IoU loss](https://giou.stanford.edu/) (for the
|
332_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
the classes) and a linear combination of the L1 and [generalized IoU loss](https://giou.stanford.edu/) (for the
bounding boxes) are used to optimize the parameters of the model.
DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance
segmentation). [`~transformers.DetrForSegmentation`] adds a segmentation mask head on top of
[`~transformers.DetrForObjectDetection`]. The mask head can be trained either jointly, or in a two steps process,
|
332_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
[`~transformers.DetrForObjectDetection`]. The mask head can be trained either jointly, or in a two steps process,
where one first trains a [`~transformers.DetrForObjectDetection`] model to detect bounding boxes around both
"things" (instances) and "stuff" (background things like trees, roads, sky), then freeze all the weights and train only
the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is
|
332_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#how-detr-works
|
.md
|
the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is
required for the training to be possible, since the Hungarian matching is computed using distances between boxes.
|
332_2_10
|
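The bipartite matching step described above can be illustrated with a toy cost matrix and `scipy.optimize.linear_sum_assignment` (the Hungarian-algorithm solver the original matcher relies on). The numbers below are made up; in practice each entry combines the class probability, the L1 box distance and the generalized IoU.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: rows = 5 predicted queries, columns = 5 targets
# (2 real objects padded with 3 "no object" entries). Lower cost = better match.
cost = np.array([
    [0.2, 0.9, 0.5, 0.5, 0.5],
    [0.8, 0.1, 0.5, 0.5, 0.5],
    [0.7, 0.6, 0.3, 0.3, 0.3],
    [0.9, 0.8, 0.3, 0.3, 0.3],
    [0.6, 0.7, 0.3, 0.3, 0.3],
])

pred_idx, target_idx = linear_sum_assignment(cost)
print(list(zip(pred_idx, target_idx)))
# Each query is matched one-to-one with a (possibly "no object") target;
# the classification and box losses are then computed on these matched pairs.
```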
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
- DETR uses so-called **object queries** to detect objects in an image. The number of queries determines the maximum
number of objects that can be detected in a single image, and is set to 100 by default (see parameter
`num_queries` of [`~transformers.DetrConfig`]). Note that it's good to have some slack (in COCO, the
authors used 100, while the maximum number of objects in a COCO image is ~70).
|
332_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
authors used 100, while the maximum number of objects in a COCO image is ~70).
- The decoder of DETR updates the query embeddings in parallel. This is different from language models like GPT-2,
which use autoregressive decoding instead of parallel decoding. Hence, no causal attention mask is used.
- DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting
|
332_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
- DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting
to queries and keys. For the position embeddings of the image, one can choose between fixed sinusoidal or learned
absolute position embeddings. By default, the parameter `position_embedding_type` of
[`~transformers.DetrConfig`] is set to `"sine"`.
- During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help
|
332_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
- During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help
the model output the correct number of objects of each class. If you set the parameter `auxiliary_loss` of
[`~transformers.DetrConfig`] to `True`, then prediction feedforward neural networks and Hungarian losses
are added after each decoder layer (with the FFNs sharing parameters).
|
332_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
are added after each decoder layer (with the FFNs sharing parameters).
- If you want to train the model in a distributed environment across multiple nodes, then you should update the
_num_boxes_ variable in the _DetrLoss_ class of _modeling_detr.py_. When training on multiple nodes, this should be
|
332_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
_num_boxes_ variable in the _DetrLoss_ class of _modeling_detr.py_. When training on multiple nodes, this should be
set to the average number of target boxes across all nodes, as can be seen in the original implementation [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/models/detr.py#L227-L232).
- [`~transformers.DetrForObjectDetection`] and [`~transformers.DetrForSegmentation`] can be initialized with
|
332_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
- [`~transformers.DetrForObjectDetection`] and [`~transformers.DetrForSegmentation`] can be initialized with
any convolutional backbone available in the [timm library](https://github.com/rwightman/pytorch-image-models).
Initializing with a MobileNet backbone for example can be done by setting the `backbone` attribute of
[`~transformers.DetrConfig`] to `"tf_mobilenetv3_small_075"`, and then initializing the model with that
config.
|
332_3_6
|
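A short sketch of the tip above. The backbone name `"tf_mobilenetv3_small_075"` is the one quoted in the tip; the timm library must be installed, and the Transformer part of the resulting model is randomly initialized.
```python
from transformers import DetrConfig, DetrForObjectDetection

# Swap the default ResNet-50 backbone for a timm MobileNetV3 backbone.
config = DetrConfig(
    use_timm_backbone=True,
    backbone="tf_mobilenetv3_small_075",
    use_pretrained_backbone=True,  # load pretrained backbone weights from timm
)
model = DetrForObjectDetection(config)
```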
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
[`~transformers.DetrConfig`] to `"tf_mobilenetv3_small_075"`, and then initializing the model with that
config.
- DETR resizes the input images such that the shortest side is at least a certain number of pixels while the longest is
at most 1333 pixels. At training time, scale augmentation is used such that the shortest side is randomly set to at
least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use
|
332_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use
[`~transformers.DetrImageProcessor`] to prepare images (and optional annotations in COCO format) for the
model. Due to this resizing, images in a batch can have different sizes. DETR solves this by padding images up to the
largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding.
|
332_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding.
Alternatively, one can also define a custom `collate_fn` in order to batch images together, using
[`~transformers.DetrImageProcessor.pad_and_create_pixel_mask`].
- The size of the images will determine the amount of memory being used, and will thus determine the `batch_size`.
|
332_3_9
|
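A minimal sketch of such a custom `collate_fn`, using the image processor's `pad` method to batch variable-sized images and build the pixel mask. It assumes each dataset item is a dict with `"pixel_values"` (a single 3 x H x W image prepared by the image processor) and `"labels"`; adapt the keys to your own dataset.
```python
from transformers import DetrImageProcessor

image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")

def collate_fn(batch):
    # Pad all images in the batch to the largest height/width and
    # return the matching pixel mask (1 = real pixel, 0 = padding).
    pixel_values = [item["pixel_values"] for item in batch]
    encoding = image_processor.pad(
        pixel_values, return_pixel_mask=True, return_tensors="pt"
    )
    return {
        "pixel_values": encoding["pixel_values"],
        "pixel_mask": encoding["pixel_mask"],
        "labels": [item["labels"] for item in batch],
    }
```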
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
- The size of the images will determine the amount of memory being used, and will thus determine the `batch_size`.
It is advised to use a batch size of 2 per GPU. See [this Github thread](https://github.com/facebookresearch/detr/issues/150) for more info.
There are three ways to instantiate a DETR model (depending on what you prefer):
Option 1: Instantiate DETR with pre-trained weights for entire model
```py
>>> from transformers import DetrForObjectDetection
|
332_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
>>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
```
Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone
```py
>>> from transformers import DetrConfig, DetrForObjectDetection
|
332_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
>>> config = DetrConfig()
>>> model = DetrForObjectDetection(config)
```
Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer
```py
>>> config = DetrConfig(use_pretrained_backbone=False)
>>> model = DetrForObjectDetection(config)
```
As a summary, consider the following table:
| Task | Object detection | Instance segmentation | Panoptic segmentation |
|------|------------------|-----------------------|-----------------------|
|
332_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
|------|------------------|-----------------------|-----------------------|
| **Description** | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as "stuff" (i.e. background things like trees and roads) in an image |
| **Model** | [`~transformers.DetrForObjectDetection`] | [`~transformers.DetrForSegmentation`] | [`~transformers.DetrForSegmentation`] |
|
332_3_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
| **Example dataset** | COCO detection | COCO detection, COCO panoptic | COCO panoptic |
|
332_3_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
| **Format of annotations to provide to** [`~transformers.DetrImageProcessor`] | {'image_id': `int`, 'annotations': `List[Dict]`} each Dict being a COCO object annotation | {'image_id': `int`, 'annotations': `List[Dict]`} (in case of COCO detection) or {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} (in case of COCO panoptic) | {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} and masks_path (path to directory containing PNG files of the masks) |
|
332_3_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
| **Postprocessing** (i.e. converting the output of the model to Pascal VOC format) | [`~transformers.DetrImageProcessor.post_process`] | [`~transformers.DetrImageProcessor.post_process_segmentation`] | [`~transformers.DetrImageProcessor.post_process_segmentation`], [`~transformers.DetrImageProcessor.post_process_panoptic`] |
|
332_3_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
| **Evaluators** | `CocoEvaluator` with `iou_types="bbox"` | `CocoEvaluator` with `iou_types="bbox"` or `"segm"` | `CocoEvaluator` with `iou_types="bbox"` or `"segm"`, `PanopticEvaluator` |
In short, one should prepare the data either in COCO detection or COCO panoptic format, then use
[`~transformers.DetrImageProcessor`] to create `pixel_values`, `pixel_mask` and optional
`labels`, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the
|
332_3_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
`labels`, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the
outputs of the model using one of the postprocessing methods of [`~transformers.DetrImageProcessor`]. These can
be provided to either `CocoEvaluator` or `PanopticEvaluator`, which allow you to calculate metrics like
|
332_3_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#usage-tips
|
.md
|
be provided to either `CocoEvaluator` or `PanopticEvaluator`, which allow you to calculate metrics like
mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the [original repository](https://github.com/facebookresearch/detr). See the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR) for more info regarding evaluation.
|
332_3_19
|
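Putting the preparation and post-processing steps together, here is a minimal object-detection inference sketch with the pre-trained `facebook/detr-resnet-50` checkpoint; the COCO validation image URL is only an example input.
```python
import torch
import requests
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

# Prepare pixel_values and pixel_mask for the model.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to Pascal VOC-style boxes in the original image size.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```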
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR.
<PipelineTag pipeline="object-detection"/>
- All example notebooks illustrating fine-tuning [`DetrForObjectDetection`] and [`DetrForSegmentation`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR).
|
332_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#resources
|
.md
|
- Scripts for finetuning [`DetrForObjectDetection`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection).
- See also: [Object detection task guide](../tasks/object_detection).
|
332_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#resources
|
.md
|
- See also: [Object detection task guide](../tasks/object_detection).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
332_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
This is the configuration class to store the configuration of a [`DetrModel`]. It is used to instantiate a DETR
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DETR
[facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
332_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
use_timm_backbone (`bool`, *optional*, defaults to `True`):
Whether or not to use the `timm` library for the backbone. If set to `False`, will use the [`AutoBackbone`]
API.
backbone_config (`PretrainedConfig` or `dict`, *optional*):
|
332_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
API.
backbone_config (`PretrainedConfig` or `dict`, *optional*):
The configuration of the backbone model. Only used in case `use_timm_backbone` is set to `False` in which
case it will default to `ResNetConfig()`.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
num_queries (`int`, *optional*, defaults to 100):
Number of object queries, i.e. detection slots. This is the maximal number of objects [`DetrModel`] can
detect in a single image. For COCO, we recommend 100 queries.
|
332_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
detect in a single image. For COCO, we recommend 100 queries.
d_model (`int`, *optional*, defaults to 256):
Dimension of the layers.
encoder_layers (`int`, *optional*, defaults to 6):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 6):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 8):
|
332_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
decoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimension of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
|
332_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
332_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (`float`, *optional*, defaults to 1):
|
332_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
init_xavier_std (`float`, *optional*, defaults to 1):
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
|
332_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (`bool`, *optional*, defaults to `False`):
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (`str`, *optional*, defaults to `"sine"`):
Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`.
backbone (`str`, *optional*, defaults to `"resnet50"`):
|
332_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
backbone (`str`, *optional*, defaults to `"resnet50"`):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `True`):
Whether to use pretrained weights for the backbone.
|
332_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
use_pretrained_backbone (`bool`, *optional*, defaults to `True`):
Whether to use pretrained weights for the backbone.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
dilation (`bool`, *optional*, defaults to `False`):
Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when
`use_timm_backbone=True`.
|
332_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
`use_timm_backbone=True`.
class_cost (`float`, *optional*, defaults to 1):
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (`float`, *optional*, defaults to 5):
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (`float`, *optional*, defaults to 1):
|
332_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
mask_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (`float`, *optional*, defaults to 5):
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (`float`, *optional*, defaults to 2):
|
332_5_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
giou_loss_coefficient (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (`float`, *optional*, defaults to 0.1):
Relative classification weight of the 'no-object' class in the object detection loss.
Examples:
```python
>>> from transformers import DetrConfig, DetrModel
|
332_5_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrconfig
|
.md
|
>>> # Initializing a DETR facebook/detr-resnet-50 style configuration
>>> configuration = DetrConfig()
>>> # Initializing a model (with random weights) from the facebook/detr-resnet-50 style configuration
>>> model = DetrModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
332_5_14
|
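Extending the example above, a configuration can also be customized with the parameters documented in this section before building a model; the values below are illustrative only.
```python
from transformers import DetrConfig, DetrForObjectDetection

# Customizing a few documented parameters (illustrative values).
config = DetrConfig(
    num_queries=50,                     # fewer detection slots
    auxiliary_loss=True,                # add losses after each decoder layer
    position_embedding_type="learned",  # instead of the default "sine"
)
model = DetrForObjectDetection(config)  # randomly initialized weights
```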
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
Constructs a Detr image processor.
Args:
format (`str`, *optional*, defaults to `"coco_detection"`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's `(height, width)` dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
|
332_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
|
332_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less than or equal to `shortest_edge` and the longest edge
less than or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less than or equal to `max_height` and the width less than or equal to
`max_width`.
|
332_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
aspect ratio and keeping the height less than or equal to `max_height` and the width less than or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
|
332_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
|
332_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
|
332_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_annotations (`bool`, *optional*, defaults to `True`):
Controls whether to convert the annotations to the format expected by the DETR model. Converts the
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
|
332_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions.
|
332_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
height and width in the batch.
Methods: preprocess
- post_process_object_detection
|
332_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/detr/#detrimageprocessor
|
.md
|
height and width in the batch.
Methods: preprocess
- post_process_object_detection
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
|
332_6_9
|
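A short sketch exercising the documented `size`, `do_pad` and `pad_size` arguments on a dummy image; the numbers are illustrative, and the expected output shapes are noted as comments rather than guaranteed values.
```python
import numpy as np
from transformers import DetrImageProcessor

# Resize so the shortest edge is 480 (longest capped at 800), then pad to 800x800.
image_processor = DetrImageProcessor(
    size={"shortest_edge": 480, "longest_edge": 800},
    do_pad=True,
    pad_size={"height": 800, "width": 800},
)

image = (np.random.rand(3, 600, 400) * 255).astype(np.uint8)  # dummy CHW image
encoding = image_processor(images=image, return_tensors="pt")
print(encoding["pixel_values"].shape, encoding["pixel_mask"].shape)
# expected: torch.Size([1, 3, 800, 800]) torch.Size([1, 800, 800])
```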