source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie weight embeddings rope_theta (`float`, *optional*, defaults to 100000.0): The base period of the RoPE embeddings. pad_token_id (`int`, *optional*, defaults to 3): Padding token id. eos_token_id (`int` | `list`, *optional*, defaults to 2): End of stream token id.
368_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
Padding token id. eos_token_id (`int` | `list`, *optional*, defaults to 2): End of stream token id. bos_token_id (`int`, *optional*, defaults to 1): Beginning of stream token id. attention_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. mlp_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers. ```python
368_9_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers. ```python >>> from transformers import HeliumModel, HeliumConfig >>> # Initializing a Helium 2b style configuration >>> configuration = HeliumConfig() >>> # Initializing a model from the Helium 2b style configuration >>> model = HeliumModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
368_9_9
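For illustration, the token-id and bias arguments described above can also be passed explicitly when building the configuration; the values below simply restate the documented defaults and are not a recommendation.
```python
>>> from transformers import HeliumConfig

>>> # Explicitly passing the documented defaults (illustrative only)
>>> configuration = HeliumConfig(
...     pad_token_id=3,
...     eos_token_id=2,
...     bos_token_id=1,
...     rope_theta=100000.0,
...     attention_bias=False,
...     mlp_bias=False,
... )
```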
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliummodel
.md
HeliumModel The bare Helium Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
368_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliummodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`HeliumConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
368_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliummodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`HeliumDecoderLayer`] Args: config: HeliumConfig - forward
368_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumforcausallm
.md
HeliumForCausalLM - forward
368_11_0
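A minimal generation sketch for `HeliumForCausalLM`; the checkpoint id below is an assumption and may differ from the officially released Helium weights.
```python
>>> from transformers import AutoTokenizer, HeliumForCausalLM

>>> model_id = "kyutai/helium-1-preview-2b"  # assumed checkpoint id, replace with an actual Helium checkpoint
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = HeliumForCausalLM.from_pretrained(model_id)

>>> inputs = tokenizer("Hello, today is", return_tensors="pt")
>>> generated = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(generated[0], skip_special_tokens=True))
```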
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumforsequenceclassification
.md
HeliumForSequenceClassification The Helium Model transformer with a sequence classification head on top (linear layer). [`HeliumForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it requires knowing the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
368_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
368_12_1
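To make the last-token selection described above concrete, here is a small illustrative sketch (not the library's exact implementation) of how the index of the last non-padding token in each row could be found when a `pad_token_id` is defined.
```python
>>> import torch

>>> pad_token_id = 3
>>> input_ids = torch.tensor([[5, 9, 7, 3, 3],
...                           [4, 8, 3, 3, 3]])
>>> # Index of the last non-padding token per row; its hidden state feeds the classification head
>>> non_pad = (input_ids != pad_token_id).int()
>>> last_token_positions = non_pad.cumsum(dim=-1).argmax(dim=-1)
>>> last_token_positions
tensor([2, 1])
```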
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
368_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumforsequenceclassification
.md
and behavior. Parameters: config ([`HeliumConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. - forward
368_12_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumfortokenclassification
.md
HeliumForTokenClassification The Helium Model transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
368_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumfortokenclassification
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`HeliumConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
368_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumfortokenclassification
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. - forward
368_13_2
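As a hedged sketch, a token-classification head can be attached through the config's generic `num_labels` argument; the label count below is arbitrary, and the randomly initialized model would still need fine-tuning on NER-style data.
```python
>>> from transformers import HeliumConfig, HeliumForTokenClassification

>>> configuration = HeliumConfig(num_labels=5)  # arbitrary number of entity labels
>>> model = HeliumForTokenClassification(configuration)
```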
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
369_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
369_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#overview
.md
The Autoformer model was proposed in [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long. This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process. The abstract from the paper is the following:
369_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#overview
.md
*Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of
369_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#overview
.md
long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers
369_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#overview
.md
pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In
369_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#overview
.md
aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.*
369_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#overview
.md
This model was contributed by [elisim](https://huggingface.co/elisim) and [kashif](https://huggingface.co/kashif). The original code can be found [here](https://github.com/thuml/Autoformer).
369_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - Check out the Autoformer blog post on the HuggingFace blog: [Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)](https://huggingface.co/blog/autoformer)
369_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
AutoformerConfig This is the configuration class to store the configuration of an [`AutoformerModel`]. It is used to instantiate an Autoformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Autoformer [huggingface/autoformer-tourism-monthly](https://huggingface.co/huggingface/autoformer-tourism-monthly) architecture.
369_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
[huggingface/autoformer-tourism-monthly](https://huggingface.co/huggingface/autoformer-tourism-monthly) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: prediction_length (`int`): The prediction length for the decoder. In other words, the prediction horizon of the model. context_length (`int`, *optional*, defaults to `prediction_length`):
369_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
context_length (`int`, *optional*, defaults to `prediction_length`): The context length for the encoder. If unset, the context length will be the same as the `prediction_length`. distribution_output (`string`, *optional*, defaults to `"student_t"`): The distribution emission head for the model. Could be either "student_t", "normal" or "negative_binomial". loss (`string`, *optional*, defaults to `"nll"`): The loss function for the model corresponding to the `distribution_output` head. For parametric
369_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
The loss function for the model corresponding to the `distribution_output` head. For parametric distributions it is the negative log likelihood (nll) - which currently is the only supported one. input_size (`int`, *optional*, defaults to 1): The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of multivariate targets. lags_sequence (`list[int]`, *optional*, defaults to `[1, 2, 3, 4, 5, 6, 7]`):
369_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
multivariate targets. lags_sequence (`list[int]`, *optional*, defaults to `[1, 2, 3, 4, 5, 6, 7]`): The lags of the input time series as covariates often dictated by the frequency. Default is `[1, 2, 3, 4, 5, 6, 7]`. scaling (`bool`, *optional*, defaults to `True`): Whether to scale the input targets. num_time_features (`int`, *optional*, defaults to 0): The number of time features in the input time series. num_dynamic_real_features (`int`, *optional*, defaults to 0):
369_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
The number of time features in the input time series. num_dynamic_real_features (`int`, *optional*, defaults to 0): The number of dynamic real valued features. num_static_categorical_features (`int`, *optional*, defaults to 0): The number of static categorical features. num_static_real_features (`int`, *optional*, defaults to 0): The number of static real valued features. cardinality (`list[int]`, *optional*):
369_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
The number of static real valued features. cardinality (`list[int]`, *optional*): The cardinality (number of different values) for each of the static categorical features. Should be a list of integers, having the same length as `num_static_categorical_features`. Cannot be `None` if `num_static_categorical_features` is > 0. embedding_dimension (`list[int]`, *optional*): The dimension of the embedding for each of the static categorical features. Should be a list of integers,
369_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
The dimension of the embedding for each of the static categorical features. Should be a list of integers, having the same length as `num_static_categorical_features`. Cannot be `None` if `num_static_categorical_features` is > 0. d_model (`int`, *optional*, defaults to 64): Dimensionality of the transformer layers. encoder_layers (`int`, *optional*, defaults to 2): Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 2): Number of decoder layers.
369_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 2): Number of decoder layers. encoder_attention_heads (`int`, *optional*, defaults to 2): Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (`int`, *optional*, defaults to 2): Number of attention heads for each attention layer in the Transformer decoder. encoder_ffn_dim (`int`, *optional*, defaults to 32): Dimension of the "intermediate" (often named feed-forward) layer in encoder.
369_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
Dimension of the "intermediate" (often named feed-forward) layer in encoder. decoder_ffn_dim (`int`, *optional*, defaults to 32): Dimension of the "intermediate" (often named feed-forward) layer in decoder. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and decoder. If string, `"gelu"` and `"relu"` are supported. dropout (`float`, *optional*, defaults to 0.1):
369_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
`"relu"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the encoder, and decoder. encoder_layerdrop (`float`, *optional*, defaults to 0.1): The dropout probability for the attention and fully connected layers for each encoder layer. decoder_layerdrop (`float`, *optional*, defaults to 0.1): The dropout probability for the attention and fully connected layers for each decoder layer.
369_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
The dropout probability for the attention and fully connected layers for each decoder layer. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.1): The dropout probability used between the two layers of the feed-forward networks. num_parallel_samples (`int`, *optional*, defaults to 100): The number of samples to generate in parallel for each time step of inference.
369_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
The number of samples to generate in parallel for each time step of inference. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated normal weight initialization distribution. use_cache (`bool`, *optional*, defaults to `True`): Whether to use the past key/values attentions (if applicable to the model) to speed up decoding. label_length (`int`, *optional*, defaults to 10): Start token length of the Autoformer decoder, which is used for direct multi-step prediction (i.e.
369_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
Start token length of the Autoformer decoder, which is used for direct multi-step prediction (i.e. non-autoregressive generation). moving_average (`int`, *optional*, defaults to 25): The window size of the moving average. In practice, it's the kernel size in AvgPool1d of the Decomposition Layer. autocorrelation_factor (`int`, *optional*, defaults to 3): "Attention" (i.e. AutoCorrelation mechanism) factor which is used to find the top k autocorrelation delays.
369_3_13
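As an illustrative sketch (not the library's exact implementation), the decomposition governed by `moving_average` amounts to extracting a moving-average trend with `AvgPool1d` and treating the residual as the seasonal component; the series length below is arbitrary.
```python
>>> import torch
>>> from torch import nn

>>> moving_average = 25
>>> series = torch.randn(1, 1, 96)  # (batch, channels, time), arbitrary length
>>> pool = nn.AvgPool1d(kernel_size=moving_average, stride=1, padding=moving_average // 2)
>>> trend = pool(series)       # smoothed trend component
>>> seasonal = series - trend  # residual seasonal component
```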
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
"Attention" (i.e. AutoCorrelation mechanism) factor which is used to find top k autocorrelations delays. It's recommended in the paper to set it to a number between 1 and 5. Example: ```python >>> from transformers import AutoformerConfig, AutoformerModel
369_3_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerconfig
.md
>>> # Initializing a default Autoformer configuration >>> configuration = AutoformerConfig() >>> # Randomly initializing a model (with random weights) from the configuration >>> model = AutoformerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
369_3_15
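Building on the default example, here is a hedged sketch that overrides a few of the arguments documented above; the values are purely illustrative and not recommended settings.
```python
>>> from transformers import AutoformerConfig

>>> configuration = AutoformerConfig(
...     prediction_length=24,
...     context_length=48,
...     lags_sequence=[1, 2, 3, 7],
...     num_time_features=2,
...     num_static_categorical_features=1,
...     cardinality=[5],
...     embedding_dimension=[3],
...     moving_average=25,
... )
```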
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformermodel
.md
AutoformerModel The bare Autoformer Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
369_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformermodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`AutoformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
369_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformermodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
369_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerforprediction
.md
AutoformerForPrediction The Autoformer Model with a distribution head on top for time-series forecasting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
369_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerforprediction
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`AutoformerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
369_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/autoformer.md
https://huggingface.co/docs/transformers/en/model_doc/autoformer/#autoformerforprediction
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
369_5_2
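A hedged forward-pass sketch for `AutoformerForPrediction`, loosely following the usage shown elsewhere in the Transformers documentation; the test-batch dataset repository and the exact set of keyword arguments are assumptions and may need adjusting to your own data.
```python
>>> import torch
>>> from huggingface_hub import hf_hub_download
>>> from transformers import AutoformerForPrediction

>>> # Assumed small test batch hosted on the Hub; replace with your own tensors if unavailable
>>> file = hf_hub_download(
...     repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
>>> batch = torch.load(file)

>>> model = AutoformerForPrediction.from_pretrained("huggingface/autoformer-tourism-monthly")

>>> outputs = model(
...     past_values=batch["past_values"],
...     past_time_features=batch["past_time_features"],
...     past_observed_mask=batch["past_observed_mask"],
...     static_categorical_features=batch["static_categorical_features"],
...     future_values=batch["future_values"],
...     future_time_features=batch["future_time_features"],
... )
>>> loss = outputs.loss  # available because future_values were provided
```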
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
370_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
370_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#overview
.md
The CLIPSeg model was proposed in [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen [CLIP](clip) model for zero-shot and one-shot image segmentation. The abstract from the paper is the following: *Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive
370_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#overview
.md
model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which
370_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#overview
.md
image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the
370_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#overview
.md
prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query
370_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#overview
.md
only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/clipseg_architecture.png" alt="drawing" width="600"/>
370_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#overview
.md
alt="drawing" width="600"/> <small> CLIPSeg overview. Taken from the <a href="https://arxiv.org/abs/2112.10003">original paper.</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/timojl/clipseg).
370_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#usage-tips
.md
- [`CLIPSegForImageSegmentation`] adds a decoder on top of [`CLIPSegModel`]. The latter is identical to [`CLIPModel`]. - [`CLIPSegForImageSegmentation`] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text (provided to the model as `input_ids`) or an image (provided to the model as `conditional_pixel_values`). One can also provide custom conditional embeddings (provided to the model as `conditional_embeddings`).
370_2_0
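A short zero-shot segmentation sketch reflecting the text-prompt usage above; the `CIDAS/clipseg-rd64-refined` checkpoint, the sample image URL, and the low-resolution mask output are assumptions based on the public CLIPSeg release.
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

>>> processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
>>> model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> prompts = ["a cat", "a remote", "a blanket"]

>>> # Repeat the image once per text prompt
>>> inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> masks = outputs.logits.sigmoid()  # one low-resolution mask per prompt
```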
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="image-segmentation"/>
370_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#resources
.md
<PipelineTag pipeline="image-segmentation"/> - A notebook that illustrates [zero-shot image segmentation with CLIPSeg](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/CLIPSeg/Zero_shot_image_segmentation_with_CLIPSeg.ipynb).
370_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
[`CLIPSegConfig`] is the configuration class to store the configuration of a [`CLIPSegModel`]. It is used to instantiate a CLIPSeg model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIPSeg [CIDAS/clipseg-rd64](https://huggingface.co/CIDAS/clipseg-rd64) architecture.
370_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
[CIDAS/clipseg-rd64](https://huggingface.co/CIDAS/clipseg-rd64) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: text_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`CLIPSegTextConfig`]. vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`CLIPSegVisionConfig`].
370_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`CLIPSegVisionConfig`]. projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and vision projection layers. logit_scale_init_value (`float`, *optional*, defaults to 2.6592): The initial value of the *logit_scale* parameter. Default is used as per the original CLIPSeg implementation. extract_layers (`List[int]`, *optional*, defaults to `[3, 6, 9]`):
370_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
extract_layers (`List[int]`, *optional*, defaults to `[3, 6, 9]`): Layers to extract when forwarding the query image through the frozen visual backbone of CLIP. reduce_dim (`int`, *optional*, defaults to 64): Dimensionality to reduce the CLIP vision embedding. decoder_num_attention_heads (`int`, *optional*, defaults to 4): Number of attention heads in the decoder of CLIPSeg. decoder_attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
370_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
decoder_attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. decoder_hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. decoder_intermediate_size (`int`, *optional*, defaults to 2048):
370_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
decoder_intermediate_size (`int`, *optional*, defaults to 2048): Dimensionality of the "intermediate" (i.e., feed-forward) layers in the Transformer decoder. conditional_layer (`int`, *optional*, defaults to 0): The layer of the Transformer encoder whose activations will be combined with the condition embeddings using FiLM (Feature-wise Linear Modulation). If 0, the last layer is used. use_complex_transposed_convolution (`bool`, *optional*, defaults to `False`):
370_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
use_complex_transposed_convolution (`bool`, *optional*, defaults to `False`): Whether to use a more complex transposed convolution in the decoder, enabling more fine-grained segmentation. kwargs (*optional*): Dictionary of keyword arguments. Example: ```python >>> from transformers import CLIPSegConfig, CLIPSegModel
370_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
>>> # Initializing a CLIPSegConfig with CIDAS/clipseg-rd64 style configuration >>> configuration = CLIPSegConfig() >>> # Initializing a CLIPSegModel (with random weights) from the CIDAS/clipseg-rd64 style configuration >>> model = CLIPSegModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> # We can also initialize a CLIPSegConfig from a CLIPSegTextConfig and a CLIPSegVisionConfig
370_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegconfig
.md
>>> # We can also initialize a CLIPSegConfig from a CLIPSegTextConfig and a CLIPSegVisionConfig >>> # Initializing a CLIPSegText and CLIPSegVision configuration >>> config_text = CLIPSegTextConfig() >>> config_vision = CLIPSegVisionConfig() >>> config = CLIPSegConfig.from_text_vision_configs(config_text, config_vision) ``` Methods: from_text_vision_configs
370_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
This is the configuration class to store the configuration of a [`CLIPSegModel`]. It is used to instantiate a CLIPSeg model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIPSeg [CIDAS/clipseg-rd64](https://huggingface.co/CIDAS/clipseg-rd64) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
370_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 49408): Vocabulary size of the CLIPSeg text model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`CLIPSegModel`]. hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the encoder layers and the pooler layer.
370_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 2048): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder.
370_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (`int`, *optional*, defaults to 77): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
370_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
370_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). pad_token_id (`int`, *optional*, defaults to 1): Padding token id. bos_token_id (`int`, *optional*, defaults to 49406):
370_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
pad_token_id (`int`, *optional*, defaults to 1): Padding token id. bos_token_id (`int`, *optional*, defaults to 49406): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 49407): End of stream token id. Example: ```python >>> from transformers import CLIPSegTextConfig, CLIPSegTextModel
370_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextconfig
.md
>>> # Initializing a CLIPSegTextConfig with CIDAS/clipseg-rd64 style configuration >>> configuration = CLIPSegTextConfig() >>> # Initializing a CLIPSegTextModel (with random weights) from the CIDAS/clipseg-rd64 style configuration >>> model = CLIPSegTextModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
370_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionconfig
.md
This is the configuration class to store the configuration of a [`CLIPSegModel`]. It is used to instantiate a CLIPSeg model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIPSeg [CIDAS/clipseg-rd64](https://huggingface.co/CIDAS/clipseg-rd64) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
370_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12):
370_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionconfig
.md
num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): The number of input channels. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 32): The size (resolution) of each patch.
370_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionconfig
.md
The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 32): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers.
370_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1.0):
370_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionconfig
.md
initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). Example: ```python >>> from transformers import CLIPSegVisionConfig, CLIPSegVisionModel
370_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionconfig
.md
>>> # Initializing a CLIPSegVisionConfig with CIDAS/clipseg-rd64 style configuration >>> configuration = CLIPSegVisionConfig() >>> # Initializing a CLIPSegVisionModel (with random weights) from the CIDAS/clipseg-rd64 style configuration >>> model = CLIPSegVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
370_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegprocessor
.md
Constructs a CLIPSeg processor which wraps a CLIPSeg image processor and a CLIP tokenizer into a single processor. [`CLIPSegProcessor`] offers all the functionalities of [`ViTImageProcessor`] and [`CLIPTokenizerFast`]. See the [`~CLIPSegProcessor.__call__`] and [`~CLIPSegProcessor.decode`] for more information. Args: image_processor ([`ViTImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`CLIPTokenizerFast`], *optional*): The tokenizer is a required input.
370_7_0
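For illustration, the processor can be assembled from its two components; in practice the CLIPSeg checkpoints on the Hub already bundle both, and the tokenizer checkpoint used below is an assumption.
```python
>>> from transformers import CLIPSegProcessor, CLIPTokenizerFast, ViTImageProcessor

>>> image_processor = ViTImageProcessor()
>>> tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")  # assumed CLIP tokenizer checkpoint
>>> processor = CLIPSegProcessor(image_processor=image_processor, tokenizer=tokenizer)

>>> # Typically you would instead load everything in one call:
>>> # processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
```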
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegmodel
.md
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`CLIPSegConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
370_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - get_text_features - get_image_features
370_8_1
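An illustrative sketch of the two feature-extraction helpers listed above, assuming the `CIDAS/clipseg-rd64-refined` checkpoint and a sample COCO image.
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPSegProcessor, CLIPSegModel

>>> processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
>>> model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined")

>>> # Text-side embeddings
>>> text_inputs = processor(text=["a photo of a cat"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**text_inputs)

>>> # Image-side embeddings
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**image_inputs)
```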
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegtextmodel
.md
No docstring available for CLIPSegTextModel Methods: forward
370_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegvisionmodel
.md
No docstring available for CLIPSegVisionModel Methods: forward
370_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegforimagesegmentation
.md
CLIPSeg model with a Transformer-based decoder on top for zero-shot and one-shot image segmentation. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`CLIPSegConfig`]): Model configuration class with all the parameters of the model.
370_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clipseg.md
https://huggingface.co/docs/transformers/en/model_doc/clipseg/#clipsegforimagesegmentation
.md
behavior. Parameters: config ([`CLIPSegConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
370_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
371_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
371_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#overview
.md
The Conditional DETR model was proposed in [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training. Conditional DETR converges 6.7× to 10× faster than DETR. The abstract from the paper is the following:
371_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#overview
.md
*The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content
371_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#overview
.md
embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box.
371_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#overview
.md
head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at
371_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#overview
.md
6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR.*
371_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/conditional_detr_curve.jpg" alt="drawing" width="600"/> <small> Conditional DETR shows much faster convergence compared to the original DETR. Taken from the <a href="https://arxiv.org/abs/2108.06152">original paper</a>.</small> This model was contributed by [DepuMeng](https://huggingface.co/DepuMeng). The original code can be found [here](https://github.com/Atten4Vis/ConditionalDETR).
371_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#resources
.md
- Scripts for finetuning [`ConditionalDetrForObjectDetection`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection). - See also: [Object detection task guide](../tasks/object_detection).
371_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#conditionaldetrconfig
.md
This is the configuration class to store the configuration of a [`ConditionalDetrModel`]. It is used to instantiate a Conditional DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Conditional DETR [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) architecture.
371_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#conditionaldetrconfig
.md
[microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: use_timm_backbone (`bool`, *optional*, defaults to `True`): Whether or not to use the `timm` library for the backbone. If set to `False`, will use the [`AutoBackbone`] API.
371_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#conditionaldetrconfig
.md
Whether or not to use the `timm` library for the backbone. If set to `False`, will use the [`AutoBackbone`] API. backbone_config (`PretrainedConfig` or `dict`, *optional*): The configuration of the backbone model. Only used in case `use_timm_backbone` is set to `False` in which case it will default to `ResNetConfig()`. num_channels (`int`, *optional*, defaults to 3): The number of input channels. num_queries (`int`, *optional*, defaults to 100):
371_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/conditional_detr.md
https://huggingface.co/docs/transformers/en/model_doc/conditional_detr/#conditionaldetrconfig
.md
The number of input channels. num_queries (`int`, *optional*, defaults to 100): Number of object queries, i.e. detection slots. This is the maximal number of objects [`ConditionalDetrModel`] can detect in a single image. For COCO, we recommend 100 queries. d_model (`int`, *optional*, defaults to 256): Dimension of the layers. encoder_layers (`int`, *optional*, defaults to 6): Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 6): Number of decoder layers.
371_3_3
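To ground the `num_queries` description above, here is a hedged inference sketch with the `microsoft/conditional-detr-resnet-50` checkpoint; the post-processing call follows the common Transformers detection API, and the score threshold is arbitrary.
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
>>> model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Convert the per-query predictions to boxes and labels above an arbitrary score threshold
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
```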