source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/align.md
|
https://huggingface.co/docs/transformers/en/model_doc/align/#aligntextmodel
|
.md
|
The text model from ALIGN without any head or projection on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
269_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/align.md
|
https://huggingface.co/docs/transformers/en/model_doc/align/#aligntextmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`AlignConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
269_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/align.md
|
https://huggingface.co/docs/transformers/en/model_doc/align/#aligntextmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
269_9_2
|
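To make the `forward` usage above concrete, here is a minimal sketch of loading the ALIGN text tower with [`~PreTrainedModel.from_pretrained`]. The `kakaobrain/align-base` checkpoint and the use of [`AlignProcessor`] for tokenization are assumptions for illustration, not stated in the chunks above.

```python
>>> from transformers import AlignProcessor, AlignTextModel

>>> # checkpoint choice is an assumption for illustration
>>> processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
>>> model = AlignTextModel.from_pretrained("kakaobrain/align-base")

>>> inputs = processor(text=["a photo of a cat"], return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state  # token-level text features
>>> pooled_output = outputs.pooler_output  # pooled text features
```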
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/align.md
|
https://huggingface.co/docs/transformers/en/model_doc/align/#alignvisionmodel
|
.md
|
The vision model from ALIGN without any head or projection on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
269_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/align.md
|
https://huggingface.co/docs/transformers/en/model_doc/align/#alignvisionmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`AlignConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
269_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/align.md
|
https://huggingface.co/docs/transformers/en/model_doc/align/#alignvisionmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
269_10_2
|
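Similarly, a hedged sketch of the vision tower's `forward` pass; the checkpoint and the COCO test image URL are illustrative assumptions.

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AlignProcessor, AlignVisionModel

>>> processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
>>> model = AlignVisionModel.from_pretrained("kakaobrain/align-base")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> pooled_output = outputs.pooler_output  # global image features
```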
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
270_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
270_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/#overview
|
.md
|
DialoGPT was proposed in [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao,
Jianfeng Gao, Jingjing Liu, Bill Dolan. It's a GPT2 Model trained on 147M conversation-like exchanges extracted from
Reddit.
The abstract from the paper is the following:
|
270_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/#overview
|
.md
|
Reddit.
The abstract from the paper is the following:
*We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained
transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning
from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human
|
270_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/#overview
|
.md
|
from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human
both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems
that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline
systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response
|
270_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/#overview
|
.md
|
systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response
generation and the development of more intelligent open-domain dialogue systems.*
The original code can be found [here](https://github.com/microsoft/DialoGPT).
|
270_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/#usage-tips
|
.md
|
- DialoGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather
than the left.
- DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful
at response generation in open-domain dialogue systems.
- DialoGPT enables the user to create a chat bot in just 10 lines of code as shown on [DialoGPT's model card](https://huggingface.co/microsoft/DialoGPT-medium).
Training:
|
270_2_0
|
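A condensed, hedged sketch of the kind of chatbot loop the model card demonstrates (the prompt string and the greedy decoding settings are illustrative choices, not taken from the chunk above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# encode the user turn, appending the end-of-text token as the turn separator
input_ids = tokenizer.encode("Does money buy happiness?" + tokenizer.eos_token, return_tensors="pt")

# generate a response and decode only the newly produced tokens
output_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```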
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/#usage-tips
|
.md
|
Training:
In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: *We
follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language
modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1,..., x_N (N is the
sequence length), ended by the end-of-text token.* For more information, please refer to the original paper.
<Tip>
|
270_2_1
|
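As a minimal sketch of the session formatting described in the quote above, assuming the common reading in which every turn is terminated by the end-of-text token (the toy dialogue is invented for illustration):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")

# a toy dialogue session; real data would be Reddit-style conversation exchanges
turns = ["Hi, how are you?", "Great, thanks! And you?", "Doing well."]

# concatenate all turns into one long text, each ended by the end-of-text token,
# then train with a standard causal language modeling objective on these ids
session = "".join(turn + tokenizer.eos_token for turn in turns)
input_ids = tokenizer(session, return_tensors="pt").input_ids
```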
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dialogpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/dialogpt/#usage-tips
|
.md
|
sequence length), ended by the end-of-text token.* For more information, please refer to the original paper.
<Tip>
DialoGPT's architecture is based on the GPT2 model; refer to [GPT2's documentation page](gpt2) for API reference and examples.
</Tip>
|
270_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
271_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
271_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#overview
|
.md
|
The RegNet model was proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.
|
271_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#overview
|
.md
|
The abstract from the paper is the following:
|
271_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#overview
|
.md
|
*In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and
|
271_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#overview
|
.md
|
networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design.
|
271_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#overview
|
.md
|
We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.*
|
271_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#overview
|
.md
|
This model was contributed by [Francesco](https://huggingface.co/Francesco). The TensorFlow version of the model
was contributed by [sayakpaul](https://huggingface.co/sayakpaul) and [ariG23498](https://huggingface.co/ariG23498).
The original code can be found [here](https://github.com/facebookresearch/pycls).
The huge 10B model from [Self-supervised Pretraining of Visual Features in the Wild](https://arxiv.org/abs/2103.01988),
|
271_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#overview
|
.md
|
The huge 10B model from [Self-supervised Pretraining of Visual Features in the Wild](https://arxiv.org/abs/2103.01988),
trained on one billion Instagram images, is available on the [hub](https://huggingface.co/facebook/regnet-y-10b-seer).
|
271_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RegNet.
<PipelineTag pipeline="image-classification"/>
- [`RegNetForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
271_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#resources
|
.md
|
- See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
271_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetconfig
|
.md
|
This is the configuration class to store the configuration of a [`RegNetModel`]. It is used to instantiate a RegNet
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the RegNet
[facebook/regnet-y-040](https://huggingface.co/facebook/regnet-y-040) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
271_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
embedding_size (`int`, *optional*, defaults to 64):
Dimensionality (hidden size) for the embedding layer.
hidden_sizes (`List[int]`, *optional*, defaults to `[256, 512, 1024, 2048]`):
Dimensionality (hidden size) at each stage.
|
271_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetconfig
|
.md
|
hidden_sizes (`List[int]`, *optional*, defaults to `[256, 512, 1024, 2048]`):
Dimensionality (hidden size) at each stage.
depths (`List[int]`, *optional*, defaults to `[3, 4, 6, 3]`):
Depth (number of layers) for each stage.
layer_type (`str`, *optional*, defaults to `"y"`):
The layer to use, which can be either `"x"` or `"y"`. An `x` layer is a ResNet's BottleNeck layer with
`reduction` fixed to `1`, while a `y` layer is an `x` layer with squeeze and excitation. Please refer to the
|
271_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetconfig
|
.md
|
`reduction` fixed to `1`, while a `y` layer is an `x` layer with squeeze and excitation. Please refer to the
paper for a detailed explanation of how these layers were constructed.
hidden_act (`str`, *optional*, defaults to `"relu"`):
The non-linear activation function in each block. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"`
are supported.
downsample_in_first_stage (`bool`, *optional*, defaults to `False`):
If `True`, the first stage will downsample the inputs using a `stride` of 2.
Example:
|
271_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetconfig
|
.md
|
If `True`, the first stage will downsample the inputs using a `stride` of 2.
Example:
```python
>>> from transformers import RegNetConfig, RegNetModel
|
271_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetconfig
|
.md
|
>>> # Initializing a RegNet regnet-y-40 style configuration
>>> configuration = RegNetConfig()
>>> # Initializing a model from the regnet-y-40 style configuration
>>> model = RegNetModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
<frameworkcontent>
<pt>
|
271_3_5
|
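Building on the configuration arguments documented above, a hypothetical non-default configuration might look as follows (the specific values are illustrative, not recommended settings):

```python
>>> from transformers import RegNetConfig, RegNetModel

>>> # "x" layers (no squeeze-and-excitation), narrower stages, downsampling in the first stage
>>> custom_config = RegNetConfig(
...     layer_type="x",
...     hidden_sizes=[128, 256, 512, 1024],
...     depths=[2, 4, 6, 2],
...     downsample_in_first_stage=True,
... )
>>> model = RegNetModel(custom_config)  # randomly initialized weights
```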
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetmodel
|
.md
|
The bare RegNet model outputting raw features without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`RegNetConfig`]): Model configuration class with all the parameters of the model.
|
271_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetmodel
|
.md
|
behavior.
Parameters:
config ([`RegNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
271_4_1
|
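A hedged sketch of running the bare model's `forward` pass to obtain raw feature maps; the checkpoint and the COCO test image are assumptions for illustration:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, RegNetModel

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
>>> model = RegNetModel.from_pretrained("facebook/regnet-y-040")

>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> features = outputs.last_hidden_state  # (batch, channels, height, width) feature map
```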
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetforimageclassification
|
.md
|
RegNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`RegNetConfig`]): Model configuration class with all the parameters of the model.
|
271_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#regnetforimageclassification
|
.md
|
behavior.
Parameters:
config ([`RegNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf>
|
271_5_1
|
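And a hedged sketch of image classification with the head on top; again the checkpoint and test image are illustrative assumptions:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, RegNetForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")

>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])  # ImageNet class name
```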
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#tfregnetmodel
|
.md
|
No docstring available for TFRegNetModel
Methods: call
|
271_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#tfregnetforimageclassification
|
.md
|
No docstring available for TFRegNetForImageClassification
Methods: call
</tf>
<jax>
|
271_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#flaxregnetmodel
|
.md
|
No docstring available for FlaxRegNetModel
Methods: __call__
|
271_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/regnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/regnet/#flaxregnetforimageclassification
|
.md
|
No docstring available for FlaxRegNetForImageClassification
Methods: __call__
</jax>
</frameworkcontent>
|
271_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
272_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
272_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#overview
|
.md
|
The Depth Anything model was proposed in [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. Depth Anything is based on the [DPT](dpt) architecture, trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.
<Tip>
|
272_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#overview
|
.md
|
<Tip>
[Depth Anything V2](depth_anything_v2) was released in June 2024. It uses the same architecture as Depth Anything and is therefore compatible with all code examples and existing workflows. However, it leverages synthetic data and a larger-capacity teacher model to achieve much finer and more robust depth predictions.
</Tip>
The abstract from the paper is the following:
|
272_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#overview
|
.md
|
*This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus is able to reduce the generalization error. We investigate two simple
|
272_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#overview
|
.md
|
which significantly enlarges the data coverage and thus is able to reduce the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools. It compels the model to actively seek extra visual knowledge and acquire robust representations. Second, an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We
|
272_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#overview
|
.md
|
an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively, including six public datasets and randomly captured photos. It demonstrates impressive generalization ability. Further, through fine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs are set. Our better depth model also results in a better depth-conditioned ControlNet.*
|
272_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>
<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/LiheYoung/Depth-Anything).
|
272_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#usage-example
|
.md
|
There are 2 main ways to use Depth Anything: either using the pipeline API, which abstracts away all the complexity for you, or by using the `DepthAnythingForDepthEstimation` class yourself.
|
272_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#pipeline-api
|
.md
|
The pipeline allows you to use the model in a few lines of code:
```python
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests
>>> # load pipe
>>> pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")
>>> # load image
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # inference
>>> depth = pipe(image)["depth"]
```
|
272_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#using-the-model-yourself
|
.md
|
If you want to do the pre- and postprocessing yourself, here's how to do that:
```python
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
|
272_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#using-the-model-yourself
|
.md
|
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf")
>>> model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf")
>>> # prepare image for the model
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
|
272_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#using-the-model-yourself
|
.md
|
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> # interpolate to original size and visualize the prediction
>>> post_processed_output = image_processor.post_process_depth_estimation(
... outputs,
... target_sizes=[(image.height, image.width)],
... )
|
272_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#using-the-model-yourself
|
.md
|
>>> predicted_depth = post_processed_output[0]["predicted_depth"]
>>> depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
>>> depth = depth.detach().cpu().numpy() * 255
>>> depth = Image.fromarray(depth.astype("uint8"))
```
|
272_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything.
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
- A notebook showcasing inference with [`DepthAnythingForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb). 🌎
|
272_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#resources
|
.md
|
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
272_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
This is the configuration class to store the configuration of a [`DepthAnythingModel`]. It is used to instantiate a DepthAnything
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DepthAnything
[LiheYoung/depth-anything-small-hf](https://huggingface.co/LiheYoung/depth-anything-small-hf) architecture.
|
272_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
[LiheYoung/depth-anything-small-hf](https://huggingface.co/LiheYoung/depth-anything-small-hf) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
backbone_config (`Union[Dict[str, Any], PretrainedConfig]`, *optional*):
The configuration of the backbone model. Only used in case `is_hybrid` is `True` or in case you want to
leverage the [`AutoBackbone`] API.
|
272_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
leverage the [`AutoBackbone`] API.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
|
272_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether or not to use the `timm` library for the backbone. If set to `False`, will use the [`AutoBackbone`]
API.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
|
272_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
patch_size (`int`, *optional*, defaults to 14):
The size of the patches to extract from the backbone features.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
reassemble_hidden_size (`int`, *optional*, defaults to 384):
The number of input channels of the reassemble layers.
|
272_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
reassemble_hidden_size (`int`, *optional*, defaults to 384):
The number of input channels of the reassemble layers.
reassemble_factors (`List[int]`, *optional*, defaults to `[4, 2, 1, 0.5]`):
The up/downsampling factors of the reassemble layers.
neck_hidden_sizes (`List[int]`, *optional*, defaults to `[48, 96, 192, 384]`):
The hidden sizes to project to for the feature maps of the backbone.
fusion_hidden_size (`int`, *optional*, defaults to 64):
The number of channels before fusion.
|
272_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
fusion_hidden_size (`int`, *optional*, defaults to 64):
The number of channels before fusion.
head_in_index (`int`, *optional*, defaults to -1):
The index of the features to use in the depth estimation head.
head_hidden_size (`int`, *optional*, defaults to 32):
The number of output channels in the second convolution of the depth estimation head.
depth_estimation_type (`str`, *optional*, defaults to `"relative"`):
The type of depth estimation to use. Can be one of `["relative", "metric"]`.
|
272_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
The type of depth estimation to use. Can be one of `["relative", "metric"]`.
max_depth (`float`, *optional*):
The maximum depth to use for the "metric" depth estimation head. 20 should be used for indoor models
and 80 for outdoor models. For "relative" depth estimation, this value is ignored.
Example:
```python
>>> from transformers import DepthAnythingConfig, DepthAnythingForDepthEstimation
|
272_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingconfig
|
.md
|
>>> # Initializing a DepthAnything small style configuration
>>> configuration = DepthAnythingConfig()
>>> # Initializing a model from the DepthAnything small style configuration
>>> model = DepthAnythingForDepthEstimation(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
272_6_8
|
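Following the `depth_estimation_type` and `max_depth` arguments documented above, a hypothetical metric-depth configuration (the outdoor value of 80 follows the docstring's suggestion; everything else stays at the defaults):

```python
>>> from transformers import DepthAnythingConfig, DepthAnythingForDepthEstimation

>>> # metric (absolute) depth head instead of the default relative one
>>> metric_config = DepthAnythingConfig(depth_estimation_type="metric", max_depth=80)
>>> model = DepthAnythingForDepthEstimation(metric_config)  # randomly initialized weights
```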
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingfordepthestimation
|
.md
|
Depth Anything Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`DepthAnythingConfig`]): Model configuration class with all the parameters of the model.
|
272_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything.md
|
https://huggingface.co/docs/transformers/en/model_doc/depth_anything/#depthanythingfordepthestimation
|
.md
|
behavior.
Parameters:
config ([`DepthAnythingConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
272_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
273_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
273_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#overview
|
.md
|
The YOLOS model was proposed in [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
|
273_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#overview
|
.md
|
YOLOS proposes to just leverage the plain [Vision Transformer (ViT)](vit) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN.
The abstract from the paper is the following:
|
273_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#overview
|
.md
|
*Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve
|
273_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#overview
|
.md
|
biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS.*
|
273_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/yolos_architecture.png"
alt="drawing" width="600"/>
<small> YOLOS architecture. Taken from the <a href="https://arxiv.org/abs/2106.00666">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/hustvl/YOLOS).
|
273_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
273_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import AutoModelForObjectDetection

model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-base", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
|
273_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `hustvl/yolos-base` model, we saw the following speedups during inference.
| Batch size | Average inference time (ms), eager mode | Average inference time (ms), sdpa mode | Speed up, SDPA / Eager (x) |
|
273_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#using-scaled-dot-product-attention-sdpa
|
.md
|
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 106 | 76 | 1.39 |
| 2 | 154 | 90 | 1.71 |
|
273_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#using-scaled-dot-product-attention-sdpa
|
.md
|
| 4 | 222 | 116 | 1.91 |
| 8 | 368 | 168 | 2.19 |
|
273_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS.
<PipelineTag pipeline="object-detection"/>
- All example notebooks illustrating inference + fine-tuning [`YolosForObjectDetection`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/YOLOS).
|
273_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#resources
|
.md
|
- Scripts for finetuning [`YolosForObjectDetection`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection).
- See also: [Object detection task guide](../tasks/object_detection)
|
273_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#resources
|
.md
|
- See also: [Object detection task guide](../tasks/object_detection)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<Tip>
Use [`YolosImageProcessor`] for preparing images (and optional targets) for the model. Contrary to [DETR](detr), YOLOS doesn't require a `pixel_mask` to be created.
</Tip>
|
273_3_2
|
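To complement the resources above, a hedged end-to-end inference sketch; the checkpoint, test image, and the 0.9 confidence threshold are illustrative assumptions:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForObjectDetection

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-base")
>>> model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-base")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # convert raw outputs to boxes in the original image size and filter by score
>>> target_sizes = torch.tensor([(image.height, image.width)])
>>> results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```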
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
This is the configuration class to store the configuration of a [`YolosModel`]. It is used to instantiate a YOLOS
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the YOLOS
[hustvl/yolos-base](https://huggingface.co/hustvl/yolos-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
273_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
273_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
273_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
|
273_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
image_size (`List[int]`, *optional*, defaults to `[512, 864]`):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
|
273_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
num_detection_tokens (`int`, *optional*, defaults to 100):
The number of detection tokens.
use_mid_position_embeddings (`bool`, *optional*, defaults to `True`):
|
273_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
The number of detection tokens.
use_mid_position_embeddings (`bool`, *optional*, defaults to `True`):
Whether to use the mid-layer position encodings.
auxiliary_loss (`bool`, *optional*, defaults to `False`):
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
class_cost (`float`, *optional*, defaults to 1):
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (`float`, *optional*, defaults to 5):
|
273_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (`float`, *optional*, defaults to 5):
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
bbox_loss_coefficient (`float`, *optional*, defaults to 5):
Relative weight of the L1 bounding box loss in the object detection loss.
|
273_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (`float`, *optional*, defaults to 0.1):
Relative classification weight of the 'no-object' class in the object detection loss.
Example:
```python
>>> from transformers import YolosConfig, YolosModel
|
273_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosconfig
|
.md
|
>>> # Initializing a YOLOS hustvl/yolos-base style configuration
>>> configuration = YolosConfig()
>>> # Initializing a model (with random weights) from the hustvl/yolos-base style configuration
>>> model = YolosModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
273_4_9
|
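A hypothetical variation on the defaults documented above, for illustration only (the values are not tuned recommendations):

```python
>>> from transformers import YolosConfig, YolosForObjectDetection

>>> # fewer detection tokens and a smaller input resolution than the defaults
>>> custom_config = YolosConfig(num_detection_tokens=50, image_size=[480, 640])
>>> model = YolosForObjectDetection(custom_config)  # randomly initialized weights
```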
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
Constructs a Detr image processor.
Args:
format (`str`, *optional*, defaults to `"coco_detection"`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
|
273_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
|
273_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less or equal to `shortest_edge` and the longest edge
less or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to
`max_width`.
|
273_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
|
273_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
|
273_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
|
273_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
|
273_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/yolos.md
|
https://huggingface.co/docs/transformers/en/model_doc/yolos/#yolosimageprocessor
|
.md
|
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
|
273_5_7
|
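A hedged sketch of the resizing and padding behaviour described above; the `size` and `pad_size` values are illustrative assumptions:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import YolosImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # resize so the shortest edge is 512 (longest edge capped at 864),
>>> # then pad to a fixed 864x864 canvas via `pad_size`
>>> image_processor = YolosImageProcessor(
...     size={"shortest_edge": 512, "longest_edge": 864},
...     pad_size={"height": 864, "width": 864},
... )
>>> pixel_values = image_processor(images=image, return_tensors="pt")["pixel_values"]
>>> list(pixel_values.shape)  # [1, 3, 864, 864]
```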