source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
features by decoupling the intra-scale interaction and cross-scale fusion, and propose IoU-aware query selection to improve the initialization of object queries. In addition, our proposed detector supports flexible adjustment of the inference speed by using different decoder layers without the need for retraining, which facilitates the practical application of real-time object detectors. Our RT-DETR-L achieves 53.0% AP on COCO val2017 and 114 FPS on T4 GPU, while RT-DETR-X achieves 54.8% AP and 74 FPS,
|
256_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
detectors. Our RT-DETR-L achieves 53.0% AP on COCO val2017 and 114 FPS on T4 GPU, while RT-DETR-X achieves 54.8% AP and 74 FPS, outperforming all YOLO detectors of the same scale in both speed and accuracy. Furthermore, our RT-DETR-R50 achieves 53.1% AP and 108 FPS, outperforming DINO-Deformable-DETR-R50 by 2.2% AP in accuracy and by about 21 times in FPS.*
|
256_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/rt_detr_overview.png"
alt="drawing" width="600"/>
<small> RT-DETR performance relative to YOLO models. Taken from the <a href="https://arxiv.org/abs/2304.08069">original paper.</a> </small>
|
256_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
The model version was contributed by [rafaelpadilla](https://huggingface.co/rafaelpadilla) and [sangbumchoi](https://github.com/SangbumChoi). The original code can be found [here](https://github.com/lyuwenyu/RT-DETR/).
|
256_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#usage-tips
|
.md
|
Initially, an image is processed using a pre-trained convolutional neural network, specifically a ResNet-D variant as referenced in the original code. This network extracts features from the final three layers of the architecture. Following this, a hybrid encoder is employed to convert the multi-scale features into a sequential array of image features. Then, a decoder, equipped with auxiliary prediction heads, is used to refine the object queries. This process facilitates the direct generation of bounding
|
256_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#usage-tips
|
.md
|
auxiliary prediction heads, is used to refine the object queries. This process facilitates the direct generation of bounding boxes, eliminating the need for any additional post-processing to acquire the logits and coordinates for the bounding boxes.
|
256_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#usage-tips
|
.md
|
```py
>>> import torch
>>> import requests
|
256_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#usage-tips
|
.md
|
>>> from PIL import Image
>>> from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
>>> model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")
>>> inputs = image_processor(images=image, return_tensors="pt")
|
256_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#usage-tips
|
.md
|
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3)
|
256_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#usage-tips
|
.md
|
>>> for result in results:
... for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
... score, label = score.item(), label_id.item()
... box = [round(i, 2) for i in box.tolist()]
... print(f"{model.config.id2label[label]}: {score:.2f} {box}")
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
|
256_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#usage-tips
|
.md
|
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
|
256_2_6
|
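As a follow-up to the inference example above, here is a minimal sketch of visualizing the detections with Pillow's `ImageDraw`. It assumes `image`, `results`, and `model` from the snippet above are still in scope; the output filename is illustrative.

```python
from PIL import ImageDraw

# Sketch: draw the detections from the example above onto the image
# (assumes `image`, `results` and `model` are still defined).
draw = ImageDraw.Draw(image)
for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        x0, y0, x1, y1 = box.tolist()
        draw.rectangle((x0, y0, x1, y1), outline="red", width=2)
        draw.text((x0, y0), f"{model.config.id2label[label_id.item()]}: {score.item():.2f}", fill="red")
image.save("detections.png")  # illustrative output path
```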
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RT-DETR.
<PipelineTag pipeline="object-detection"/>
- Scripts for finetuning [`RTDetrForObjectDetection`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection).
- See also: [Object detection task guide](../tasks/object_detection).
|
256_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#resources
|
.md
|
- See also: [Object detection task guide](../tasks/object_detection).
- Notebooks regarding inference and fine-tuning RT-DETR on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/RT-DETR). 🌎
|
256_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
This is the configuration class to store the configuration of a [`RTDetrModel`]. It is used to instantiate a
RT-DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the RT-DETR
[checkpoing/todo](https://huggingface.co/checkpoing/todo) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
256_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
initializer_range (`float`, *optional*, defaults to 0.01):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_bias_prior_prob (`float`, *optional*):
The prior probability used by the bias initializer to initialize biases for `enc_score_head` and `class_embed`.
|
256_4_1
|
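To make the `initializer_bias_prior_prob` formula above concrete, here is a small sketch of the focal-loss-style bias initialization it implies. The assumption that the bias is chosen so `sigmoid(bias) == prior_prob` is the standard recipe; the exact initializer in the modeling code may differ in details.

```python
import math

# Sketch: compute the classification-bias init implied by a prior probability.
# Assumption: the bias is set so that sigmoid(bias) == prior_prob, the usual
# focal-loss initialization; RT-DETR's actual code may differ.
num_labels = 80  # e.g. COCO
prior_prob = 1 / (num_labels + 1)
bias_init = -math.log((1 - prior_prob) / prior_prob)
print(f"{bias_init:.4f}")  # -4.3820
```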
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
The prior probability used by the bias initializer to initialize biases for `enc_score_head` and `class_embed`.
If `None`, `prior_prob` is computed as `prior_prob = 1 / (num_labels + 1)` when initializing model weights.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
batch_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the batch normalization layers.
backbone_config (`Dict`, *optional*, defaults to `RTDetrResNetConfig()`):
|
256_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
The epsilon used by the batch normalization layers.
backbone_config (`Dict`, *optional*, defaults to `RTDetrResNetConfig()`):
The configuration of the backbone model.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
|
256_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
|
256_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
freeze_backbone_batch_norms (`bool`, *optional*, defaults to `True`):
Whether to freeze the batch normalization layers in the backbone.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
encoder_hidden_dim (`int`, *optional*, defaults to 256):
|
256_4_5
|
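Putting the backbone-related arguments above together, here is a hedged sketch of configuring a timm backbone. The checkpoint name and `out_indices` are illustrative choices, not defaults, and assume a timm installation that provides them.

```python
from transformers import RTDetrConfig

# Sketch: select a timm backbone instead of the default RTDetrResNetConfig().
# The model name and out_indices below are illustrative assumptions.
config = RTDetrConfig(
    backbone="resnet50d",                        # timm model name
    use_timm_backbone=True,                      # load from timm, not transformers
    use_pretrained_backbone=True,                # fetch its pretrained weights
    freeze_backbone_batch_norms=True,            # keep backbone batch norms frozen
    backbone_kwargs={"out_indices": (2, 3, 4)},  # which feature maps to expose
)
```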
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
encoder_hidden_dim (`int`, *optional*, defaults to 256):
Dimension of the layers in the hybrid encoder.
encoder_in_channels (`list`, *optional*, defaults to `[512, 1024, 2048]`):
Multi-level features input for the encoder.
feat_strides (`List[int]`, *optional*, defaults to `[8, 16, 32]`):
Strides used in each feature map.
encoder_layers (`int`, *optional*, defaults to 1):
Total number of layers to be used by the encoder.
encoder_ffn_dim (`int`, *optional*, defaults to 1024):
|
256_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
Total number of layers to be used by the encoder.
encoder_ffn_dim (`int`, *optional*, defaults to 1024):
Dimension of the "intermediate" (often named feed-forward) layer in the encoder.
encoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
dropout (`float`, *optional*, defaults to 0.0):
The ratio for all dropout layers.
activation_dropout (`float`, *optional*, defaults to 0.0):
|
256_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
The ratio for all dropout layers.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
encode_proj_layers (`List[int]`, *optional*, defaults to `[2]`):
Indexes of the projected layers to be used in the encoder.
positional_encoding_temperature (`int`, *optional*, defaults to 10000):
The temperature parameter used to create the positional encodings.
encoder_activation_function (`str`, *optional*, defaults to `"gelu"`):
|
256_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
encoder_activation_function (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
activation_function (`str`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the general layer. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
eval_size (`Tuple[int, int]`, *optional*):
|
256_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
eval_size (`Tuple[int, int]`, *optional*):
Height and width used to compute the effective height and width of the position embeddings after taking
into account the stride.
normalize_before (`bool`, *optional*, defaults to `False`):
Determines whether to apply layer normalization in the transformer encoder layer before self-attention and
feed-forward modules.
hidden_expansion (`float`, *optional*, defaults to 1.0):
|
256_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
feed-forward modules.
hidden_expansion (`float`, *optional*, defaults to 1.0):
Expansion ratio to enlarge the dimension size of RepVGGBlock and CSPRepLayer.
d_model (`int`, *optional*, defaults to 256):
Dimension of the layers, excluding the hybrid encoder.
num_queries (`int`, *optional*, defaults to 300):
Number of object queries.
decoder_in_channels (`list`, *optional*, defaults to `[256, 256, 256]`):
Multi-level feature dimensions for the decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 1024):
|
256_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
Multi-level feature dimensions for the decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 1024):
Dimension of the "intermediate" (often named feed-forward) layer in the decoder.
num_feature_levels (`int`, *optional*, defaults to 3):
The number of input feature levels.
decoder_n_points (`int`, *optional*, defaults to 4):
The number of sampled keys in each feature level for each attention head in the decoder.
decoder_layers (`int`, *optional*, defaults to 6):
Number of decoder layers.
|
256_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
decoder_layers (`int`, *optional*, defaults to 6):
Number of decoder layers.
decoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_activation_function (`str`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the decoder. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
attention_dropout (`float`, *optional*, defaults to 0.0):
|
256_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
num_denoising (`int`, *optional*, defaults to 100):
The total number of denoising tasks or queries to be used for contrastive denoising.
label_noise_ratio (`float`, *optional*, defaults to 0.5):
The fraction of denoising labels to which random noise should be added.
box_noise_scale (`float`, *optional*, defaults to 1.0):
|
256_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
The fraction of denoising labels to which random noise should be added.
box_noise_scale (`float`, *optional*, defaults to 1.0):
Scale or magnitude of noise to be added to the bounding boxes.
learn_initial_query (`bool`, *optional*, defaults to `False`):
Indicates whether the initial query embeddings for the decoder should be learned during training.
anchor_image_size (`Tuple[int, int]`, *optional*):
|
256_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
anchor_image_size (`Tuple[int, int]`, *optional*):
Height and width of the input image used during evaluation to generate the bounding box anchors. If `None`, anchors are generated automatically.
disable_custom_kernels (`bool`, *optional*, defaults to `True`):
Whether to disable custom kernels.
with_box_refine (`bool`, *optional*, defaults to `True`):
Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes
based on the predictions from the previous layer.
|
256_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
based on the predictions from the previous layer.
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
Whether the architecture has an encoder-decoder structure.
matcher_alpha (`float`, *optional*, defaults to 0.25):
Parameter alpha used by the Hungarian Matcher.
matcher_gamma (`float`, *optional*, defaults to 2.0):
Parameter gamma used by the Hungarian Matcher.
matcher_class_cost (`float`, *optional*, defaults to 2.0):
The relative weight of the class loss used by the Hungarian Matcher.
|
256_4_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
matcher_class_cost (`float`, *optional*, defaults to 2.0):
The relative weight of the class loss used by the Hungarian Matcher.
matcher_bbox_cost (`float`, *optional*, defaults to 5.0):
The relative weight of the bounding box loss used by the Hungarian Matcher.
matcher_giou_cost (`float`, *optional*, defaults to 2.0):
The relative weight of the GIoU loss used by the Hungarian Matcher.
use_focal_loss (`bool`, *optional*, defaults to `True`):
Parameter informing whether the focal loss should be used.
|
256_4_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
use_focal_loss (`bool`, *optional*, defaults to `True`):
Parameter informing whether the focal loss should be used.
auxiliary_loss (`bool`, *optional*, defaults to `True`):
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
focal_loss_alpha (`float`, *optional*, defaults to 0.75):
Parameter alpha used to compute the focal loss.
focal_loss_gamma (`float`, *optional*, defaults to 2.0):
Parameter gamma used to compute the focal loss.
weight_loss_vfl (`float`, *optional*, defaults to 1.0):
|
256_4_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
Parameter gamma used to compute the focal loss.
weight_loss_vfl (`float`, *optional*, defaults to 1.0):
Relative weight of the varifocal loss in the object detection loss.
weight_loss_bbox (`float`, *optional*, defaults to 5.0):
Relative weight of the L1 bounding box loss in the object detection loss.
weight_loss_giou (`float`, *optional*, defaults to 2.0):
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (`float`, *optional*, defaults to 0.0001):
|
256_4_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
eos_coefficient (`float`, *optional*, defaults to 0.0001):
Relative classification weight of the 'no-object' class in the object detection loss.
Examples:
```python
>>> from transformers import RTDetrConfig, RTDetrModel
|
256_4_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrconfig
|
.md
|
>>> # Initializing a RT-DETR configuration
>>> configuration = RTDetrConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = RTDetrModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
256_4_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
This is the configuration class to store the configuration of a [`RTDetrResnetBackbone`]. It is used to instantiate a
ResNet model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the ResNet
[microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
256_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
embedding_size (`int`, *optional*, defaults to 64):
Dimensionality (hidden size) for the embedding layer.
hidden_sizes (`List[int]`, *optional*, defaults to `[256, 512, 1024, 2048]`):
Dimensionality (hidden size) at each stage.
|
256_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
hidden_sizes (`List[int]`, *optional*, defaults to `[256, 512, 1024, 2048]`):
Dimensionality (hidden size) at each stage.
depths (`List[int]`, *optional*, defaults to `[3, 4, 6, 3]`):
Depth (number of layers) for each stage.
layer_type (`str`, *optional*, defaults to `"bottleneck"`):
The layer to use; it can be either `"basic"` (used for smaller models, like resnet-18 or resnet-34) or
`"bottleneck"` (used for larger models like resnet-50 and above).
hidden_act (`str`, *optional*, defaults to `"relu"`):
|
256_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
`"bottleneck"` (used for larger models like resnet-50 and above).
hidden_act (`str`, *optional*, defaults to `"relu"`):
The non-linear activation function in each block. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"`
are supported.
downsample_in_first_stage (`bool`, *optional*, defaults to `False`):
If `True`, the first stage will downsample the inputs using a `stride` of 2.
downsample_in_bottleneck (`bool`, *optional*, defaults to `False`):
|
256_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
downsample_in_bottleneck (`bool`, *optional*, defaults to `False`):
If `True`, the first conv 1x1 in ResNetBottleNeckLayer will downsample the inputs using a `stride` of 2.
out_features (`List[str]`, *optional*):
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
|
256_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the
same order as defined in the `stage_names` attribute.
out_indices (`List[int]`, *optional*):
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
|
256_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
If unset and `out_features` is unset, will default to the last stage. Must be in the
same order as defined in the `stage_names` attribute.
Example:
```python
>>> from transformers import RTDetrResNetConfig, RTDetrResnetBackbone
|
256_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetconfig
|
.md
|
>>> # Initializing a ResNet resnet-50 style configuration
>>> configuration = RTDetrResNetConfig()
>>> # Initializing a model (with random weights) from the resnet-50 style configuration
>>> model = RTDetrResnetBackbone(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
256_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
Constructs an RT-DETR image processor.
Args:
format (`str`, *optional*, defaults to `AnnotationFormat.COCO_DETECTION`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"height": 640, "width": 640}`):
|
256_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
size (`Dict[str, int]`, *optional*, defaults to `{"height": 640, "width": 640}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
|
256_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less or equal to `shortest_edge` and the longest edge
less or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to
`max_width`.
|
256_6_2
|
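A short sketch of passing one of the `size` dictionaries above at preprocessing time; the 480x480 target is arbitrary and only illustrates the exact-size variant.

```python
import requests
from PIL import Image
from transformers import RTDetrImageProcessor

# Sketch: resize to an exact (height, width), ignoring aspect ratio,
# using the {"height": int, "width": int} variant described above.
processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, size={"height": 480, "width": 480}, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 480, 480])
```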
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
aspect ratio and keeping the height less than or equal to `max_height` and the width less than or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
|
256_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `False`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
|
256_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `False`):
Controls whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
|
256_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_annotations (`bool`, *optional*, defaults to `True`):
Controls whether to convert the annotations to the format expected by the DETR model. Converts the
|
256_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
Controls whether to convert the annotations to the format expected by the DETR model. Converts the
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `False`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
|
256_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
|
256_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessor
|
.md
|
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
height and width in the batch.
Methods: preprocess
- post_process_object_detection
|
256_6_9
|
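To illustrate `do_pad` and `pad_size`, here is a minimal sketch padding an unresized image to a fixed canvas; the 800x800 target is an arbitrary choice that satisfies the "larger than any input" requirement for this COCO image.

```python
import requests
from PIL import Image
from transformers import RTDetrImageProcessor

# Sketch: pad (bottom/right, with zeros) to a fixed canvas instead of resizing.
# pad_size must exceed every input image size, per the docs above.
processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)  # 640x480
inputs = processor(
    images=image,
    do_resize=False,
    do_pad=True,
    pad_size={"height": 800, "width": 800},
    return_tensors="pt",
)
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 800, 800])
```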
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
Constructs a fast RTDetr image processor.
Args:
format (`str`, *optional*, defaults to `AnnotationFormat.COCO_DETECTION`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's `(height, width)` dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method.
|
256_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
|
256_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less or equal to `shortest_edge` and the longest edge
less or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
|
256_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less than or equal to `max_height` and the width less than or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
|
256_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `False`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
|
256_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
|
256_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_annotations (`bool`, *optional*, defaults to `True`):
Controls whether to convert the annotations to the format expected by the RT-DETR model. Converts the
|
256_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
Controls whether to convert the annotations to the format expected by the RT-DETR model. Converts the
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `False`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
|
256_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
|
256_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrimageprocessorfast
|
.md
|
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
height and width in the batch.
Methods: preprocess
- post_process_object_detection
|
256_7_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrmodel
|
.md
|
RT-DETR Model (consisting of a backbone and encoder-decoder) outputting raw hidden states without any head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
256_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RTDetrConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
256_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
256_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrforobjectdetection
|
.md
|
RT-DETR Model (consisting of a backbone and encoder-decoder) outputting bounding boxes and logits to be further
decoded into scores and classes.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
256_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrforobjectdetection
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`RTDetrConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
256_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrforobjectdetection
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
256_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetbackbone
|
.md
|
ResNet backbone, to be used with frameworks like RT-DETR.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`RTDetrResNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
256_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#rtdetrresnetbackbone
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
256_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon3.md
|
https://huggingface.co/docs/transformers/en/model_doc/falcon3/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
257_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon3.md
|
https://huggingface.co/docs/transformers/en/model_doc/falcon3/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
257_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon3.md
|
https://huggingface.co/docs/transformers/en/model_doc/falcon3/#overview
|
.md
|
Falcon3 represents a natural evolution from previous releases, emphasizing expanding the models' science, math, and code capabilities. This iteration includes five base models: Falcon3-1B-Base, Falcon3-3B-Base, Falcon3-Mamba-7B-Base, Falcon3-7B-Base, and Falcon3-10B-Base. In developing these models, we incorporated several key innovations aimed at improving the models' performances while reducing training costs:
|
257_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon3.md
|
https://huggingface.co/docs/transformers/en/model_doc/falcon3/#overview
|
.md
|
One pre-training: We conducted a single large-scale pretraining run on the 7B model, using 2048 H100 GPU chips, leveraging 14 trillion tokens featuring web, code, STEM, and curated high-quality and multilingual data.
|
257_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon3.md
|
https://huggingface.co/docs/transformers/en/model_doc/falcon3/#overview
|
.md
|
Depth up-scaling for improved reasoning: Building on recent studies on the effects of model depth, we upscaled the 7B model to a 10B-parameter model by duplicating the redundant layers and continuing pre-training with 2TT (two trillion tokens) of high-quality data. This yielded Falcon3-10B-Base which achieves state-of-the-art zero-shot and few-shot performance for models under 13B parameters.
|
257_1_2
|
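As a toy illustration of the depth up-scaling idea described above, here is a sketch that duplicates blocks in a small PyTorch module stack. The module, the choice of layers to duplicate, and the helper are purely hypothetical; the actual Falcon3 recipe (and the continued pre-training step) is not shown.

```python
import copy
import torch.nn as nn

# Toy sketch of depth up-scaling: extend a decoder stack by duplicating
# existing blocks, then continue pre-training. Purely illustrative; the
# actual Falcon3 procedure may differ.
class TinyDecoder(nn.Module):
    def __init__(self, num_layers: int = 4, dim: int = 32):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))

def upscale(model: TinyDecoder, dup_indices) -> TinyDecoder:
    # Insert copies right after the chosen layers (reverse order keeps indices valid).
    for i in sorted(dup_indices, reverse=True):
        model.layers.insert(i + 1, copy.deepcopy(model.layers[i]))
    return model

model = upscale(TinyDecoder(), dup_indices=[1, 2])
print(len(model.layers))  # 6
```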
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon3.md
|
https://huggingface.co/docs/transformers/en/model_doc/falcon3/#overview
|
.md
|
Knowledge distillation for better tiny models: To provide compact and efficient alternatives, we developed Falcon3-1B-Base and Falcon3-3B-Base by leveraging pruning and knowledge distillation techniques, using less than 100GT (gigatokens) of curated high-quality data, thereby redefining pre-training efficiency.
|
257_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/falcon3.md
|
https://huggingface.co/docs/transformers/en/model_doc/falcon3/#resources
|
.md
|
- [Blog post](https://huggingface.co/blog/falcon3)
- [Models on Huggingface](https://huggingface.co/collections/tiiuae/falcon3-67605ae03578be86e4e87026)
|
257_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
258_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
258_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#overview
|
.md
|
The Idefics2 model was proposed in [What matters when building vision-language models?](https://arxiv.org/abs/2405.02246) by Léo Tronchon, Hugo Laurencon, Victor Sanh. The accompanying blog post can be found [here](https://huggingface.co/blog/idefics2).
Idefics2 is an open multimodal model that accepts arbitrary sequences of image and text inputs and produces text
outputs. The model can answer questions about images, describe visual content, create stories grounded on multiple
|
258_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#overview
|
.md
|
outputs. The model can answer questions about images, describe visual content, create stories grounded on multiple
images, or simply behave as a pure language model without visual inputs. It improves upon IDEFICS-1, notably on
document understanding, OCR, and visual reasoning. Idefics2 is lightweight (8 billion parameters) and treats
images in their native aspect ratio and resolution, which allows for varying inference efficiency.
The abstract from the paper is the following:
|
258_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#overview
|
.md
|
*The growing interest in vision-language models (VLMs) has been driven by improvements in large language models and vision transformers. Despite the abundance of literature on this subject, we observe that critical decisions regarding the design of VLMs are often not justified. We argue that these unsupported decisions impede progress in the field by making it difficult to identify which choices improve model performance. To address this issue, we conduct extensive experiments around pre-trained models,
|
258_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#overview
|
.md
|
which choices improve model performance. To address this issue, we conduct extensive experiments around pre-trained models, architecture choice, data, and training methods. Our consolidation of findings includes the development of Idefics2, an efficient foundational VLM of 8 billion parameters. Idefics2 achieves state-of-the-art performance within its size category across various multimodal benchmarks, and is often on par with models four times its size. We release the model (base, instructed, and chat)
|
258_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#overview
|
.md
|
multimodal benchmarks, and is often on par with models four times its size. We release the model (base, instructed, and chat) along with the datasets created for its training.*
|
258_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/idefics2_architecture.png"
alt="drawing" width="600"/>
<small> Idefics2 architecture. Taken from the <a href="https://arxiv.org/abs/2405.02246">original paper.</a> </small>
This model was contributed by [amyeroberts](https://huggingface.co/amyeroberts).
The original code can be found [here](https://huggingface.co/HuggingFaceM4/idefics2).
|
258_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
- Each sample can contain multiple images, and the number of images can vary between samples. The processor will pad the inputs to the maximum number of images in a batch for input to the model.
|
258_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
- The processor has a `do_image_splitting` option. If `True`, each input image will be split into 4 sub-images, and concatenated with the original to form 5 images. This is useful for increasing model performance. Make sure `processor.image_processor.do_image_splitting` is set to `False` if the model was not trained with this option.
|
258_2_1
|
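Following the tip above, a short sketch of turning splitting off when loading a processor for a checkpoint that was not trained with it:

```python
from transformers import Idefics2Processor

# Sketch: disable image splitting (see the tip above) for checkpoints
# that were not trained with it.
processor = Idefics2Processor.from_pretrained("HuggingFaceM4/idefics2-8b")
processor.image_processor.do_image_splitting = False
```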
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
- `text` passed to the processor should have the `<image>` tokens where the images should be inserted, and `<end_of_utterance>` at the end of each utterance if the text is a chat message.
- The processor has its own `apply_chat_template` method to convert chat messages to text that can then be passed as `text` to the processor.
Example of how to use the processor on chat messages:
```python
import requests
from PIL import Image
|
258_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
Example of how to use the processor on chat messages:
```python
import requests
from PIL import Image
from transformers import Idefics2Processor, Idefics2ForConditionalGeneration
import torch
|
258_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
device = "cuda" if torch.cuda.is_available() else "cpu"
url_1 = "http://images.cocodataset.org/val2017/000000039769.jpg"
url_2 = "http://images.cocodataset.org/val2017/000000219578.jpg"
image_1 = Image.open(requests.get(url_1, stream=True).raw)
image_2 = Image.open(requests.get(url_2, stream=True).raw)
images = [image_1, image_2]
messages = [{
"role": "user",
"content": [
{"type": "text", "text": "What’s the difference between these two images?"},
{"type": "image"},
{"type": "image"},
],
}]
|
258_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
processor = Idefics2Processor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = Idefics2ForConditionalGeneration.from_pretrained("HuggingFaceM4/idefics2-8b")
model.to(device)
# at inference time, one needs to pass `add_generation_prompt=True` in order to make sure the model completes the prompt
text = processor.apply_chat_template(messages, add_generation_prompt=True)
print(text)
# 'User: What’s the difference between these two images?<image><image><end_of_utterance>\nAssistant:'
|
258_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
inputs = processor(images=images, text=text, return_tensors="pt").to(device)
|
258_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
generated_text = model.generate(**inputs, max_new_tokens=500)
generated_text = processor.batch_decode(generated_text, skip_special_tokens=True)[0]
print("Generated text:", generated_text)
```
- During training, it's important to determine which tokens the model should not learn. For Idefics2, this typically comes down to the image and padding tokens. This means that one can create the labels as follows:
```python
import requests
from PIL import Image
|
258_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
```python
import requests
from PIL import Image
from transformers import Idefics2Processor, Idefics2ForConditionalGeneration
import torch
|
258_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
url_1 = "http://images.cocodataset.org/val2017/000000039769.jpg"
url_2 = "http://images.cocodataset.org/val2017/000000219578.jpg"
image_1 = Image.open(requests.get(url_1, stream=True).raw)
image_2 = Image.open(requests.get(url_2, stream=True).raw)
images = [image_1, image_2]
|
258_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
messages = [{
"role": "user",
"content": [
{"type": "text", "text": "What’s the difference between these two images?"},
{"type": "image"},
{"type": "image"},
],
},
{
"role": "assistant",
"content": [
{"type": "text", "text": "The difference is that one image is about dogs and the other one about cats."},
],
}]
device = "cuda" if torch.cuda.is_available() else "cpu"
|
258_2_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Idefics2Processor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = Idefics2ForConditionalGeneration.from_pretrained("HuggingFaceM4/idefics2-8b")
model.to(device)
text = processor.apply_chat_template(messages, add_generation_prompt=False)
inputs = processor(images=images, text=text, return_tensors="pt").to(device)
|
258_2_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#usage-tips
|
.md
|
labels = inputs.input_ids.clone()
labels[labels == processor.tokenizer.pad_token_id] = -100
labels[labels == model.config.image_token_id] = -100
inputs["labels"] = labels
outputs = model(**inputs)
loss = outputs.loss
loss.backward()
```
Do note that when training Idefics2 on multi-turn conversations between a user and an assistant, one typically also sets all the tokens corresponding to the user messages to -100, as sketched below.
|
258_2_12
|
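Building on that note, here is a hedged sketch of one way to mask the user turn: tokenize the user-only prefix (with the generation prompt) to find its length, and set those label positions to -100. The approach and variable names are illustrative, not a documented recipe; it reuses `messages`, `images`, `processor`, and `device` from the example above.

```python
# Sketch (illustrative, not a documented recipe): compute the token length of
# the user turn plus the generation prompt, then ignore those positions in the
# labels so the loss covers only the assistant reply.
user_prefix = processor.apply_chat_template(messages[:1], add_generation_prompt=True)
full_text = processor.apply_chat_template(messages, add_generation_prompt=False)

inputs = processor(images=images, text=full_text, return_tensors="pt").to(device)
prefix_len = processor(images=images, text=user_prefix, return_tensors="pt")["input_ids"].shape[1]

labels = inputs.input_ids.clone()
labels[:, :prefix_len] = -100  # user turn, image tokens and prompt are ignored
labels[labels == processor.tokenizer.pad_token_id] = -100
inputs["labels"] = labels
```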