source | url | file_type | chunk | chunk_id |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/whisper.md | https://huggingface.co/docs/transformers/en/model_doc/whisper/#tfwhisperforconditionalgeneration | .md | No docstring available for TFWhisperForConditionalGeneration
Methods: call
</tf>
<jax> | 147_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/whisper.md | https://huggingface.co/docs/transformers/en/model_doc/whisper/#flaxwhispermodel | .md | No docstring available for FlaxWhisperModel
Methods: __call__ | 147_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/whisper.md | https://huggingface.co/docs/transformers/en/model_doc/whisper/#flaxwhisperforconditionalgeneration | .md | No docstring available for FlaxWhisperForConditionalGeneration
Methods: __call__ | 147_18_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/whisper.md | https://huggingface.co/docs/transformers/en/model_doc/whisper/#flaxwhisperforaudioclassification | .md | No docstring available for FlaxWhisperForAudioClassification
Methods: __call__
</jax>
</frameworkcontent> | 147_19_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 148_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 148_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#overview | .md | The LayoutLMv3 model was proposed in [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
LayoutLMv3 simplifies [LayoutLMv2](layoutlmv2) by using patch embeddings (as in [ViT](vit)) instead of leveraging a CNN backbone, and pre-trains the model on 3 objectives: masked language modeling (MLM), masked image modeling (MIM)
and word-patch alignment (WPA). | 148_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#overview | .md | and word-patch alignment (WPA).
The abstract from the paper is the following: | 148_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#overview | .md | *Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose LayoutLMv3 to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, | 148_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#overview | .md | we propose LayoutLMv3 to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 | 148_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#overview | .md | pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis.* | 148_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#overview | .md | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/layoutlmv3_architecture.png"
alt="drawing" width="600"/>
<small> LayoutLMv3 architecture. Taken from the <a href="https://arxiv.org/abs/2204.08387">original paper</a>. </small> | 148_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#overview | .md | <small> LayoutLMv3 architecture. Taken from the <a href="https://arxiv.org/abs/2204.08387">original paper</a>. </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was added by [chriskoo](https://huggingface.co/chriskoo), [tokec](https://huggingface.co/tokec), and [lre](https://huggingface.co/lre). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/layoutlmv3). | 148_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#usage-tips | .md | - In terms of data processing, LayoutLMv3 is identical to its predecessor [LayoutLMv2](layoutlmv2), except that:
- images need to be resized and normalized with channels in regular RGB format. LayoutLMv2, on the other hand, normalizes the images internally and expects the channels in BGR format.
- text is tokenized using byte-pair encoding (BPE), as opposed to WordPiece. | 148_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#usage-tips | .md | - text is tokenized using byte-pair encoding (BPE), as opposed to WordPiece.
Due to these differences in data preprocessing, one can use [`LayoutLMv3Processor`] which internally combines a [`LayoutLMv3ImageProcessor`] (for the image modality) and a [`LayoutLMv3Tokenizer`]/[`LayoutLMv3TokenizerFast`] (for the text modality) to prepare all data for the model.
- Regarding usage of [`LayoutLMv3Processor`], we refer to the [usage guide](layoutlmv2#usage-layoutlmv2processor) of its predecessor. | 148_2_1 |
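As a minimal, hedged sketch of the processing flow described in these tips — assuming the `microsoft/layoutlmv3-base` checkpoint, a placeholder image path, and that Tesseract plus `pytesseract` are installed for the default OCR step:

```python
from PIL import Image
from transformers import LayoutLMv3Processor

# Default configuration: the image processor runs Tesseract OCR internally
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")

image = Image.open("document.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")

# Token-level inputs plus resized/normalized pixel values, ready for the model
print(sorted(encoding.keys()))  # typically ['attention_mask', 'bbox', 'input_ids', 'pixel_values']
```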
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv3. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<Tip> | 148_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | <Tip>
LayoutLMv3 is nearly identical to LayoutLMv2, so we've also included LayoutLMv2 resources you can adapt for LayoutLMv3 tasks. For these notebooks, take care to use [`LayoutLMv2Processor`] instead when preparing data for the model!
</Tip>
- Demo notebooks for LayoutLMv3 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3).
- Demo scripts can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3). | 148_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | <PipelineTag pipeline="text-classification"/>
- [`LayoutLMv2ForSequenceClassification`] is supported by this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb).
- [Text classification task guide](../tasks/sequence_classification)
<PipelineTag pipeline="token-classification"/> | 148_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | - [Text classification task guide](../tasks/sequence_classification)
<PipelineTag pipeline="token-classification"/>
- [`LayoutLMv3ForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3) and [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb). | 148_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | - A [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Inference_with_LayoutLMv2ForTokenClassification.ipynb) for how to perform inference with [`LayoutLMv2ForTokenClassification`] and a [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb) for how to perform inference when no labels are available with | 148_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | for how to perform inference when no labels are available with [`LayoutLMv2ForTokenClassification`]. | 148_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | - A [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb) for how to finetune [`LayoutLMv2ForTokenClassification`] with the 🤗 Trainer.
- [Token classification task guide](../tasks/token_classification)
<PipelineTag pipeline="question-answering"/> | 148_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | - [Token classification task guide](../tasks/token_classification)
<PipelineTag pipeline="question-answering"/>
- [`LayoutLMv2ForQuestionAnswering`] is supported by this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb).
- [Question answering task guide](../tasks/question_answering)
**Document question answering** | 148_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#resources | .md | - [Question answering task guide](../tasks/question_answering)
**Document question answering**
- [Document question answering task guide](../tasks/document_question_answering) | 148_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | This is the configuration class to store the configuration of a [`LayoutLMv3Model`]. It is used to instantiate a
LayoutLMv3 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the LayoutLMv3
[microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) architecture. | 148_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the LayoutLMv3 model. Defines the number of different tokens that can be represented by
the `input_ids` passed when calling [`LayoutLMv3Model`]. | 148_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | the `input_ids` passed when calling [`LayoutLMv3Model`].
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072): | 148_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | intermediate_size (`int`, *optional*, defaults to 3072):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1): | 148_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | `"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large | 148_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`LayoutLMv3Model`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-5): | 148_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
max_2d_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum value that the 2D position embedding might ever be used with. Typically set this to something
large just in case (e.g., 1024).
coordinate_size (`int`, *optional*, defaults to `128`):
Dimension of the coordinate embeddings.
shape_size (`int`, *optional*, defaults to `128`):
Dimension of the width and height embeddings. | 148_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | shape_size (`int`, *optional*, defaults to `128`):
Dimension of the width and height embeddings.
has_relative_attention_bias (`bool`, *optional*, defaults to `True`):
Whether or not to use a relative attention bias in the self-attention mechanism.
rel_pos_bins (`int`, *optional*, defaults to 32):
The number of relative position bins to be used in the self-attention mechanism.
max_rel_pos (`int`, *optional*, defaults to 128): | 148_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | max_rel_pos (`int`, *optional*, defaults to 128):
The maximum number of relative positions to be used in the self-attention mechanism.
max_rel_2d_pos (`int`, *optional*, defaults to 256):
The maximum number of relative 2D positions in the self-attention mechanism.
rel_2d_pos_bins (`int`, *optional*, defaults to 64):
The number of 2D relative position bins in the self-attention mechanism.
has_spatial_attention_bias (`bool`, *optional*, defaults to `True`): | 148_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | has_spatial_attention_bias (`bool`, *optional*, defaults to `True`):
Whether or not to use a spatial attention bias in the self-attention mechanism.
visual_embed (`bool`, *optional*, defaults to `True`):
Whether or not to add patch embeddings.
input_size (`int`, *optional*, defaults to `224`):
The size (resolution) of the images.
num_channels (`int`, *optional*, defaults to `3`):
The number of channels of the images.
patch_size (`int`, *optional*, defaults to `16`):
The size (resolution) of the patches. | 148_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | The number of channels of the images.
patch_size (`int`, *optional*, defaults to `16`):
The size (resolution) of the patches.
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
Example:
```python
>>> from transformers import LayoutLMv3Config, LayoutLMv3Model | 148_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3config | .md | >>> # Initializing a LayoutLMv3 microsoft/layoutlmv3-base style configuration
>>> configuration = LayoutLMv3Config()
>>> # Initializing a model (with random weights) from the microsoft/layoutlmv3-base style configuration
>>> model = LayoutLMv3Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 148_4_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3featureextractor | .md | No docstring available for LayoutLMv3FeatureExtractor
Methods: __call__ | 148_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3imageprocessor | .md | Constructs a LayoutLMv3 image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to `(size["height"], size["width"])`. Can be
overridden by `do_resize` in `preprocess`.
size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`):
Size of the image after resizing. Can be overridden by `size` in `preprocess`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`): | 148_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3imageprocessor | .md | resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image's pixel values by the specified `rescale_value`. Can be overridden by
`do_rescale` in `preprocess`.
rescale_factor (`float`, *optional*, defaults to 1 / 255): | 148_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3imageprocessor | .md | `do_rescale` in `preprocess`.
rescale_factor (`float`, *optional*, defaults to 1 / 255):
Value by which the image's pixel values are rescaled. Can be overridden by `rescale_factor` in
`preprocess`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`Iterable[float]` or `float`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): | 148_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3imageprocessor | .md | method.
image_mean (`Iterable[float]` or `float`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float, or a list of floats whose length matches the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`Iterable[float]` or `float`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float, or a list of floats whose length matches the | 148_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3imageprocessor | .md | Standard deviation to use if normalizing the image. This is a float, or a list of floats whose length matches the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
apply_ocr (`bool`, *optional*, defaults to `True`):
Whether to apply the Tesseract OCR engine to get words + normalized bounding boxes. Can be overridden by
the `apply_ocr` parameter in the `preprocess` method.
ocr_lang (`str`, *optional*): | 148_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3imageprocessor | .md | the `apply_ocr` parameter in the `preprocess` method.
ocr_lang (`str`, *optional*):
The language, specified by its ISO code, to be used by the Tesseract OCR engine. By default, English is
used. Can be overridden by the `ocr_lang` parameter in the `preprocess` method.
tesseract_config (`str`, *optional*):
Any additional custom configuration flags that are forwarded to the `config` parameter when calling
Tesseract. For example: '--psm 6'. Can be overridden by the `tesseract_config` parameter in the | 148_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3imageprocessor | .md | Tesseract. For example: '--psm 6'. Can be overridden by the `tesseract_config` parameter in the
`preprocess` method.
Methods: preprocess | 148_6_6 |
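A small, hedged sketch of the image processor on its own, using the `apply_ocr=False` path described above (useful when word/box annotations come from elsewhere; the image path is a placeholder):

```python
from PIL import Image
from transformers import LayoutLMv3ImageProcessor

# Skip the built-in Tesseract OCR step when you supply your own words and boxes
image_processor = LayoutLMv3ImageProcessor(apply_ocr=False)

image = Image.open("document.png").convert("RGB")  # placeholder path
features = image_processor(image, return_tensors="pt")

# Resized to the default 224x224 and normalized
print(features["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```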
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | Construct a LayoutLMv3 tokenizer. Based on [`RobertaTokenizer`] (byte-pair encoding, or BPE).
[`LayoutLMv3Tokenizer`] can be used to turn words, word-level bounding boxes and optional word labels to
token-level `input_ids`, `attention_mask`, `token_type_ids`, `bbox`, and optional `labels` (for token
classification).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods. | 148_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | this superclass for more information regarding those methods.
[`LayoutLMv3Tokenizer`] runs end-to-end tokenization: punctuation splitting and BPE. It also turns the
word-level bounding boxes into token-level bounding boxes.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See | 148_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of | 148_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | <Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`): | 148_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`): | 148_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. | 148_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict. | 148_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `True`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
cls_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [CLS] token.
sep_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`): | 148_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | The bounding box to use for the special [CLS] token.
sep_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [SEP] token.
pad_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [PAD] token.
pad_token_label (`int`, *optional*, defaults to -100):
The label to use for padding tokens. Defaults to -100, which is the `ignore_index` of PyTorch's
CrossEntropyLoss. | 148_7_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizer | .md | The label to use for padding tokens. Defaults to -100, which is the `ignore_index` of PyTorch's
CrossEntropyLoss.
only_label_first_subword (`bool`, *optional*, defaults to `True`):
Whether or not to only label the first subword, in case word labels are provided.
Methods: __call__
- save_vocabulary | 148_7_9 |
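To illustrate the word-to-token expansion described above, here is a hedged sketch; the words, boxes (normalized to a 0–1000 scale), and label ids are made-up examples:

```python
from transformers import LayoutLMv3Tokenizer

tokenizer = LayoutLMv3Tokenizer.from_pretrained("microsoft/layoutlmv3-base")

words = ["hello", "world"]  # example words
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # word boxes on a 0-1000 scale
word_labels = [0, 1]  # hypothetical per-word label ids

encoding = tokenizer(words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
# Word-level boxes/labels are expanded to token level; with the default
# only_label_first_subword=True, trailing subwords get the -100 padding label
print(encoding["bbox"].shape, encoding["labels"])
```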
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | Construct a "fast" LayoutLMv3 tokenizer (backed by HuggingFace's *tokenizers* library). Based on BPE.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See | 148_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of | 148_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | <Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`): | 148_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`): | 148_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. | 148_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict. | 148_8_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether the post processing step should trim offsets to avoid including whitespaces. | 148_8_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | Whether the post processing step should trim offsets to avoid including whitespaces.
cls_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [CLS] token.
sep_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [SEP] token.
pad_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`):
The bounding box to use for the special [PAD] token.
pad_token_label (`int`, *optional*, defaults to -100): | 148_8_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3tokenizerfast | .md | The bounding box to use for the special [PAD] token.
pad_token_label (`int`, *optional*, defaults to -100):
The label to use for padding tokens. Defaults to -100, which is the `ignore_index` of PyTorch's
CrossEntropyLoss.
only_label_first_subword (`bool`, *optional*, defaults to `True`):
Whether or not to only label the first subword, in case word labels are provided.
Methods: __call__ | 148_8_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3processor | .md | Constructs a LayoutLMv3 processor which combines a LayoutLMv3 image processor and a LayoutLMv3 tokenizer into a
single processor.
[`LayoutLMv3Processor`] offers all the functionalities you need to prepare data for the model.
It first uses [`LayoutLMv3ImageProcessor`] to resize and normalize document images, and optionally applies OCR to
get words and normalized bounding boxes. These are then provided to [`LayoutLMv3Tokenizer`] or | 148_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3processor | .md | get words and normalized bounding boxes. These are then provided to [`LayoutLMv3Tokenizer`] or
[`LayoutLMv3TokenizerFast`], which turns the words and bounding boxes into token-level `input_ids`,
`attention_mask`, `token_type_ids`, and `bbox`. Optionally, one can provide integer `word_labels`, which are turned
into token-level `labels` for token classification tasks (such as FUNSD, CORD).
Args:
image_processor (`LayoutLMv3ImageProcessor`, *optional*): | 148_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3processor | .md | Args:
image_processor (`LayoutLMv3ImageProcessor`, *optional*):
An instance of [`LayoutLMv3ImageProcessor`]. The image processor is a required input.
tokenizer (`LayoutLMv3Tokenizer` or `LayoutLMv3TokenizerFast`, *optional*):
An instance of [`LayoutLMv3Tokenizer`] or [`LayoutLMv3TokenizerFast`]. The tokenizer is a required input.
Methods: __call__
<frameworkcontent>
<pt> | 148_9_2 |
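To make this composition concrete, a hedged sketch that builds the processor from an OCR-free image processor and a fast tokenizer (placeholder image path; the words and boxes are made-up annotations):

```python
from PIL import Image
from transformers import (
    LayoutLMv3ImageProcessor,
    LayoutLMv3Processor,
    LayoutLMv3TokenizerFast,
)

# Combine an OCR-free image processor with a fast tokenizer
image_processor = LayoutLMv3ImageProcessor(apply_ocr=False)
tokenizer = LayoutLMv3TokenizerFast.from_pretrained("microsoft/layoutlmv3-base")
processor = LayoutLMv3Processor(image_processor=image_processor, tokenizer=tokenizer)

image = Image.open("document.png").convert("RGB")  # placeholder path
words = ["Total:", "$12.00"]
boxes = [[100, 500, 180, 520], [190, 500, 260, 520]]  # 0-1000 normalized

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
```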
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3model | .md | The bare LayoutLMv3 Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`LayoutLMv3Config`]): Model configuration class with all the parameters of the model. | 148_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3model | .md | behavior.
Parameters:
config ([`LayoutLMv3Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 148_10_1 |
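A hedged forward-pass sketch for the bare model, assuming pre-extracted words/boxes (placeholders) so OCR is disabled:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3Model

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3Model.from_pretrained("microsoft/layoutlmv3-base")

image = Image.open("document.png").convert("RGB")  # placeholder path
words = ["hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # 0-1000 normalized

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Hidden states cover the text tokens followed by the visual patch tokens
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```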
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3forsequenceclassification | .md | LayoutLMv3 Model with a sequence classification head on top (a linear layer on top of the final hidden state of the
[CLS] token) e.g. for document image classification tasks such as the
[RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters: | 148_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3forsequenceclassification | .md | behavior.
Parameters:
config ([`LayoutLMv3Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 148_11_1 |
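A hedged document-classification sketch; the classification head here is freshly initialized (untrained), `num_labels=16` assumes an RVL-CDIP-style label set, and the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForSequenceClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")  # OCR enabled by default
model = LayoutLMv3ForSequenceClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=16  # e.g. the 16 RVL-CDIP document classes
)

image = Image.open("document.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
predicted_class = model.config.id2label[logits.argmax(-1).item()]
```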
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3fortokenclassification | .md | LayoutLMv3 Model with a token classification head on top (a linear layer on top of the final hidden states) e.g.
for sequence labeling (information extraction) tasks such as [FUNSD](https://guillaumejaume.github.io/FUNSD/),
[SROIE](https://rrc.cvc.uab.es/?ch=13), [CORD](https://github.com/clovaai/cord) and
[Kleister-NDA](https://github.com/applicaai/kleister-nda).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use | 148_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3fortokenclassification | .md | This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`LayoutLMv3Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 148_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3fortokenclassification | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 148_12_2 |
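A hedged token-classification sketch; the head is untrained, `num_labels=7` assumes a FUNSD-style BIO label set, and the image path, words, and boxes are placeholders:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=7  # e.g. a FUNSD-style BIO label set
)

image = Image.open("form.png").convert("RGB")  # placeholder path
words = ["John", "Doe"]
boxes = [[100, 100, 160, 120], [170, 100, 230, 120]]  # 0-1000 normalized

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits  # (batch_size, sequence_length, num_labels)
predictions = logits.argmax(-1)
```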
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3forquestionanswering | .md | LayoutLMv3 Model with a span classification head on top for extractive question-answering tasks such as
[DocVQA](https://rrc.cvc.uab.es/?ch=17) (a linear layer on top of the text part of the hidden-states output to
compute `span start logits` and `span end logits`).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior. | 148_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#layoutlmv3forquestionanswering | .md | it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`LayoutLMv3Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf> | 148_13_1 |
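A hedged extractive-QA sketch for the span head described above; the question, words, boxes, and image path are made-up placeholders, and the base checkpoint's QA head is untrained:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForQuestionAnswering

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForQuestionAnswering.from_pretrained("microsoft/layoutlmv3-base")

image = Image.open("invoice.png").convert("RGB")  # placeholder path
question = "What is the invoice number?"
words = ["Invoice", "no:", "12345"]
boxes = [[50, 50, 150, 70], [160, 50, 210, 70], [220, 50, 300, 70]]  # 0-1000 normalized

encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Decode the highest-scoring start/end span from the text part of the input
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
answer = processor.tokenizer.decode(encoding["input_ids"][0, start : end + 1])
```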
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#tflayoutlmv3model | .md | No docstring available for TFLayoutLMv3Model
Methods: call | 148_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#tflayoutlmv3forsequenceclassification | .md | No docstring available for TFLayoutLMv3ForSequenceClassification
Methods: call | 148_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#tflayoutlmv3fortokenclassification | .md | No docstring available for TFLayoutLMv3ForTokenClassification
Methods: call | 148_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv3.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv3/#tflayoutlmv3forquestionanswering | .md | No docstring available for TFLayoutLMv3ForQuestionAnswering
Methods: call
</tf>
</frameworkcontent> | 148_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 149_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 149_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#overview | .md | The Deformable DETR model was proposed in [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original [DETR](detr) by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference. | 149_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#overview | .md | The abstract from the paper is the following: | 149_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#overview | .md | *DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better | 149_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#overview | .md | attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.* | 149_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#overview | .md | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png"
alt="drawing" width="600"/>
<small> Deformable DETR architecture. Taken from the <a href="https://arxiv.org/abs/2010.04159">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/fundamentalvision/Deformable-DETR). | 149_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#usage-tips | .md | - Training Deformable DETR is equivalent to training the original [DETR](detr) model. See the [resources](#resources) section below for demo notebooks. | 149_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR.
<PipelineTag pipeline="object-detection"/>
- Demo notebooks regarding inference + fine-tuning on a custom dataset for [`DeformableDetrForObjectDetection`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Deformable-DETR). | 149_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#resources | .md | - Scripts for finetuning [`DeformableDetrForObjectDetection`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection).
- See also: [Object detection task guide](../tasks/object_detection). | 149_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#resources | .md | - See also: [Object detection task guide](../tasks/object_detection).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. | 149_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | Constructs a Deformable DETR image processor.
Args:
format (`str`, *optional*, defaults to `"coco_detection"`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`): | 149_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting | 149_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | - `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less than or equal to `shortest_edge` and the longest edge
less than or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less than or equal to `max_height` and the width less than or equal to
`max_width`. | 149_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | aspect ratio and keeping the height less than or equal to `max_height` and the width less than or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method. | 149_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | `do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`): | 149_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one | 149_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_annotations (`bool`, *optional*, defaults to `True`):
Controls whether to convert the annotations to the format expected by the DETR model. Converts the
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`. | 149_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions. | 149_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessor | .md | If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
height and width in the batch.
Methods: preprocess
- post_process_object_detection | 149_4_8 |
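A hedged end-to-end sketch combining `preprocess` and `post_process_object_detection`, assuming the `SenseTime/deformable-detr` checkpoint and a placeholder image path:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

image = Image.open("street.jpg").convert("RGB")  # placeholder path
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map raw logits/boxes back to labels and absolute (x_min, y_min, x_max, y_max) pixel boxes
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = image_processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```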
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deformable_detr.md | https://huggingface.co/docs/transformers/en/model_doc/deformable_detr/#deformabledetrimageprocessorfast | .md | Constructs a fast DeformableDetr image processor.
Args:
format (`str`, *optional*, defaults to `AnnotationFormat.COCO_DETECTION`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's `(height, width)` dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method. | 149_5_0 |