source | url | file_type | chunk | chunk_id |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertfornextsentenceprediction | .md | MobileBert Model with a `next sentence prediction (classification)` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 115_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertfornextsentenceprediction | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 115_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertfornextsentenceprediction | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 115_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforsequenceclassification | .md | MobileBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 115_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforsequenceclassification | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 115_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforsequenceclassification | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 115_12_2 |
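As a hedged illustration of the forward pass for single-label classification (the base checkpoint below has no fine-tuned head, so the 2-label classifier is randomly initialized and would need fine-tuning before the prediction is meaningful):
```python
import torch
from transformers import AutoTokenizer, MobileBertForSequenceClassification

# Assumed base checkpoint; the classification head is newly initialized.
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForSequenceClassification.from_pretrained("google/mobilebert-uncased", num_labels=2)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2)

predicted_class_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])
```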
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertformultiplechoice | .md | MobileBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 115_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertformultiplechoice | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 115_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertformultiplechoice | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 115_13_2 |
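A sketch of a two-choice (SWAG-style) forward pass, assuming the base `google/mobilebert-uncased` checkpoint; the multiple-choice head is untrained here, so only the API shape is demonstrated:
```python
import torch
from transformers import AutoTokenizer, MobileBertForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForMultipleChoice.from_pretrained("google/mobilebert-uncased")

prompt = "The chef put the cake in the oven."
choice0 = "It baked for forty minutes."
choice1 = "It started to rain heavily."

# Encode each (prompt, choice) pair, then add a batch dimension: (batch, num_choices, seq_len).
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # one score per choice, shape (1, 2)

print(logits.argmax(dim=-1).item())
```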
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertfortokenclassification | .md | MobileBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 115_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertfortokenclassification | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 115_14_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertfortokenclassification | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 115_14_2 |
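A hedged sketch of a token-classification forward pass; the base checkpoint below has no NER head, so the 9-label head is randomly initialized and shown only to illustrate the output shape and label lookup:
```python
import torch
from transformers import AutoTokenizer, MobileBertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForTokenClassification.from_pretrained("google/mobilebert-uncased", num_labels=9)

inputs = tokenizer("HuggingFace is based in New York City.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch, seq_len, num_labels)

predicted_ids = logits.argmax(dim=-1)[0]
print([model.config.id2label[idx.item()] for idx in predicted_ids])
```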
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforquestionanswering | .md | MobileBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 115_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforquestionanswering | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model. | 115_15_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforquestionanswering | .md | and behavior.
Parameters:
config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
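A sketch of extractive question answering with this head; `csarron/mobilebert-uncased-squad-v2` is assumed to be a community checkpoint fine-tuned on SQuAD v2 and is used here only for illustration:
```python
import torch
from transformers import AutoTokenizer, MobileBertForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("csarron/mobilebert-uncased-squad-v2")
model = MobileBertForQuestionAnswering.from_pretrained("csarron/mobilebert-uncased-squad-v2")

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```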
</pt>
<tf> | 115_15_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertmodel | .md | No docstring available for TFMobileBertModel
Methods: call | 115_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertforpretraining | .md | No docstring available for TFMobileBertForPreTraining
Methods: call | 115_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertformaskedlm | .md | No docstring available for TFMobileBertForMaskedLM
Methods: call | 115_18_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertfornextsentenceprediction | .md | No docstring available for TFMobileBertForNextSentencePrediction
Methods: call | 115_19_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertforsequenceclassification | .md | No docstring available for TFMobileBertForSequenceClassification
Methods: call | 115_20_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertformultiplechoice | .md | No docstring available for TFMobileBertForMultipleChoice
Methods: call | 115_21_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertfortokenclassification | .md | No docstring available for TFMobileBertForTokenClassification
Methods: call | 115_22_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md | https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#tfmobilebertforquestionanswering | .md | No docstring available for TFMobileBertForQuestionAnswering
Methods: call
</tf>
</frameworkcontent> | 115_23_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 116_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 116_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | The PVTv2 model was proposed in | 116_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | [PVT v2: Improved Baselines with Pyramid Vision Transformer](https://arxiv.org/abs/2106.13797) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. As an improved variant of PVT, it eschews position embeddings, relying instead on positional information encoded through zero-padding and overlapping patch embeddings. This lack of reliance on position embeddings simplifies the architecture, and enables running inference at any resolution without needing | 116_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | of reliance on position embeddings simplifies the architecture, and enables running inference at any resolution without needing to interpolate them. | 116_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | The PVTv2 encoder structure has been successfully deployed to achieve state-of-the-art scores in [Segformer](https://arxiv.org/abs/2105.15203) for semantic segmentation, [GLPN](https://arxiv.org/abs/2201.07436) for monocular depth, and [Panoptic Segformer](https://arxiv.org/abs/2109.03814) for panoptic segmentation. | 116_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | PVTv2 belongs to a family of models called [hierarchical transformers](https://natecibik.medium.com/the-rise-of-vision-transformers-f623c980419f), which make adaptations to transformer layers in order to generate multi-scale feature maps. Unlike the columnar structure of Vision Transformer ([ViT](https://arxiv.org/abs/2010.11929)), which loses fine-grained detail, multi-scale feature maps are known to preserve this detail and aid performance in dense prediction tasks. In the case of PVTv2, this is achieved by | 116_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | maps are known to preserve this detail and aid performance in dense prediction tasks. In the case of PVTv2, this is achieved by generating image patch tokens using 2D convolution with overlapping kernels in each encoder layer. | 116_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | The multi-scale features of hierarchical transformers allow them to be easily swapped in for traditional workhorse computer vision backbone models like ResNet in larger architectures. Both Segformer and Panoptic Segformer demonstrated that configurations using PVTv2 for a backbone consistently outperformed those with similarly sized ResNet backbones. | 116_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | Another powerful feature of the PVTv2 is the complexity reduction in the self-attention layers called Spatial Reduction Attention (SRA), which uses 2D convolution layers to project hidden states to a smaller resolution before attending to them with the queries, improving the $O(n^2)$ complexity of self-attention to $O(n^2/R)$, with $R$ being the spatial reduction ratio (`sr_ratio`, aka kernel size and stride in the 2D convolution). | 116_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | SRA was introduced in PVT, and is the default attention complexity reduction method used in PVTv2. However, PVTv2 also introduced the option of using a self-attention mechanism with linear complexity related to image size, which they called "Linear SRA". This method uses average pooling to reduce the hidden states to a fixed size that is invariant to their original resolution (although this is inherently more lossy than regular SRA). This option can be enabled by setting `linear_attention` to `True` in the | 116_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#overview | .md | this is inherently more lossy than regular SRA). This option can be enabled by setting `linear_attention` to `True` in the `PvtV2Config`. | 116_1_9
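As a minimal sketch (randomly initialized weights, B0-sized configuration defaults assumed), Linear SRA can be switched on through the configuration like this:
```python
from transformers import PvtV2Config, PvtV2ForImageClassification

# linear_attention=True replaces strided-convolution SRA with fixed-size average pooling (Linear SRA).
config = PvtV2Config(linear_attention=True)
model = PvtV2ForImageClassification(config)
```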
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#abstract-from-the-paper | .md | *Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, | 116_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#abstract-from-the-paper | .md | complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at https://github.com/whai362/PVT.* | 116_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#abstract-from-the-paper | .md | This model was contributed by [FoamoftheSea](https://huggingface.co/FoamoftheSea). The original code can be found [here](https://github.com/whai362/PVT). | 116_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | - [PVTv2](https://arxiv.org/abs/2106.13797) is a hierarchical transformer model which has demonstrated powerful performance in image classification and multiple other tasks, used as a backbone for semantic segmentation in [Segformer](https://arxiv.org/abs/2105.15203), monocular depth estimation in [GLPN](https://arxiv.org/abs/2201.07436), and panoptic segmentation in [Panoptic Segformer](https://arxiv.org/abs/2109.03814), consistently showing higher performance than similar ResNet configurations. | 116_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | - Hierarchical transformers like PVTv2 achieve superior data and parameter efficiency on image data compared with pure transformer architectures by incorporating design elements of convolutional neural networks (CNNs) into their encoders. This creates a best-of-both-worlds architecture that infuses the useful inductive biases of CNNs like translation equivariance and locality into the network while still enjoying the benefits of dynamic data response and global relationship modeling provided by the | 116_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | into the network while still enjoying the benefits of dynamic data response and global relationship modeling provided by the self-attention mechanism of [transformers](https://arxiv.org/abs/1706.03762). | 116_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | - PVTv2 uses overlapping patch embeddings to create multi-scale feature maps, which are infused with location information using zero-padding and depth-wise convolutions. | 116_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | - To reduce the complexity in the attention layers, PVTv2 performs a spatial reduction on the hidden states using either strided 2D convolution (SRA) or fixed-size average pooling (Linear SRA). Although inherently more lossy, Linear SRA provides impressive performance with a linear complexity with respect to image size. To use Linear SRA in the self-attention layers, set `linear_attention=True` in the `PvtV2Config`. | 116_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | - [`PvtV2Model`] is the hierarchical transformer encoder (which is also often referred to as Mix Transformer or MiT in the literature). [`PvtV2ForImageClassification`] adds a simple classifier head on top to perform Image Classification. [`PvtV2Backbone`] can be used with the [`AutoBackbone`] system in larger architectures like Deformable DETR.
- ImageNet pretrained weights for all model sizes can be found on the [hub](https://huggingface.co/models?other=pvt_v2). | 116_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | - ImageNet pretrained weights for all model sizes can be found on the [hub](https://huggingface.co/models?other=pvt_v2).
The best way to get started with the PVTv2 is to load the pretrained checkpoint with the size of your choosing using `AutoModelForImageClassification`:
```python
import requests
import torch | 116_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image | 116_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | model = AutoModelForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0")
image_processor = AutoImageProcessor.from_pretrained("OpenGVLab/pvt_v2_b0")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processed = image_processor(image)
outputs = model(torch.tensor(processed["pixel_values"]))
``` | 116_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | processed = image_processor(image)
outputs = model(torch.tensor(processed["pixel_values"]))
```
To use the PVTv2 as a backbone for more complex architectures like DeformableDETR, you can use AutoBackbone (this model would need fine-tuning as you're replacing the backbone in the pretrained model):
```python
import requests
import torch | 116_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | from transformers import AutoConfig, AutoModelForObjectDetection, AutoImageProcessor
from PIL import Image
model = AutoModelForObjectDetection.from_config(
config=AutoConfig.from_pretrained(
"SenseTime/deformable-detr",
backbone_config=AutoConfig.from_pretrained("OpenGVLab/pvt_v2_b5"),
use_timm_backbone=False
),
) | 116_3_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processed = image_processor(image)
outputs = model(torch.tensor(processed["pixel_values"]))
```
[PVTv2](https://github.com/whai362/PVT/tree/v2) performance on ImageNet-1K by model size (B0-B5):
| Method | Size | Acc@1 | #Params (M) |
|------------------|:----:|:-----:|:-----------:| | 116_3_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#usage-tips | .md | | Method | Size | Acc@1 | #Params (M) |
|------------------|:----:|:-----:|:-----------:|
| PVT-V2-B0 | 224 | 70.5 | 3.7 |
| PVT-V2-B1 | 224 | 78.7 | 14.0 |
| PVT-V2-B2-Linear | 224 | 82.1 | 22.6 |
| PVT-V2-B2 | 224 | 82.0 | 25.4 |
| PVT-V2-B3 | 224 | 83.1 | 45.2 |
| PVT-V2-B4 | 224 | 83.6 | 62.6 |
| PVT-V2-B5 | 224 | 83.8 | 82.0 | | 116_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | This is the configuration class to store the configuration of a [`PvtV2Model`]. It is used to instantiate a Pvt V2
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Pvt V2 B0
[OpenGVLab/pvt_v2_b0](https://huggingface.co/OpenGVLab/pvt_v2_b0) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 116_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`Union[int, Tuple[int, int]]`, *optional*, defaults to 224):
The input image size. Pass int value for square image, or tuple of (height, width).
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
num_encoder_blocks (`int`, *optional*, defaults to 4): | 116_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | The number of input channels.
num_encoder_blocks (`int`, *optional*, defaults to 4):
The number of encoder blocks (i.e. stages in the Mix Transformer encoder).
depths (`List[int]`, *optional*, defaults to `[2, 2, 2, 2]`):
The number of layers in each encoder block.
sr_ratios (`List[int]`, *optional*, defaults to `[8, 4, 2, 1]`):
Spatial reduction ratios in each encoder block.
hidden_sizes (`List[int]`, *optional*, defaults to `[32, 64, 160, 256]`):
Dimension of each of the encoder blocks. | 116_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | hidden_sizes (`List[int]`, *optional*, defaults to `[32, 64, 160, 256]`):
Dimension of each of the encoder blocks.
patch_sizes (`List[int]`, *optional*, defaults to `[7, 3, 3, 3]`):
Patch size for overlapping patch embedding before each encoder block.
strides (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`):
Stride for overlapping patch embedding before each encoder block.
num_attention_heads (`List[int]`, *optional*, defaults to `[1, 2, 5, 8]`): | 116_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | num_attention_heads (`List[int]`, *optional*, defaults to `[1, 2, 5, 8]`):
Number of attention heads for each attention layer in each block of the Transformer encoder.
mlp_ratios (`List[int]`, *optional*, defaults to `[8, 8, 4, 4]`):
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): | 116_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | encoder blocks.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): | 116_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
drop_path_rate (`float`, *optional*, defaults to 0.0):
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
layer_norm_eps (`float`, *optional*, defaults to 1e-06): | 116_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether or not a learnable bias should be added to the queries, keys and values.
linear_attention (`bool`, *optional*, defaults to `False`):
Use linear attention complexity. If set to True, `sr_ratio` is ignored and average pooling is used for
dimensionality reduction in the attention layers rather than strided convolution. | 116_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | dimensionality reduction in the attention layers rather than strided convolution.
out_features (`List[str]`, *optional*):
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
corresponding stages. If unset and `out_indices` is unset, will default to the last stage.
out_indices (`List[int]`, *optional*): | 116_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | out_indices (`List[int]`, *optional*):
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
If unset and `out_features` is unset, will default to the last stage.
Example:
```python
>>> from transformers import PvtV2Model, PvtV2Config | 116_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtv2config | .md | >>> # Initializing a pvt_v2_b0 style configuration
>>> configuration = PvtV2Config()
>>> # Initializing a model from the OpenGVLab/pvt_v2_b0 style configuration
>>> model = PvtV2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 116_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtforimageclassification | .md | Pvt-v2 Model transformer with an image classification head on top (a linear layer on top of the final hidden state
of the [CLS] token) e.g. for ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~PvtV2Config`]): Model configuration class with all the parameters of the model. | 116_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtforimageclassification | .md | behavior.
Parameters:
config ([`~PvtV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 116_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtmodel | .md | The bare Pvt-v2 encoder outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~PvtV2Config`]): Model configuration class with all the parameters of the model. | 116_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pvt_v2.md | https://huggingface.co/docs/transformers/en/model_doc/pvt_v2/#pvtmodel | .md | behavior.
Parameters:
config ([`~PvtV2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 116_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/ | .md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 117_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/ | .md | "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer. --> | 117_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#overview | .md | The TrOCR model was proposed in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained
Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang,
Zhoujun Li, Furu Wei. TrOCR consists of an image Transformer encoder and an autoregressive text Transformer decoder to
perform [optical character recognition (OCR)](https://en.wikipedia.org/wiki/Optical_character_recognition).
The abstract from the paper is the following: | 117_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#overview | .md | The abstract from the paper is the following:
*Text recognition is a long-standing research problem for document digitalization. Existing approaches for text recognition
are usually built based on CNN for image understanding and RNN for char-level text generation. In addition, another language
model is usually needed to improve the overall accuracy as a post-processing step. In this paper, we propose an end-to-end | 117_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#overview | .md | model is usually needed to improve the overall accuracy as a post-processing step. In this paper, we propose an end-to-end
text recognition approach with pre-trained image Transformer and text Transformer models, namely TrOCR, which leverages the
Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but
effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments | 117_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#overview | .md | effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments
show that the TrOCR model outperforms the current state-of-the-art models on both printed and handwritten text recognition
tasks.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/trocr_architecture.jpg"
alt="drawing" width="600"/> | 117_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#overview | .md | alt="drawing" width="600"/>
<small> TrOCR architecture. Taken from the <a href="https://arxiv.org/abs/2109.10282">original paper</a>. </small>
Please refer to the [`VisionEncoderDecoder`] class on how to use this model.
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found
[here](https://github.com/microsoft/unilm/tree/6f60612e7cc86a2a1ae85c47231507a587ab4e01/trocr). | 117_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#usage-tips | .md | - The quickest way to get started with TrOCR is by checking the [tutorial
notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/TrOCR), which show how to use the model
at inference time as well as fine-tuning on custom data.
- TrOCR is pre-trained in 2 stages before being fine-tuned on downstream datasets. It achieves state-of-the-art results
on both printed (e.g. the [SROIE dataset](https://paperswithcode.com/dataset/sroie)) and handwritten (e.g. the [IAM | 117_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#usage-tips | .md | on both printed (e.g. the [SROIE dataset](https://paperswithcode.com/dataset/sroie)) and handwritten (e.g. the [IAM
Handwriting dataset](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database)) text recognition tasks. For more
information, see the [official models](https://huggingface.co/models?other=trocr).
- TrOCR is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework. | 117_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with TrOCR. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-classification"/>
- A blog post on [Accelerating Document AI](https://huggingface.co/blog/document-ai) with TrOCR. | 117_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#resources | .md | - A blog post on [Accelerating Document AI](https://huggingface.co/blog/document-ai) with TrOCR.
- A blog post on how to do [Document AI](https://github.com/philschmid/document-ai-transformers) with TrOCR.
- A notebook on how to [finetune TrOCR on IAM Handwriting Database using Seq2SeqTrainer](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb). | 117_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#resources | .md | - A notebook on [inference with TrOCR](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Inference_with_TrOCR_%2B_Gradio_demo.ipynb) and Gradio demo.
- A notebook on how to [finetune TrOCR on the IAM Handwriting Database](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch.ipynb) using native PyTorch. | 117_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#resources | .md | - A notebook on [evaluating TrOCR on the IAM test set](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Evaluating_TrOCR_base_handwritten_on_the_IAM_test_set.ipynb).
<PipelineTag pipeline="text-generation"/>
- [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) task guide.
⚡️ Inference
- An interactive demo on [TrOCR handwritten character recognition](https://huggingface.co/spaces/nielsr/TrOCR-handwritten). | 117_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#inference | .md | TrOCR's [`VisionEncoderDecoder`] model accepts images as input and makes use of
[`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image.
The [`ViTImageProcessor`/`DeiTImageProcessor`] class is responsible for preprocessing the input image and
[`RobertaTokenizer`/`XLMRobertaTokenizer`] decodes the generated target tokens to the target string. The
[`TrOCRProcessor`] wraps [`ViTImageProcessor`/`DeiTImageProcessor`] and [`RobertaTokenizer`/`XLMRobertaTokenizer`] | 117_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#inference | .md | [`TrOCRProcessor`] wraps [`ViTImageProcessor`/`DeiTImageProcessor`] and [`RobertaTokenizer`/`XLMRobertaTokenizer`]
into a single instance to both extract the input features and decode the predicted token ids.
- Step-by-step Optical Character Recognition (OCR)
``` py
>>> from transformers import TrOCRProcessor, VisionEncoderDecoderModel
>>> import requests
>>> from PIL import Image | 117_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#inference | .md | >>> processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
>>> model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
>>> # load image from the IAM dataset
>>> url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> generated_ids = model.generate(pixel_values) | 117_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#inference | .md | >>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
See the [model hub](https://huggingface.co/models?filter=trocr) to look for TrOCR checkpoints. | 117_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | This is the configuration class to store the configuration of a [`TrOCRForCausalLM`]. It is used to instantiate an
TrOCR model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the TrOCR
[microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten) architecture. | 117_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the TrOCR model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`TrOCRForCausalLM`]. | 117_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | `inputs_ids` passed when calling [`TrOCRForCausalLM`].
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder. | 117_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the pooler. If string, `"gelu"`, `"relu"`,
`"silu"` and `"gelu_new"` are supported.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large | 117_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0): | 117_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details. | 117_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
scale_embedding (`bool`, *optional*, defaults to `False`):
Whether or not to scale the word embeddings by sqrt(d_model).
use_learned_position_embeddings (`bool`, *optional*, defaults to `True`): | 117_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | use_learned_position_embeddings (`bool`, *optional*, defaults to `True`):
Whether or not to use learned position embeddings. If not, sinusoidal position embeddings will be used.
layernorm_embedding (`bool`, *optional*, defaults to `True`):
Whether or not to use a layernorm after the word + position embeddings.
Example:
```python
>>> from transformers import TrOCRConfig, TrOCRForCausalLM | 117_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrconfig | .md | >>> # Initializing a TrOCR-base style configuration
>>> configuration = TrOCRConfig()
>>> # Initializing a model (with random weights) from the TrOCR-base style configuration
>>> model = TrOCRForCausalLM(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 117_5_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrprocessor | .md | Constructs a TrOCR processor which wraps a vision image processor and a TrOCR tokenizer into a single processor.
[`TrOCRProcessor`] offers all the functionalities of [`ViTImageProcessor`/`DeiTImageProcessor`] and
[`RobertaTokenizer`/`XLMRobertaTokenizer`]. See the [`~TrOCRProcessor.__call__`] and [`~TrOCRProcessor.decode`] for
more information.
Args:
image_processor ([`ViTImageProcessor`/`DeiTImageProcessor`], *optional*): | 117_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrprocessor | .md | more information.
Args:
image_processor ([`ViTImageProcessor`/`DeiTImageProcessor`], *optional*):
An instance of [`ViTImageProcessor`/`DeiTImageProcessor`]. The image processor is a required input.
tokenizer ([`RobertaTokenizer`/`XLMRobertaTokenizer`], *optional*):
An instance of [`RobertaTokenizer`/`XLMRobertaTokenizer`]. The tokenizer is a required input.
Methods: __call__
- from_pretrained
- save_pretrained
- batch_decode
- decode | 117_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrforcausallm | .md | The TrOCR Decoder with a language modeling head. Can be used as the decoder part of [`EncoderDecoderModel`] and [`VisionEncoderDecoder`].
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 117_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrforcausallm | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`TrOCRConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 117_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/trocr.md | https://huggingface.co/docs/transformers/en/model_doc/trocr/#trocrforcausallm | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 117_7_2 |
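Since TrOCR's decoder is meant to be paired with an image encoder, a minimal sketch (randomly initialized weights, configuration defaults assumed) of wiring it into a [`VisionEncoderDecoderModel`] looks like this:
```python
from transformers import (
    TrOCRConfig,
    TrOCRForCausalLM,
    ViTConfig,
    ViTModel,
    VisionEncoderDecoderModel,
)

# Randomly initialized encoder and decoder; real use would load pretrained weights instead.
encoder = ViTModel(ViTConfig())
decoder = TrOCRForCausalLM(TrOCRConfig())
model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)
```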
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md | https://huggingface.co/docs/transformers/en/model_doc/bartpho/ | .md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 118_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md | https://huggingface.co/docs/transformers/en/model_doc/bartpho/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 118_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md | https://huggingface.co/docs/transformers/en/model_doc/bartpho/#overview | .md | The BARTpho model was proposed in [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
The abstract from the paper is the following:
*We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual
sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training | 118_1_0 |