source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the | 130_8_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Whether to pad the image. If `True`, will pad the patch dimension of the images in the batch to the largest | 130_8_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | Whether to pad the image. If `True`, will pad the patch dimension of the images in the batch to the largest
number of patches in the batch. Padding will be applied to the bottom and right with zeros.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
Methods: preprocess | 130_8_8 |
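The parameters above can be set when constructing the image processor or overridden per call to `preprocess`. A minimal sketch, assuming the public `llava-hf/llava-v1.6-mistral-7b-hf` checkpoint and a local image file:
```python
# Sketch: load the image processor and override normalization/padding at preprocess time.
from PIL import Image
from transformers import LlavaNextImageProcessor

image_processor = LlavaNextImageProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")  # example checkpoint
image = Image.open("example.jpg").convert("RGB")  # hypothetical local file

inputs = image_processor.preprocess(
    image,
    image_mean=[0.5, 0.5, 0.5],  # overrides the default mean listed above
    image_std=[0.5, 0.5, 0.5],   # overrides the default std listed above
    do_pad=True,                 # pad the patch dimension across the batch
    return_tensors="pt",
)
print(inputs["pixel_values"].shape)  # roughly (batch_size, num_patches, 3, height, width)
```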
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextprocessor | .md | Constructs a LLaVa-NeXT processor which wraps a LLaVa-NeXT image processor and a LLaMa tokenizer into a single processor.
[`LlavaNextProcessor`] offers all the functionalities of [`LlavaNextImageProcessor`] and [`LlamaTokenizerFast`]. See the
[`~LlavaNextProcessor.__call__`] and [`~LlavaNextProcessor.decode`] for more information.
Args:
image_processor ([`LlavaNextImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*): | 130_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextprocessor | .md | The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
The tokenizer is a required input.
patch_size (`int`, *optional*):
Patch size from the vision tower.
vision_feature_select_strategy (`str`, *optional*):
The feature selection strategy used to select the vision feature from the vision backbone.
Should be the same as in the model's config.
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string. | 130_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextprocessor | .md | in a chat into a tokenizable string.
image_token (`str`, *optional*, defaults to `"<image>"`):
Special token used to denote image location.
num_additional_image_tokens (`int`, *optional*, defaults to 0):
Number of additional tokens added to the image embeddings, such as CLS (+1). If the backbone has no CLS or other
extra tokens appended, no need to set this arg. | 130_9_2 |
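A short usage sketch of the processor; the checkpoint name and the `[INST] ... [/INST]` prompt format are assumptions based on the Mistral-based LLaVA-NeXT checkpoints, not requirements of the class itself:
```python
# Sketch: a single LlavaNextProcessor call handles both the image and the text prompt.
from PIL import Image
from transformers import LlavaNextProcessor

processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")  # example checkpoint

image = Image.open("example.jpg").convert("RGB")  # hypothetical local file
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt")
print(list(inputs.keys()))  # typically input_ids, attention_mask, pixel_values, image_sizes
```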
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextforconditionalgeneration | .md | The LLAVA-NeXT model which consists of a vision backbone and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 130_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextforconditionalgeneration | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LlavaNextConfig`] or [`LlavaNextVisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 130_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextforconditionalgeneration | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 130_10_2 |
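A hedged end-to-end generation sketch; the checkpoint, dtype, and device placement are illustrative choices:
```python
# Sketch: generate text conditioned on an image with LlavaNextForConditionalGeneration.
import torch
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # example checkpoint
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16)
model.to("cuda:0")

image = Image.open("example.jpg").convert("RGB")  # hypothetical local file
prompt = "[INST] <image>\nDescribe this image. [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```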
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/ | .md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 131_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 131_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#overview | .md | The XGLM model was proposed in [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668)
by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal,
Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo,
Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
The abstract from the paper is the following: | 131_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#overview | .md | The abstract from the paper is the following:
*Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language
tasks without fine-tuning. While these models are known to be able to jointly represent many different languages,
their training data is dominated by English, potentially limiting their cross-lingual generalization.
In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages, | 131_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#overview | .md | In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages,
and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters
sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size
in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) | 131_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#overview | .md | and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark,
our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the
official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails,
showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement | 131_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#overview | .md | on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models
in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.*
This model was contributed by [Suraj](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/xglm). | 131_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#resources | .md | - [Causal language modeling task guide](../tasks/language_modeling) | 131_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | This is the configuration class to store the configuration of a [`XGLMModel`]. It is used to instantiate an XGLM
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the XGLM
[facebook/xglm-564M](https://huggingface.co/facebook/xglm-564M) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 131_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 256008):
Vocabulary size of the XGLM model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`XGLMModel`] or [`FlaxXGLMModel`].
max_position_embeddings (`int`, *optional*, defaults to 2048): | 131_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
d_model (`int`, *optional*, defaults to 1024):
Dimension of the layers and the pooler layer.
ffn_dim (`int`, *optional*, defaults to 4096):
Dimension of the "intermediate" (often named feed-forward) layer in the decoder.
num_layers (`int`, *optional*, defaults to 24): | 131_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | Dimension of the "intermediate" (often named feed-forward) layer in the decoder.
num_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer decoder.
attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the decoder. If string, `"gelu"`, | 131_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | The non-linear activation function (function or string) in the decoder. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, decoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0): | 131_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | 131_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_embedding (`bool`, *optional*, defaults to `True`):
Scale embeddings by dividing by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
>>> from transformers import XGLMModel, XGLMConfig | 131_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmconfig | .md | >>> # Initializing a XGLM facebook/xglm-564M style configuration
>>> configuration = XGLMConfig()
>>> # Initializing a model from the facebook/xglm-564M style configuration
>>> model = XGLMModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 131_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on
[SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`): | 131_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip> | 131_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | </Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for | 131_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence | 131_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`): | 131_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization. | 131_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for | 131_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizer | .md | - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary | 131_4_7 |
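A small sketch of typical usage, including the `sp_model_kwargs` hook described above (the sampling values are illustrative):
```python
# Sketch: XGLMTokenizer usage, optionally enabling SentencePiece subword regularization.
from transformers import XGLMTokenizer

tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
print(tokenizer("Hello world")["input_ids"])

# Subword regularization is only useful as training-time augmentation, not for inference.
sampling_tokenizer = XGLMTokenizer.from_pretrained(
    "facebook/xglm-564M",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
print(sampling_tokenizer.tokenize("Hello world"))
```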
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizerfast | .md | Construct a "fast" XGLM tokenizer (backed by HuggingFace's *tokenizers* library). Adapted from [`RobertaTokenizer`]
and [`XLNetTokenizer`]. Based on
[BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file. | 131_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizerfast | .md | refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`): | 131_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizerfast | .md | sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for | 131_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizerfast | .md | The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence | 131_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizerfast | .md | The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`): | 131_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmtokenizerfast | .md | token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`):
Additional special tokens used by the tokenizer.
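The fast tokenizer is loaded the same way as the slow one; a quick sketch:
```python
# Sketch: the Rust-backed tokenizer reads the same checkpoint files.
from transformers import XGLMTokenizerFast

fast_tokenizer = XGLMTokenizerFast.from_pretrained("facebook/xglm-564M")
print(fast_tokenizer("Hello world", return_tensors="pt")["input_ids"])
```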
<frameworkcontent>
<pt> | 131_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmmodel | .md | The bare XGLM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 131_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`XGLMConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 131_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmmodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_layers* layers. Each layer is a [`XGLMDecoderLayer`]
Args:
config: XGLMConfig
embed_tokens (nn.Embedding): output embedding
Methods: forward | 131_6_2 |
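A minimal forward-pass sketch, using the checkpoint referenced elsewhere on this page:
```python
# Sketch: run the bare XGLMModel and inspect the raw hidden states.
import torch
from transformers import AutoTokenizer, XGLMModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMModel.from_pretrained("facebook/xglm-564M")

inputs = tokenizer("Today is a nice day and", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, d_model)
```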
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmforcausallm | .md | The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 131_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmforcausallm | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`XGLMConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 131_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#xglmforcausallm | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
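A short generation sketch; the sampling settings are illustrative:
```python
# Sketch: text generation with XGLMForCausalLM.
from transformers import AutoTokenizer, XGLMForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

inputs = tokenizer("Today is a nice day and", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```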
</pt>
<tf> | 131_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#tfxglmmodel | .md | No docstring available for TFXGLMModel
Methods: call | 131_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#tfxglmforcausallm | .md | No docstring available for TFXGLMForCausalLM
Methods: call
</tf>
<jax> | 131_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#flaxxglmmodel | .md | No docstring available for FlaxXGLMModel
Methods: __call__ | 131_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xglm.md | https://huggingface.co/docs/transformers/en/model_doc/xglm/#flaxxglmforcausallm | .md | No docstring available for FlaxXGLMForCausalLM
Methods: __call__
</jax>
</frameworkcontent> | 131_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/ | .md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 132_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 132_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | The LayoutLMV2 model was proposed in [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu,
Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. LayoutLMV2 improves [LayoutLM](layoutlm) to obtain
state-of-the-art results across several document image understanding benchmarks: | 132_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | state-of-the-art results across several document image understanding benchmarks:
- information extraction from scanned documents: the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset (a
collection of 199 annotated forms comprising more than 30,000 words), the [CORD](https://github.com/clovaai/cord) | 132_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | collection of 199 annotated forms comprising more than 30,000 words), the [CORD](https://github.com/clovaai/cord)
dataset (a collection of 800 receipts for training, 100 for validation and 100 for testing), the [SROIE](https://rrc.cvc.uab.es/?ch=13) dataset (a collection of 626 receipts for training and 347 receipts for testing)
and the [Kleister-NDA](https://github.com/applicaai/kleister-nda) dataset (a collection of non-disclosure | 132_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | and the [Kleister-NDA](https://github.com/applicaai/kleister-nda) dataset (a collection of non-disclosure
agreements from the EDGAR database, including 254 documents for training, 83 documents for validation, and 203
documents for testing).
- document image classification: the [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset (a collection of
400,000 images belonging to one of 16 classes). | 132_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | 400,000 images belonging to one of 16 classes).
- document visual question answering: the [DocVQA](https://arxiv.org/abs/2007.00398) dataset (a collection of 50,000
questions defined on 12,000+ document images).
The abstract from the paper is the following:
*Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to
its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this | 132_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this
paper, we present LayoutLMv2 by pre-training text, layout and image in a multi-modal framework, where new model
architectures and pre-training tasks are leveraged. Specifically, LayoutLMv2 not only uses the existing masked
visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training | 132_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training
stage, where cross-modality interaction is better learned. Meanwhile, it also integrates a spatial-aware self-attention
mechanism into the Transformer architecture, so that the model can fully understand the relative positional
relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and | 132_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and
achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks,
including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852),
RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). The pre-trained LayoutLMv2 model is publicly available at
this https URL.* | 132_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#overview | .md | this https URL.*
LayoutLMv2 depends on `detectron2`, `torchvision` and `tesseract`. Run the
following to install them:
```bash
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
python -m pip install torchvision tesseract
```
(If you are developing for LayoutLMv2, note that passing the doctests also requires the installation of these packages.) | 132_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | - The main difference between LayoutLMv1 and LayoutLMv2 is that the latter incorporates visual embeddings during
pre-training (while LayoutLMv1 only adds visual embeddings during fine-tuning).
- LayoutLMv2 adds both a relative 1D attention bias as well as a spatial 2D attention bias to the attention scores in
the self-attention layers. Details can be found on page 5 of the [paper](https://arxiv.org/abs/2012.14740). | 132_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | the self-attention layers. Details can be found on page 5 of the [paper](https://arxiv.org/abs/2012.14740).
- Demo notebooks on how to use the LayoutLMv2 model on RVL-CDIP, FUNSD, DocVQA, CORD can be found [here](https://github.com/NielsRogge/Transformers-Tutorials).
- LayoutLMv2 uses Facebook AI's [Detectron2](https://github.com/facebookresearch/detectron2/) package for its visual
backbone. See [this link](https://detectron2.readthedocs.io/en/latest/tutorials/install.html) for installation
instructions. | 132_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | backbone. See [this link](https://detectron2.readthedocs.io/en/latest/tutorials/install.html) for installation
instructions.
- In addition to `input_ids`, [`~LayoutLMv2Model.forward`] expects 2 additional inputs, namely
`image` and `bbox`. The `image` input corresponds to the original document image in which the text
tokens occur. The model expects each document image to be of size 224x224. This means that if you have a batch of | 132_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | tokens occur. The model expects each document image to be of size 224x224. This means that if you have a batch of
document images, `image` should be a tensor of shape (batch_size, 3, 224, 224). This can be either a
`torch.Tensor` or a `Detectron2.structures.ImageList`. You don't need to normalize the channels, as this is
done by the model. Important to note is that the visual backbone expects BGR channels instead of RGB, as all models | 132_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | done by the model. Important to note is that the visual backbone expects BGR channels instead of RGB, as all models
in Detectron2 are pre-trained using the BGR format. The `bbox` input are the bounding boxes (i.e. 2D-positions)
of the input text tokens. This is identical to [`LayoutLMModel`]. These can be obtained using an
external OCR engine such as Google's [Tesseract](https://github.com/tesseract-ocr/tesseract) (there's a [Python | 132_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | external OCR engine such as Google's [Tesseract](https://github.com/tesseract-ocr/tesseract) (there's a [Python
wrapper](https://pypi.org/project/pytesseract/) available). Each bounding box should be in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1)
represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on | 132_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on
a 0-1000 scale. To normalize, you can use the following function:
```python
def normalize_bbox(bbox, width, height):
return [
int(1000 * (bbox[0] / width)),
int(1000 * (bbox[1] / height)),
int(1000 * (bbox[2] / width)),
int(1000 * (bbox[3] / height)),
]
```
Here, `width` and `height` correspond to the width and height of the original document in which the token | 132_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | ]
```
Here, `width` and `height` correspond to the width and height of the original document in which the token
occurs (before resizing the image). Those can be obtained using the Python Imaging Library (PIL), for example, as
follows:
```python
from PIL import Image | 132_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
) | 132_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | width, height = image.size
```
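Tying the two snippets together, a small sketch (the pixel-space boxes are made-up OCR output for illustration):
```python
# Sketch: rescale pixel-space OCR boxes to the 0-1000 range using normalize_bbox above.
word_boxes_px = [[15, 30, 120, 55], [130, 30, 210, 55]]  # hypothetical OCR output in pixels
normalized_boxes = [normalize_bbox(box, width, height) for box in word_boxes_px]
print(normalized_boxes)
```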
However, this model includes a brand new [`~transformers.LayoutLMv2Processor`] which can be used to directly
prepare data for the model (including applying OCR under the hood). More information can be found in the "Usage"
section below.
- Internally, [`~transformers.LayoutLMv2Model`] will send the `image` input through its visual backbone to
obtain a lower-resolution feature map, whose shape is equal to the `image_feature_pool_shape` attribute of | 132_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | obtain a lower-resolution feature map, whose shape is equal to the `image_feature_pool_shape` attribute of
[`~transformers.LayoutLMv2Config`]. This feature map is then flattened to obtain a sequence of image tokens. As
the size of the feature map is 7x7 by default, one obtains 49 image tokens. These are then concatenated with the text
tokens, and sent through the Transformer encoder. This means that the last hidden states of the model will have a | 132_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | tokens, and sent through the Transformer encoder. This means that the last hidden states of the model will have a
length of 512 + 49 = 561, if you pad the text tokens up to the max length. More generally, the last hidden states
will have a length of `seq_length + config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1]`
(see the short sketch at the end of these tips).
- When calling [`~transformers.LayoutLMv2Model.from_pretrained`], a warning will be printed with a long list of | 132_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | - When calling [`~transformers.LayoutLMv2Model.from_pretrained`], a warning will be printed with a long list of
parameter names that are not initialized. This is not a problem, as these parameters are batch normalization
statistics, which are going to have values when fine-tuning on a custom dataset.
- If you want to train the model in a distributed environment, make sure to call [`synchronize_batch_norm`] on the | 132_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-tips | .md | - If you want to train the model in a distributed environment, make sure to call [`synchronize_batch_norm`] on the
model in order to properly synchronize the batch normalization layers of the visual backbone.
In addition, there's LayoutXLM, which is a multilingual version of LayoutLMv2. More information can be found on
[LayoutXLM's documentation page](layoutxlm). | 132_2_13 |
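To make the sequence-length arithmetic from the tips above concrete, a small sketch (the text length of 512 assumes padding to the maximum length):
```python
# Sketch: number of visual tokens and total sequence length for LayoutLMv2.
from transformers import LayoutLMv2Config

config = LayoutLMv2Config.from_pretrained("microsoft/layoutlmv2-base-uncased")
num_visual_tokens = config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1]  # 7 * 7 = 49

text_seq_length = 512  # e.g. text tokens padded to the maximum length
print(num_visual_tokens, text_seq_length + num_visual_tokens)  # 49 561
```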
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-classification"/> | 132_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#resources | .md | <PipelineTag pipeline="text-classification"/>
- A notebook on how to [finetune LayoutLMv2 for text-classification on RVL-CDIP dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb).
- See also: [Text classification task guide](../tasks/sequence_classification)
<PipelineTag pipeline="question-answering"/> | 132_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#resources | .md | - See also: [Text classification task guide](../tasks/sequence_classification)
<PipelineTag pipeline="question-answering"/>
- A notebook on how to [finetune LayoutLMv2 for question-answering on DocVQA dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb).
- See also: [Question answering task guide](../tasks/question_answering) | 132_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#resources | .md | - See also: [Question answering task guide](../tasks/question_answering)
- See also: [Document question answering task guide](../tasks/document_question_answering)
<PipelineTag pipeline="token-classification"/>
- A notebook on how to [finetune LayoutLMv2 for token-classification on CORD dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/CORD/Fine_tuning_LayoutLMv2ForTokenClassification_on_CORD.ipynb). | 132_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#resources | .md | - A notebook on how to [finetune LayoutLMv2 for token-classification on FUNSD dataset](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb).
- See also: [Token classification task guide](../tasks/token_classification) | 132_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | The easiest way to prepare data for the model is to use [`LayoutLMv2Processor`], which internally
combines an image processor ([`LayoutLMv2ImageProcessor`]) and a tokenizer
([`LayoutLMv2Tokenizer`] or [`LayoutLMv2TokenizerFast`]). The image processor
handles the image modality, while the tokenizer handles the text modality. A processor combines both, which is ideal
for a multi-modal model like LayoutLMv2. Note that you can still use both separately, if you only want to handle one
modality.
```python | 132_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | modality.
```python
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor | 132_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | image_processor = LayoutLMv2ImageProcessor() # apply_ocr is set to True by default
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer)
```
In short, one can provide a document image (and possibly additional data) to [`LayoutLMv2Processor`],
and it will create the inputs expected by the model. Internally, the processor first uses | 132_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | and it will create the inputs expected by the model. Internally, the processor first uses
[`LayoutLMv2ImageProcessor`] to apply OCR on the image to get a list of words and normalized
bounding boxes, as well as to resize the image to a given size in order to get the `image` input. The words and
normalized bounding boxes are then provided to [`LayoutLMv2Tokenizer`] or
[`LayoutLMv2TokenizerFast`], which converts them to token-level `input_ids`, | 132_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | [`LayoutLMv2TokenizerFast`], which converts them to token-level `input_ids`,
`attention_mask`, `token_type_ids`, `bbox`. Optionally, one can provide word labels to the processor,
which are turned into token-level `labels`.
[`LayoutLMv2Processor`] uses [PyTesseract](https://pypi.org/project/pytesseract/), a Python
wrapper around Google's Tesseract OCR engine, under the hood. Note that you can still use your own OCR engine of | 132_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | wrapper around Google's Tesseract OCR engine, under the hood. Note that you can still use your own OCR engine of
choice, and provide the words and normalized boxes yourself. This requires initializing
[`LayoutLMv2ImageProcessor`] with `apply_ocr` set to `False`.
In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs). | 132_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
**Use case 1: document image classification (training, inference) + token classification (inference), apply_ocr =
True**
This is the simplest case, in which the processor (actually the image processor) will perform OCR on the image to get
the words and normalized bounding boxes.
```python
from transformers import LayoutLMv2Processor
from PIL import Image | 132_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased") | 132_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
encoding = processor(
image, return_tensors="pt"
) # you can also add all tokenizer parameters here such as padding, truncation
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
```
**Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False** | 132_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | ```
**Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False**
In case one wants to do OCR themselves, one can initialize the image processor with `apply_ocr` set to
`False`. In that case, one should provide the words and corresponding (normalized) bounding boxes themselves to
the processor.
```python
from transformers import LayoutLMv2Processor
from PIL import Image | 132_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") | 132_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
```
**Use case 3: token classification (training), apply_ocr=False** | 132_4_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | ```
**Use case 3: token classification (training), apply_ocr=False**
For token classification tasks (such as FUNSD, CORD, SROIE, Kleister-NDA), one can also provide the corresponding word
labels in order to train a model. The processor will then convert these into token-level `labels`. By default, it
will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the | 132_4_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
`ignore_index` of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with `only_label_first_subword` set to `False`.
```python
from transformers import LayoutLMv2Processor
from PIL import Image | 132_4_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") | 132_4_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
word_labels = [1, 2]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image'])
``` | 132_4_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image'])
```
**Use case 4: visual question answering (inference), apply_ocr=True**
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. By default, the
processor will apply OCR on the image, and create [CLS] question tokens [SEP] word tokens [SEP].
```python
from transformers import LayoutLMv2Processor
from PIL import Image | 132_4_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased") | 132_4_17 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
question = "What's his name?"
encoding = processor(image, question, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
```
**Use case 5: visual question answering (inference), apply_ocr=False** | 132_4_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | ```
**Use case 5: visual question answering (inference), apply_ocr=False**
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. If you want to
perform OCR yourself, you can provide your own words and (normalized) bounding boxes to the processor.
```python
from transformers import LayoutLMv2Processor
from PIL import Image | 132_4_19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/layoutlmv2.md | https://huggingface.co/docs/transformers/en/model_doc/layoutlmv2/#usage-layoutlmv2processor | .md | processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") | 132_4_20 |