source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`RealmEmbedder`], [`RealmScorer`],
[`RealmKnowledgeAugEncoder`], or [`RealmReader`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
235_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
span_hidden_size (`int`, *optional*, defaults to 256):
Dimension of the reader's spans.
max_span_width (`int`, *optional*, defaults to 10):
Max span width of the reader.
reader_layer_norm_eps (`float`, *optional*, defaults to 1e-3):
The epsilon used by the reader's layer normalization layers.
|
235_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmconfig
|
.md
|
reader_layer_norm_eps (`float`, *optional*, defaults to 1e-3):
The epsilon used by the reader's layer normalization layers.
reader_beam_size (`int`, *optional*, defaults to 5):
Beam size of the reader.
reader_seq_len (`int`, *optional*, defaults to 288+32):
Maximum sequence length of the reader.
num_block_records (`int`, *optional*, defaults to 13353718):
Number of block records.
searcher_beam_size (`int`, *optional*, defaults to 5000):
|
235_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmconfig
|
.md
|
Number of block records.
searcher_beam_size (`int`, *optional*, defaults to 5000):
Beam size of the searcher. Note that when eval mode is enabled, *searcher_beam_size* will be the same as
*reader_beam_size*.
Example:
```python
>>> from transformers import RealmConfig, RealmEmbedder
|
235_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmconfig
|
.md
|
>>> # Initializing a REALM realm-cc-news-pretrained-* style configuration
>>> configuration = RealmConfig()
>>> # Initializing a model (with random weights) from the google/realm-cc-news-pretrained-embedder style configuration
>>> model = RealmEmbedder(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
235_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizer
|
.md
|
Construct a REALM tokenizer.
[`RealmTokenizer`] is identical to [`BertTokenizer`] and runs end-to-end tokenization: punctuation splitting and
wordpiece.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
|
235_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizer
|
.md
|
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
Whether or not to do basic tokenization before WordPiece.
never_split (`Iterable`, *optional*):
Collection of tokens which will never be split during tokenization. Only has an effect when
`do_basic_tokenize=True`
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
|
235_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizer
|
.md
|
`do_basic_tokenize=True`
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
|
235_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizer
|
.md
|
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
235_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizer
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
|
235_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizer
|
.md
|
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
[issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original BERT).
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
|
235_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizer
|
.md
|
value for `lowercase` (as in the original BERT).
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
- batch_encode_candidates
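To illustrate, here is a minimal sketch of `batch_encode_candidates`, which encodes a nested batch of candidate texts (one list of candidates per query) into tensors shaped `(batch_size, num_candidates, max_length)`. The checkpoint name reuses the `google/realm-cc-news-pretrained-embedder` checkpoint mentioned in the config example above, and the candidate texts are made up:
```python
from transformers import RealmTokenizer

# Checkpoint reused from the config example; any REALM tokenizer checkpoint works the same way.
tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")

# batch_size = 2 queries, num_candidates = 2 candidate texts per query (illustrative strings)
candidates = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]

# Returns input_ids / attention_mask / token_type_ids shaped (batch_size, num_candidates, max_length)
encoded = tokenizer.batch_encode_candidates(candidates, max_length=10, return_tensors="pt")
```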
|
235_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizerfast
|
.md
|
Construct a "fast" REALM tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece.
[`RealmTokenizerFast`] is identical to [`BertTokenizerFast`] and runs end-to-end tokenization: punctuation
splitting and wordpiece.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
|
235_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizerfast
|
.md
|
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
235_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
|
235_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizerfast
|
.md
|
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
235_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizerfast
|
.md
|
modeling. This is the token which the model will try to predict.
clean_text (`bool`, *optional*, defaults to `True`):
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this
issue](https://github.com/huggingface/transformers/issues/328)).
|
235_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmtokenizerfast
|
.md
|
issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original BERT).
wordpieces_prefix (`str`, *optional*, defaults to `"##"`):
The prefix for subwords.
Methods: batch_encode_candidates
|
235_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmretriever
|
.md
|
The retriever of REALM, outputting the retrieved evidence block, whether the block has answers, and the answer
positions.
Parameters:
block_records (`np.ndarray`):
A numpy array which contains evidence texts.
tokenizer ([`RealmTokenizer`]):
The tokenizer to encode retrieved texts.
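As a rough construction sketch, the two documented parameters could be wired up as follows; the block texts below are placeholders (a real checkpoint ships roughly 13M pre-tokenized block records, matching `num_block_records` above), and the tokenizer checkpoint is the one from the config example:
```python
import numpy as np
from transformers import RealmRetriever, RealmTokenizer

# Placeholder evidence blocks; real checkpoints provide ~13,353,718 block records.
block_records = np.array(
    [b"Alan Turing was an English mathematician.", b"Paris is the capital of France."],
    dtype=object,
)
tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")

# Wrap the evidence blocks and tokenizer into a retriever instance.
retriever = RealmRetriever(block_records=block_records, tokenizer=tokenizer)
```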
|
235_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmembedder
|
.md
|
The embedder of REALM, outputting the projected score that will be used to calculate the relevance score.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
|
235_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmembedder
|
.md
|
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
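A minimal forward-pass sketch, assuming the `google/realm-cc-news-pretrained-embedder` checkpoint from the config example above and that the embedder output exposes a `projected_score` field as described:
```python
import torch
from transformers import RealmEmbedder, RealmTokenizer

tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Projected scores, used downstream to compute relevance scores.
projected_score = outputs.projected_score
```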
|
235_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmscorer
|
.md
|
The scorer of REALM outputting relevance scores representing the score of document candidates (before softmax).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
|
235_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmscorer
|
.md
|
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Args:
query_embedder ([`RealmEmbedder`]):
Embedder for input sequences. If not specified, it will use the same embedder as candidate sequences.
Methods: forward
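A hedged usage sketch combining the scorer with `RealmTokenizer.batch_encode_candidates`; the `google/realm-cc-news-pretrained-scorer` checkpoint name, the `num_candidates` argument, and the example texts are illustrative assumptions:
```python
import torch
from transformers import RealmScorer, RealmTokenizer

# Assumed scorer checkpoint name, for illustration only.
tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-scorer")
model = RealmScorer.from_pretrained("google/realm-cc-news-pretrained-scorer", num_candidates=2)

# batch_size = 2 queries, num_candidates = 2 candidate documents per query
input_texts = ["How are you?", "What is the item in the picture?"]
candidate_texts = [["Hello world!", "Nice to meet you!"], ["A cute cat.", "An adorable dog."]]

inputs = tokenizer(input_texts, return_tensors="pt")
candidates = tokenizer.batch_encode_candidates(candidate_texts, max_length=10, return_tensors="pt")

with torch.no_grad():
    outputs = model(
        **inputs,
        candidate_input_ids=candidates.input_ids,
        candidate_attention_mask=candidates.attention_mask,
        candidate_token_type_ids=candidates.token_type_ids,
    )

# Relevance scores of the document candidates (before softmax).
relevance_score = outputs.relevance_score
```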
|
235_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmknowledgeaugencoder
|
.md
|
The knowledge-augmented encoder of REALM outputting masked language model logits and marginal log-likelihood loss.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
|
235_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmknowledgeaugencoder
|
.md
|
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
235_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmreader
|
.md
|
The reader of REALM.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
235_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmreader
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
235_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmforopenqa
|
.md
|
`RealmForOpenQA` for end-to-end open domain question answering.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`RealmConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
235_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/realm.md
|
https://huggingface.co/docs/transformers/en/model_doc/realm/#realmforopenqa
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: block_embedding_to
- forward
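A sketch of end-to-end open-domain QA with `RealmForOpenQA`, assuming an open-QA REALM checkpoint such as `google/realm-orqa-nq-openqa` and passing the retriever at load time; the question and answer strings are illustrative:
```python
from transformers import RealmForOpenQA, RealmRetriever, RealmTokenizer

# Assumed open-QA checkpoint (REALM fine-tuned on Natural Questions).
checkpoint = "google/realm-orqa-nq-openqa"
retriever = RealmRetriever.from_pretrained(checkpoint)
tokenizer = RealmTokenizer.from_pretrained(checkpoint)
model = RealmForOpenQA.from_pretrained(checkpoint, retriever=retriever)

question = "Who is the pioneer in modern computer science?"
question_ids = tokenizer([question], return_tensors="pt")
answer_ids = tokenizer(
    ["alan mathison turing"],
    add_special_tokens=False,
    return_token_type_ids=False,
    return_attention_mask=False,
).input_ids

# Retrieve evidence blocks, read them, and decode the predicted answer span.
reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=False)
predicted_answer = tokenizer.decode(predicted_answer_ids)
```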
|
235_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
236_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
236_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#overview
|
.md
|
Video-LLaVA is an open-source multimodal LLM trained by fine-tuning LLaMA/Vicuna on multimodal instruction-following data generated by LLaVA-1.5 and VideoChat. It is an auto-regressive language model based on the transformer architecture. Video-LLaVA unifies visual representations in the language feature space, enabling an LLM to perform visual reasoning on both images and videos simultaneously.
|
236_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#overview
|
.md
|
The Video-LLaVA model was proposed in [Video-LLaVA: Learning United Visual Representation by Alignment Before Projection](https://arxiv.org/abs/2311.10122) by Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, Li Yuan.
The abstract from the paper is the following:
*The Large Vision-Language Model (LVLM) has enhanced the performance of various downstream tasks in
visual-language understanding. Most existing approaches
encode images and videos into separate feature spaces,
|
236_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#overview
|
.md
|
visual-language understanding. Most existing approaches
encode images and videos into separate feature spaces,
which are then fed as inputs to large language models.
However, due to the lack of unified tokenization for images and videos, namely misalignment before projection, it
becomes challenging for a Large Language Model (LLM)
|
236_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#overview
|
.md
|
becomes challenging for a Large Language Model (LLM)
to learn multi-modal interactions from several poor projection layers. In this work, we unify visual representation into the language feature space to advance the foundational LLM towards a unified LVLM. As a result, we establish a simple but robust LVLM baseline, Video-LLaVA,
which learns from a mixed dataset of images and videos,
|
236_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#overview
|
.md
|
which learns from a mixed dataset of images and videos,
mutually enhancing each other. Video-LLaVA achieves superior performances on a broad range of 9 image benchmarks across 5 image question-answering datasets and 4
image benchmark toolkits. Additionally, our Video-LLaVA
also outperforms Video-ChatGPT by 5.8%, 9.9%, 18.6%,
and 10.1% on MSRVTT, MSVD, TGIF, and ActivityNet, respectively. Notably, extensive experiments demonstrate that
Video-LLaVA mutually benefits images and videos within
|
236_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#overview
|
.md
|
Video-LLaVA mutually benefits images and videos within
a unified visual representation, outperforming models designed specifically for images or videos. We aim for this
work to provide modest insights into the multi-modal inputs
for the LLM*
|
236_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#usage-tips
|
.md
|
- We advise users to use `padding_side="left"` when computing batched generation, as it leads to more accurate results. Simply make sure to set `processor.tokenizer.padding_side = "left"` before generating.
- Note that the model has not been explicitly trained to process multiple images/videos in the same prompt; although this is technically possible, you may experience inaccurate results.
- Note that video inputs should have exactly 8 frames, since the models were trained in that setting. A short sketch applying these tips follows this list.
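The sketch below applies both tips (left padding and uniform 8-frame sampling); `total_frames` is a stand-in for the frame count of your clip, obtained in practice as in the decoding snippet further down:
```python
import numpy as np
from transformers import VideoLlavaProcessor

processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")

# Tip 1: left-pad for batched generation.
processor.tokenizer.padding_side = "left"

# Tip 2: sample exactly 8 frames, uniformly spaced over the clip.
total_frames = 128  # illustrative value; use container.streams.video[0].frames in practice
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
```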
|
236_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#usage-tips
|
.md
|
- Note that the video inputs should have exactly 8 frames at the input, since the models were trained in that setting.
This model was contributed by [RaushanTurganbay](https://huggingface.co/RaushanTurganbay).
The original code can be found [here](https://github.com/PKU-YuanGroup/Video-LLaVA).
> [!NOTE]
|
236_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#usage-tips
|
.md
|
The original code can be found [here](https://github.com/PKU-YuanGroup/Video-LLaVA).
> [!NOTE]
> LLaVA models after release v4.46 will raise warnings about adding `processor.patch_size = {{patch_size}}`, `processor.num_additional_image_tokens = {{num_additional_image_tokens}}` and `processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. It is strongly recommended to add the attributes to the processor if you own the model checkpoint, or open a PR if it is not owned by you.
|
236_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#usage-tips
|
.md
|
Adding these attributes means that LLaVA will try to infer the number of image tokens required per image and expand the text with as many `<image>` placeholders as there will be tokens. Usually it is around 500 tokens per image, so make sure that the text is not truncated, as otherwise merging the embeddings will fail.
|
236_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#usage-tips
|
.md
|
The attributes can be obtained from the model config, as `model.config.vision_config.patch_size` or `model.config.vision_feature_select_strategy`. The `num_additional_image_tokens` should be `1` if the vision backbone adds a CLS token or `0` if nothing extra is added to the vision patches.
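For example, a minimal sketch of copying these attributes from the model config onto the processor; the checkpoint name is taken from the snippets below:
```python
from transformers import VideoLlavaForConditionalGeneration, VideoLlavaProcessor

model_id = "LanguageBind/Video-LLaVA-7B-hf"
model = VideoLlavaForConditionalGeneration.from_pretrained(model_id)
processor = VideoLlavaProcessor.from_pretrained(model_id)

# Copy the values the processor needs from the model config.
processor.patch_size = model.config.vision_config.patch_size
processor.vision_feature_select_strategy = model.config.vision_feature_select_strategy
processor.num_additional_image_tokens = 1  # 1 if the vision backbone adds a CLS token, else 0
```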
|
236_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#single-media-mode
|
.md
|
The model can accept both images and videos as input. Here's an example code for inference in half-precision (`torch.float16`):
```python
import av
import torch
import numpy as np
from huggingface_hub import hf_hub_download
from transformers import VideoLlavaForConditionalGeneration, VideoLlavaProcessor
|
236_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#single-media-mode
|
.md
|
def read_video_pyav(container, indices):
'''
Decode the video with PyAV decoder.
Args:
container (`av.container.input.InputContainer`): PyAV container.
indices (`List[int]`): List of frame indices to decode.
Returns:
result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
'''
frames = []
container.seek(0)
start_index = indices[0]
end_index = indices[-1]
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
|
236_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#single-media-mode
|
.md
|
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
frames.append(frame)
return np.stack([x.to_ndarray(format="rgb24") for x in frames])
|
236_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#single-media-mode
|
.md
|
# Load the model in half-precision
model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", torch_dtype=torch.float16, device_map="auto")
processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
|
236_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#single-media-mode
|
.md
|
# Load the video as an np.array, sampling 8 frames uniformly
video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
container = av.open(video_path)
total_frames = container.streams.video[0].frames
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
video = read_video_pyav(container, indices)
|
236_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#single-media-mode
|
.md
|
# For better results, we recommend to prompt the model in the following format
prompt = "USER: <video>\nWhy is this funny? ASSISTANT:"
inputs = processor(text=prompt, videos=video, return_tensors="pt")
|
236_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#single-media-mode
|
.md
|
out = model.generate(**inputs, max_new_tokens=60)
processor.batch_decode(out, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
For a multi-turn conversation, change the prompt format to:
```bash
"USER: <video>\nWhat do you see in this video? ASSISTANT: A baby reading a book. USER: Why is it funny? ASSISTANT:"
```
|
236_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#mixed-media-mode
|
.md
|
The model can also generate from interleaved image-video inputs. However, note that it was not trained in an interleaved image-video setting, which might affect performance. Below is an example usage for mixed media input; add the following lines to the above code snippet:
```python
from PIL import Image
import requests
|
236_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#mixed-media-mode
|
.md
|
# Generate from image and video mixed inputs
# Load an image and write a new prompt
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nHow many cats are there in the image? ASSISTANT: There are two cats. USER: <video>\nWhy is this video funny? ASSISTANT:"
inputs = processor(text=prompt, images=image, videos=clip, padding=True, return_tensors="pt")
|
236_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#mixed-media-mode
|
.md
|
inputs = processor(text=prompt, images=image, videos=clip, padding=True, return_tensors="pt")
# Generate
generate_ids = model.generate(**inputs, max_length=50)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
|
236_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#quantization-using-bitsandbytes-for-memory-efficiency
|
.md
|
The model can be loaded in lower bits, significantly reducing the memory burden while maintaining the performance of the original model. This allows for efficient deployment in resource-constrained cases.
First make sure to install bitsandbytes by running `pip install bitsandbytes` and to have access to a GPU/accelerator that is supported by the library.
<Tip>
|
236_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#quantization-using-bitsandbytes-for-memory-efficiency
|
.md
|
<Tip>
bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend).
|
236_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#quantization-using-bitsandbytes-for-memory-efficiency
|
.md
|
We value your feedback to help identify bugs before the full release! Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.
</Tip>
Load the quantized model by simply adding [`BitsAndBytesConfig`](../main_classes/quantization#transformers.BitsAndBytesConfig) as shown below:
```python
import torch
from transformers import VideoLlavaForConditionalGeneration, BitsAndBytesConfig
|
236_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#quantization-using-bitsandbytes-for-memory-efficiency
|
.md
|
# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", quantization_config=quantization_config, device_map="auto")
```
|
236_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#flash-attention-2-to-speed-up-generation
|
.md
|
Additionally, we can greatly speed-up model inference by using [Flash Attention](../perf_train_gpu_one#flash-attention-2), which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
|
236_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#flash-attention-2-to-speed-up-generation
|
.md
|
```bash
pip install -U flash-attn --no-build-isolation
```
Also, you should have hardware that is compatible with FlashAttention-2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using FlashAttention-2, simply add `attn_implementation="flash_attention_2"` when loading the model as follows:
|
236_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#flash-attention-2-to-speed-up-generation
|
.md
|
```python
import torch
from transformers import VideoLlavaForConditionalGeneration
|
236_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#flash-attention-2-to-speed-up-generation
|
.md
|
model = VideoLlavaForConditionalGeneration.from_pretrained(
"LanguageBind/Video-LLaVA-7B-hf",
torch_dtype=torch.float16,
attn_implementation="flash_attention_2",
).to(0)
```
|
236_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaconfig
|
.md
|
This is the configuration class to store the configuration of a [`VideoLlavaForConditionalGeneration`]. It is used to instantiate a
VideoLlava model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a configuration similar to that of the Video-LLaVA-7B model,
e.g. [LanguageBind/Video-LLaVA-7B-hf](https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf)
|
236_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaconfig
|
.md
|
e.g. [LanguageBind/Video-LLaVA-7B-hf](https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`VideoLlavaVisionConfig`, *optional*):
Custom vision config or dict. Defaults to `CLIPVisionConfig` if not indicated.
text_config (`Union[AutoConfig, dict]`, *optional*):
|
236_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaconfig
|
.md
|
text_config (`Union[AutoConfig, dict]`, *optional*):
The config object of the text backbone. Can be any of `LlamaConfig` or `MistralConfig`.
Defaults to `LlamaConfig` if not indicated.
ignore_index (`int`, *optional*, defaults to -100):
The ignore index for the loss function.
image_token_index (`int`, *optional*, defaults to 32000):
The image token index to encode the image prompt.
video_token_index (`int`, *optional*, defaults to 32001):
The video token index to encode the video prompt.
|
236_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaconfig
|
.md
|
video_token_index (`int`, *optional*, defaults to 32001):
The video token index to encode the video prompt.
projector_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The activation function used by the multimodal projector.
vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`):
The feature selection strategy used to select the vision feature from the CLIP backbone.
Can be either "full" to select all features or "default" to select features without `CLS`.
|
236_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaconfig
|
.md
|
Can be either "full" to select all features or "default" to select features without `CLS`.
vision_feature_layer (`int`, *optional*, defaults to -2):
The index of the layer to select the vision feature.
image_seq_length (`int`, *optional*, defaults to 256):
Sequence length of one image embedding.
video_seq_length (`int`, *optional*, defaults to 2056):
Sequence length of one video embedding.
multimodal_projector_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the multimodal projector.
|
236_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaconfig
|
.md
|
multimodal_projector_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the multimodal projector.
Example:
```python
>>> from transformers import VideoLlavaForConditionalGeneration, VideoLlavaConfig, CLIPVisionConfig, LlamaConfig
|
236_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaconfig
|
.md
|
>>> # Initializing a CLIP-vision config
>>> vision_config = CLIPVisionConfig()
>>> # Initializing a Llama config
>>> text_config = LlamaConfig()
>>> # Initializing a VideoLlava video_llava-1.5-7b style configuration
>>> configuration = VideoLlavaConfig(vision_config, text_config)
>>> # Initializing a model from the video_llava-1.5-7b style configuration
>>> model = VideoLlavaForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
236_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaimageprocessor
|
.md
|
Constructs a Video-LLaVA image processor (derived from the CLIP image processor).
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
|
236_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaimageprocessor
|
.md
|
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
|
236_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaimageprocessor
|
.md
|
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
`preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to 224):
Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`):
|
236_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaimageprocessor
|
.md
|
method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
|
236_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaimageprocessor
|
.md
|
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
|
236_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaimageprocessor
|
.md
|
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
|
236_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaimageprocessor
|
.md
|
Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
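A construction sketch that passes the documented arguments explicitly (in practice you would normally just call `VideoLlavaImageProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")`); the dict form of `crop_size` is an assumption about how the 224 default is expressed:
```python
from transformers import VideoLlavaImageProcessor

image_processor = VideoLlavaImageProcessor(
    do_resize=True,
    size={"shortest_edge": 224},               # shortest edge resized to 224, aspect ratio kept
    do_center_crop=True,
    crop_size={"height": 224, "width": 224},   # assumed dict form of the 224 default
    do_rescale=True,
    rescale_factor=1 / 255,
    do_normalize=True,
    image_mean=[0.48145466, 0.4578275, 0.40821073],
    image_std=[0.26862954, 0.26130258, 0.27577711],
    do_convert_rgb=True,
)
```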
|
236_8_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaprocessor
|
.md
|
Constructs a VideoLlava processor which wraps a VideoLlava image processor and a Llava tokenizer into a single processor.
[`VideoLlavaProcessor`] offers all the functionalities of [`VideoLlavaImageProcessor`] and [`LlamaTokenizerFast`]. See the
[`~VideoLlavaProcessor.__call__`] and [`~VideoLlavaProcessor.decode`] for more information.
Args:
image_processor ([`VideoLlavaImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
|
236_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaprocessor
|
.md
|
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
The tokenizer is a required input.
patch_size (`int`, *optional*, defaults to 14):
Patch size from the vision tower.
vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`):
The feature selection strategy used to select the vision feature from the vision backbone.
Should be the same as in the model's config.
image_token (`str`, *optional*, defaults to `"<image>"`):
|
236_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaprocessor
|
.md
|
Should be the same as in the model's config.
image_token (`str`, *optional*, defaults to `"<image>"`):
Special token used to denote image location.
video_token (`str`, *optional*, defaults to `"<video>"`):
Special token used to denote video location.
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string.
num_additional_image_tokens (`int`, *optional*, defaults to 1):
|
236_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaprocessor
|
.md
|
in a chat into a tokenizable string.
num_additional_image_tokens (`int`, *optional*, defaults to 1):
Number of additional tokens added to the image embeddings, such as CLS (+1). If the backbone has no CLS or other
extra tokens appended, no need to set this arg.
|
236_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaforconditionalgeneration
|
.md
|
The VideoLlava model which consists of a vision backbone and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
236_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`VideoLlavaConfig`] or [`VideoLlavaVisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
236_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/video_llava.md
|
https://huggingface.co/docs/transformers/en/model_doc/video_llava/#videollavaforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
236_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
237_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
237_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
The MGP-STR model was proposed in [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao. MGP-STR is a conceptually **simple** yet **powerful** vision Scene Text Recognition (STR) model, which is built upon the [Vision Transformer (ViT)](vit). To integrate linguistic knowledge, a Multi-Granularity Prediction (MGP) strategy is proposed to inject information from the language modality into the model in an implicit way.
|
237_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
The abstract from the paper is the following:
|
237_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
*Scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this challenging problem, numerous innovative methods have been successively proposed and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformer (ViT) to construct a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art
|
237_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e. , subword representations (BPE and WordPiece) widely-used in NLP are introduced into the output space, in
|
237_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
an implicit way, i.e. , subword representations (BPE and WordPiece) widely-used in NLP are introduced into the output space, in addition to the conventional character level representation, while no independent language model (LM) is adopted. The resultant algorithm (termed MGP-STR) is able to push the performance envelop of STR to an even higher level. Specifically, it achieves an average recognition accuracy of 93.35% on standard benchmarks.*
|
237_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/mgp_str_architecture.png"
alt="drawing" width="600"/>
<small> MGP-STR architecture. Taken from the <a href="https://arxiv.org/abs/2209.03592">original paper</a>. </small>
|
237_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
<small> MGP-STR architecture. Taken from the <a href="https://arxiv.org/abs/2209.03592">original paper</a>. </small>
MGP-STR is trained on two synthetic datasets [MJSynth](http://www.robots.ox.ac.uk/~vgg/data/text/) (MJ) and [SynthText](http://www.robots.ox.ac.uk/~vgg/data/scenetext/) (ST) without fine-tuning on other datasets. It achieves state-of-the-art results on six standard Latin scene text benchmarks, including 3 regular text datasets (IC13, SVT, IIIT) and 3 irregular ones (IC15, SVTP, CUTE).
|
237_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#overview
|
.md
|
This model was contributed by [yuekun](https://huggingface.co/yuekun). The original code can be found [here](https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/OCR/MGP-STR).
|
237_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#inference-example
|
.md
|
[`MgpstrModel`] accepts images as input and generates three types of predictions, which represent textual information at different granularities.
The three types of predictions are fused to give the final prediction result.
The [`ViTImageProcessor`] class is responsible for preprocessing the input image and
[`MgpstrTokenizer`] decodes the generated character tokens to the target string. The
[`MgpstrProcessor`] wraps [`ViTImageProcessor`] and [`MgpstrTokenizer`]
|
237_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#inference-example
|
.md
|
[`MgpstrProcessor`] wraps [`ViTImageProcessor`] and [`MgpstrTokenizer`]
into a single instance to both extract the input features and decode the predicted token ids.
- Step-by-step Optical Character Recognition (OCR)
```py
>>> from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition
>>> import requests
>>> from PIL import Image
|
237_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#inference-example
|
.md
|
>>> processor = MgpstrProcessor.from_pretrained('alibaba-damo/mgp-str-base')
>>> model = MgpstrForSceneTextRecognition.from_pretrained('alibaba-damo/mgp-str-base')
>>> # load image from the IIIT-5k dataset
>>> url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
>>> pixel_values = processor(images=image, return_tensors="pt").pixel_values
>>> outputs = model(pixel_values)
|
237_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#inference-example
|
.md
|
>>> pixel_values = processor(images=image, return_tensors="pt").pixel_values
>>> outputs = model(pixel_values)
>>> generated_text = processor.batch_decode(outputs.logits)['generated_text']
```
|
237_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
|
.md
|
This is the configuration class to store the configuration of an [`MgpstrModel`]. It is used to instantiate an
MGP-STR model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the MGP-STR
[alibaba-damo/mgp-str-base](https://huggingface.co/alibaba-damo/mgp-str-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
237_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`List[int]`, *optional*, defaults to `[32, 128]`):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 4):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
max_token_length (`int`, *optional*, defaults to 27):
|
237_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
|
.md
|
The number of input channels.
max_token_length (`int`, *optional*, defaults to 27):
The max number of output tokens.
num_character_labels (`int`, *optional*, defaults to 38):
The number of classes for the character head.
num_bpe_labels (`int`, *optional*, defaults to 50257):
The number of classes for the BPE head.
num_wordpiece_labels (`int`, *optional*, defaults to 30522):
The number of classes for the wordpiece head.
hidden_size (`int`, *optional*, defaults to 768):
The embedding dimension.
|
237_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
|
.md
|
The number of classes for the wordpiece head.
hidden_size (`int`, *optional*, defaults to 768):
The embedding dimension.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
mlp_ratio (`float`, *optional*, defaults to 4.0):
The ratio of mlp hidden dim to embedding dim.
qkv_bias (`bool`, *optional*, defaults to `True`):
|
237_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mgp-str.md
|
https://huggingface.co/docs/transformers/en/model_doc/mgp-str/#mgpstrconfig
|
.md
|
The ratio of mlp hidden dim to embedding dim.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
distilled (`bool`, *optional*, defaults to `False`):
Model includes a distillation token and head as in DeiT models.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
drop_rate (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder.
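A configuration sketch in the same spirit as the other config examples in these docs; it assumes the defaults listed above and only overrides a couple of them for illustration:
```python
from transformers import MgpstrConfig, MgpstrForSceneTextRecognition

# Default MGP-STR configuration (alibaba-damo/mgp-str-base style).
configuration = MgpstrConfig()

# Optionally override documented arguments, e.g. the maximum number of output tokens
# and the character-head label count (values shown are the documented defaults).
configuration = MgpstrConfig(max_token_length=27, num_character_labels=38)

# Initializing a model (with random weights) from the configuration.
model = MgpstrForSceneTextRecognition(configuration)

# Accessing the model configuration.
configuration = model.config
```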
|
237_3_4
|