source (stringclasses, 470 values) | url (stringlengths 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths 1-512) | chunk_id (stringlengths 5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast | .md | contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to cleanup spaces after decoding, cleanup consists in removing potential artifacts like
extra spaces.
unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. | 142_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast | .md | The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"</s>"`):
The end of sequence token.
add_bos_token (`bool`, *optional*, defaults to `True`): | 142_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast | .md | The end of sequence token.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Llama should be used
legacy (`bool`, *optional*): | 142_6_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast | .md | Whether or not the default system prompt for Llama should be used
legacy (`bool`, *optional*):
Whether or not the `legacy` behavior of the tokenizer should be used. Legacy is before the merge of #24622
and #25224 which includes fixes to properly handle tokens that appear after special tokens.
Make sure to also set `from_slow` to `True`.
A simple example:
- `legacy=True`:
```python
>>> from transformers import LlamaTokenizerFast | 142_6_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast | .md | >>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=True, from_slow=True)
>>> tokenizer.encode("Hello <s>.") # 869 is '▁.'
[1, 15043, 29871, 1, 869]
```
- `legacy=False`:
```python
>>> from transformers import LlamaTokenizerFast | 142_6_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast | .md | >>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=False, from_slow=True)
>>> tokenizer.encode("Hello <s>.") # 29889 is '.'
[1, 15043, 29871, 1, 29889]
```
Checkout the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details.
add_prefix_space (`bool`, *optional*):
Whether or not the tokenizer should automatically add a prefix space
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences | 142_6_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamatokenizerfast | .md | Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary | 142_6_10 |
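As a quick illustration of the `add_bos_token` / `add_eos_token` flags documented above, here is a minimal sketch (not part of the original docs; the `huggyllama/llama-7b` checkpoint is reused from the examples above, and `add_eos_token=True` is a non-default setting shown only for illustration):
```python
>>> from transformers import LlamaTokenizerFast

>>> tokenizer = LlamaTokenizerFast.from_pretrained(
...     "huggyllama/llama-7b", add_bos_token=True, add_eos_token=True
... )
>>> ids = tokenizer.encode("Hello world")
>>> # BOS is prepended and EOS is appended because of the two flags above
>>> ids[0] == tokenizer.bos_token_id, ids[-1] == tokenizer.eos_token_id
(True, True)
```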
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamamodel | .md | The bare LLaMA Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 142_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamamodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 142_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamamodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`]
Args:
config: LlamaConfig
Methods: forward | 142_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforcausallm | .md | No docstring available for LlamaForCausalLM
Methods: forward | 142_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforsequenceclassification | .md | The LLaMa Model transformer with a sequence classification head on top (linear layer).
[`LlamaForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If | 142_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforsequenceclassification | .md | `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the | 142_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforsequenceclassification | .md | This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters: | 142_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforsequenceclassification | .md | and behavior.
Parameters:
config ([`LlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 142_9_3 |
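The last-token selection described above is easiest to see in code. The sketch below is not from the original documentation; the checkpoint and `num_labels=2` are illustrative assumptions, and the classification head is freshly initialized, so the predictions are meaningless until fine-tuning:
```python
import torch
from transformers import AutoTokenizer, LlamaForSequenceClassification

model_id = "huggyllama/llama-7b"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Llama has no pad token by default; the model needs one to locate the last
# non-padding token in each row of a batched input.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(["I loved it", "Not great at all"], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
predicted_labels = logits.argmax(dim=-1)
```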
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforquestionanswering | .md | The Llama Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 142_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforquestionanswering | .md | library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not | 142_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamaforquestionanswering | .md | Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 142_10_2 |
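A minimal sketch of turning the span `start`/`end` logits described above into an answer string; the checkpoint is an illustrative assumption, and the span head is newly initialized unless a QA-fine-tuned checkpoint is used:
```python
import torch
from transformers import AutoTokenizer, LlamaForQuestionAnswering

model_id = "huggyllama/llama-7b"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForQuestionAnswering.from_pretrained(model_id)

question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Greedily pick the most likely start and end positions, then decode the span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```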
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamafortokenclassification | .md | The Llama Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 142_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamafortokenclassification | .md | library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not | 142_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#llamafortokenclassification | .md | Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 142_11_2 |
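And a similarly hedged sketch for the per-token predictions described above; `num_labels=5` and the checkpoint are illustrative, so the head is untrained:
```python
import torch
from transformers import AutoTokenizer, LlamaForTokenClassification

model_id = "huggyllama/llama-7b"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForTokenClassification.from_pretrained(model_id, num_labels=5)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)
predicted_token_classes = logits.argmax(dim=-1)
```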
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#flaxllamamodel | .md | No docstring available for FlaxLlamaModel
Methods: __call__ | 142_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama.md | https://huggingface.co/docs/transformers/en/model_doc/llama/#flaxllamaforcausallm | .md | No docstring available for FlaxLlamaForCausalLM
Methods: __call__ | 142_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 143_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 143_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | The LLaVa-NeXT-Video model was proposed in [LLaVA-NeXT: A Strong Zero-shot Video Understanding Model
](https://llava-vl.github.io/blog/2024-04-30-llava-next-video/) by Yuanhan Zhang, Bo Li, Haotian Liu, Yong Jae Lee, Liangke Gui, Di Fu, Jiashi Feng, Ziwei Liu, Chunyuan Li. LLaVa-NeXT-Video improves upon [LLaVa-NeXT](llava_next) by fine-tuning on a mix of video and image data, thus increasing the model's performance on videos. | 143_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | [LLaVA-NeXT](llava_next) surprisingly has strong performance in understanding video content in zero-shot fashion with the AnyRes technique that it uses. The AnyRes technique naturally represents a high-resolution image into multiple images. This technique is naturally generalizable to represent videos because videos can be considered as a set of frames (similar to a set of images in LLaVa-NeXT). The current version of LLaVA-NeXT makes use of AnyRes and trains with supervised fine-tuning (SFT) on top of | 143_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | in LLaVa-NeXT). The current version of LLaVA-NeXT makes use of AnyRes and trains with supervised fine-tuning (SFT) on top of LLaVA-Next on video data to achieve better video understanding capabilities. The model is the current SOTA among open-source models on [VideoMME bench](https://arxiv.org/abs/2405.21075). | 143_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | The introduction from the blog is the following:
On January 30, 2024, we released LLaVA-NeXT, an open-source Large Multimodal Model (LMM) that has been trained exclusively on text-image data. With the proposed AnyRes technique, it boosts capabilities in reasoning, OCR, and world knowledge, demonstrating remarkable performance across a spectrum of image-based multimodal understanding tasks, and even exceeding Gemini-Pro on several image benchmarks, e.g. MMMU and MathVista. | 143_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | **In today’s exploration, we delve into the performance of LLaVA-NeXT within the realm of video understanding tasks. We reveal that LLaVA-NeXT surprisingly has strong performance in understanding video content. The current version of LLaVA-NeXT for videos has several improvements: | 143_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | - Zero-shot video representation capabilities with AnyRes: The AnyRes technique naturally represents a high-resolution image into multiple images that a pre-trained ViT is able to digest, and forms them into a concatenated sequence. This technique is naturally generalizable to represent videos (consisting of multiple frames), allowing the image-only-trained LLaVA-Next model to perform surprisingly well on video tasks. Notably, this is the first time that LMMs show strong zero-shot modality transfer | 143_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | to perform surprisingly well on video tasks. Notably, this is the first time that LMMs show strong zero-shot modality transfer ability. | 143_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | - Inference with length generalization improves on longer videos. The linear scaling technique enables length generalization, allowing LLaVA-NeXT to effectively handle long-video beyond the limitation of the "max_token_length" of the LLM. | 143_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | - Strong video understanding ability. (1) LLaVA-Next-Image, which combines the above two techniques, yields superior zero-shot performance than open-source LMMs tuned on videos. (2) LLaVA-Next-Video, further supervised fine-tuning (SFT) LLaVA-Next-Image on video data, achieves better video understanding capabilities compared to LLaVA-Next-Image. (3) LLaVA-Next-Video-DPO, which aligns the model response with AI feedback using direct preference optimization (DPO), showing significant performance boost. | 143_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#overview | .md | - Efficient deployment and inference with SGLang. It allows 5x faster inference on video tasks, allowing more scalable serving such as million-level video re-captioning. See instructions in our repo.**
This model was contributed by [RaushanTurganbay](https://huggingface.co/RaushanTurganbay).
The original code can be found [here](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/inference). | 143_1_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | - We advise users to use `padding_side="left"` when computing batched generation as it leads to more accurate results. Simply make sure to call `processor.tokenizer.padding_side = "left"` before generating.
<Tip warning={true}>
- Llava-Next uses a different number of patches per image and thus has to pad the inputs inside the modeling code, aside from the padding done when processing the inputs. The default setting is "left-padding" if the model is in `eval()` mode, otherwise "right-padding".
</Tip> | 143_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | </Tip>
> [!NOTE]
> LLaVA models after release v4.46 will raise warnings about adding `processor.patch_size = {{patch_size}}`, `processor.num_additional_image_tokens = {{num_additional_image_tokens}}` and `processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. It is strongly recommended to add the attributes to the processor if you own the model checkpoint, or open a PR if it is not owned by you. | 143_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | Adding these attributes means that LLaVA will try to infer the number of image tokens required per image and expand the text with as many `<image>` placeholders as there will be tokens. This is usually around 500 tokens per image, so make sure that the text is not truncated, as otherwise merging the embeddings will fail. | 143_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | The attributes can be obtained from the model config, e.g. `model.config.vision_config.patch_size` or `model.config.vision_feature_select_strategy`. The `num_additional_image_tokens` should be `1` if the vision backbone adds a CLS token or `0` if nothing extra is added to the vision patches. | 143_2_3 |
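A hedged sketch of setting these attributes, reading the values from the model config as suggested above (the `num_additional_image_tokens = 1` line assumes a CLIP-style vision backbone that prepends a CLS token):
```python
from transformers import AutoConfig, LlavaNextVideoProcessor

model_id = "llava-hf/LLaVA-NeXT-Video-7B-hf"
config = AutoConfig.from_pretrained(model_id)
processor = LlavaNextVideoProcessor.from_pretrained(model_id)

# Let the processor expand <image>/<video> placeholders to the right number of tokens.
processor.patch_size = config.vision_config.patch_size
processor.vision_feature_select_strategy = config.vision_feature_select_strategy
processor.num_additional_image_tokens = 1  # assumes the CLIP vision tower adds a CLS token
```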
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | - Note that each checkpoint has been trained with a specific prompt format, depending on which large language model (LLM) was used. You can use tokenizer's `apply_chat_template` to format your prompts correctly. Below is an example of how to do that.
We will use [LLaVA-NeXT-Video-7B-hf](https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf) and a conversation history of videos and images. Each content field has to be a list of dicts, as follows:
```python | 143_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | ```python
from transformers import LlavaNextVideoProcessor | 143_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | processor = LlavaNextVideoProcessor.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf") | 143_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | conversation = [
{
"role": "system",
"content": [
{"type": "text", "text": "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions."},
],
},
{
"role": "user",
"content": [
{"type": "text", "text": "What’s shown in this image?"},
{"type": "image"},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": "This image shows a red stop sign."},]
},
{ | 143_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#usage-tips | .md | "role": "user",
"content": [
{"type": "text", "text": "Why is this video funny?"},
{"type": "video"},
],
},
]
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Note that the template simply formats your prompt, you still have to tokenize it and obtain pixel values for your visuals
print(text_prompt)
``` | 143_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#single-media-mode | .md | The model can accept both images and videos as input. Here's an example code for inference in half-precision (`torch.float16`):
```python
import av
import torch
import numpy as np
from huggingface_hub import hf_hub_download  # needed for the demo video download below
from transformers import LlavaNextVideoForConditionalGeneration, LlavaNextVideoProcessor | 143_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#single-media-mode | .md | def read_video_pyav(container, indices):
'''
Decode the video with PyAV decoder.
Args:
container (`av.container.input.InputContainer`): PyAV container.
indices (`List[int]`): List of frame indices to decode.
Returns:
result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
'''
frames = []
container.seek(0)
start_index = indices[0]
end_index = indices[-1]
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices: | 143_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#single-media-mode | .md | for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
frames.append(frame)
return np.stack([x.to_ndarray(format="rgb24") for x in frames]) | 143_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#single-media-mode | .md | # Load the model in half-precision
model = LlavaNextVideoForConditionalGeneration.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf", torch_dtype=torch.float16, device_map="auto")
processor = LlavaNextVideoProcessor.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf") | 143_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#single-media-mode | .md | # Load the video as an np.array, sampling uniformly 8 frames (can sample more for longer videos)
video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
container = av.open(video_path)
total_frames = container.streams.video[0].frames
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
video = read_video_pyav(container, indices)
conversation = [
{ | 143_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#single-media-mode | .md | conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "Why is this video funny?"},
{"type": "video"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=prompt, videos=video, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
processor.batch_decode(out, skip_special_tokens=True, clean_up_tokenization_spaces=True)
``` | 143_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#mixed-media-mode | .md | The model can also generate from interleaved image-video inputs. Note, however, that it was not trained in an interleaved image-video setting, which might affect performance. Below is an example of mixed-media usage; add the following lines to the code snippet above:
```python
from PIL import Image
import requests | 143_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#mixed-media-mode | .md | # Generate from image and video mixed inputs
# Load an image and write a new prompt
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "How many cats are there in the image?"},
{"type": "image"},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": "There are two cats"}],
},
{ | 143_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#mixed-media-mode | .md | "role": "assistant",
"content": [{"type": "text", "text": "There are two cats"}],
},
{
"role": "user",
"content": [
{"type": "text", "text": "Why is this video funny?"},
{"type": "video"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, videos=video, padding=True, return_tensors="pt") | 143_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#mixed-media-mode | .md | # Generate
generate_ids = model.generate(**inputs, max_length=50)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
``` | 143_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#quantization-using-bitsandbytes-for-memory-efficiency | .md | The model can be loaded in lower bits, significantly reducing memory burden while maintaining the performance of the original model. This allows for efficient deployment in resource-constrained settings.
First, make sure to install bitsandbytes by running `pip install bitsandbytes` and to have access to a GPU/accelerator that is supported by the library.
<Tip> | 143_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#quantization-using-bitsandbytes-for-memory-efficiency | .md | <Tip>
bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend). | 143_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#quantization-using-bitsandbytes-for-memory-efficiency | .md | We value your feedback to help identify bugs before the full release! Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.
</Tip>
Then simply load the quantized model by adding [`BitsAndBytesConfig`](../main_classes/quantization#transformers.BitsAndBytesConfig) as shown below:
```python
import torch
from transformers import BitsAndBytesConfig, LlavaNextVideoForConditionalGeneration, LlavaNextVideoProcessor | 143_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#quantization-using-bitsandbytes-for-memory-efficiency | .md | # specify how to quantize the model
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
model = LlavaNextVideoForConditionalGeneration.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf", quantization_config=quantization_config, device_map="auto")
``` | 143_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#flash-attention-2-to-speed-up-generation | .md | Additionally, we can greatly speed up model inference by using [Flash Attention](../perf_train_gpu_one#flash-attention-2), which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
``` | 143_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#flash-attention-2-to-speed-up-generation | .md | ```bash
pip install -U flash-attn --no-build-isolation
```
Also, you should have hardware that is compatible with FlashAttention-2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using Flash Attention-2, simply add `attn_implementation="flash_attention_2"` when loading the model as follows: | 143_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#flash-attention-2-to-speed-up-generation | .md | ```python
import torch
from transformers import LlavaNextVideoForConditionalGeneration | 143_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#flash-attention-2-to-speed-up-generation | .md | model = LlavaNextVideoForConditionalGeneration.from_pretrained(
"llava-hf/LLaVA-NeXT-Video-7B-hf",
torch_dtype=torch.float16,
attn_implementation="flash_attention_2",
).to(0)
``` | 143_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | This is the configuration class to store the configuration of a [`LlavaNextVideoForConditionalGeneration`]. It is used to instantiate a
Llava-NeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the [llava-hf/LLaVA-NeXT-Video-7B-hf](https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf)
model. | 143_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | model.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`Union[AutoConfig, dict]`, *optional*, defaults to `CLIPVisionConfig`):
The config object or dictionary of the vision backbone.
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `LlamaConfig`):
The config object or dictionary of the text backbone. | 143_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | The config object or dictionary of the text backbone.
ignore_index (`int`, *optional*, defaults to -100):
The ignore index for the loss function.
image_token_index (`int`, *optional*, defaults to 32001):
The image token index to encode the image prompt.
projector_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The activation function used by the multimodal projector.
multimodal_projector_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the multimodal projector. | 143_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | multimodal_projector_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the multimodal projector.
vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`):
The feature selection strategy used to select the vision feature from the vision backbone.
Can be one of `"default"` or `"full"`. If `"default"`, the CLS token is removed from the vision features.
If `"full"`, the full vision features are used.
vision_feature_layer (`int`, *optional*, defaults to -2): | 143_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | If `"full"`, the full vision features are used.
vision_feature_layer (`int`, *optional*, defaults to -2):
The index of the layer to select the vision feature.
image_grid_pinpoints (`List`, *optional*, defaults to `[[336, 672], [672, 336], [672, 672], [1008, 336], [336, 1008]]`):
A list of possible resolutions to use for processing high resolution images. Each item in the list should be a tuple or list
of the form `(height, width)`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`): | 143_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | of the form `(height, width)`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether the model's input and output word embeddings should be tied.
video_token_index (`int`, *optional*, defaults to 32000):
The video token index to encode the video prompt.
spatial_pool_mode (`str`, *optional*, defaults to `"average"`):
Pooling mode to use for videos. Can be "average", "max" or "conv".
spatial_pool_stride (`int`, *optional*, defaults to 2):
Stride used in the pooling layer for videos. | 143_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | spatial_pool_stride (`int`, *optional*, defaults to 2):
Stride used in the pooling layer for videos.
image_seq_length (`int`, *optional*, defaults to 576):
Sequence length of one image embedding.
video_seq_length (`int`, *optional*, defaults to 288):
Sequence length of one video embedding.
Example:
```python
>>> from transformers import LlavaNextVideoForConditionalGeneration, LlavaNextVideoConfig, CLIPVisionConfig, LlamaConfig | 143_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoconfig | .md | >>> # Initializing a CLIP-vision config
>>> vision_config = CLIPVisionConfig()
>>> # Initializing a Llama config
>>> text_config = LlamaConfig()
>>> configuration = LlavaNextVideoConfig(vision_config, text_config)
>>> model = LlavaNextVideoForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 143_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoprocessor | .md | Constructs a LLaVa-NeXT-Video processor which wraps a LLaVa-NeXT image processor, LLaVa-NeXT-Video video processor and
a LLaMa tokenizer into a single processor.
[`LlavaNextVideoProcessor`] offers all the functionalities of [`LlavaNextImageProcessor`], [`LlavaNextVideoImageProcessor`] and
[`LlamaTokenizerFast`]. See the [`~LlavaNextVideoProcessor.__call__`] and [`~LlavaNextVideoProcessor.decode`] for more information.
Args:
video_processor ([`LlavaNextVideoImageProcessor`], *optional*): | 143_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoprocessor | .md | Args:
video_processor ([`LlavaNextVideoImageProcessor`], *optional*):
The video processor is a required input.
image_processor ([`LlavaNextImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
The tokenizer is a required input.
chat_template (`str`, *optional*):
Jinja chat template that will be used in tokenizer's `apply_chat_template`
patch_size (`int`, *optional*):
Patch size from the vision tower. | 143_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoprocessor | .md | patch_size (`int`, *optional*):
Patch size from the vision tower.
vision_feature_select_strategy (`str`, *optional*):
The feature selection strategy used to select the vision feature from the vision backbone.
Should be the same as in the model's config
video_token (`str`, *optional*, defaults to `"<video>"`):
Special token used to denote video location.
image_token (`str`, *optional*, defaults to `"<image>"`):
Special token used to denote image location. | 143_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoprocessor | .md | image_token (`str`, *optional*, defaults to `"<image>"`):
Special token used to denote image location.
num_additional_image_tokens (`int`, *optional*, defaults to 0):
Number of additional tokens added to the image embeddings, such as CLS (+1). If the backbone has no CLS or other
extra tokens appended, no need to set this arg. | 143_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | Constructs a LLaVa-NeXT-Video video processor. Based on [`CLIPImageProcessor`] with incorporation of processing each video frame.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`): | 143_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | `do_resize` in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
image_grid_pinpoints (`List` *optional*, defaults to `[[672, 336], [336, 672], [672, 672], [336, 1008], [1008, 336]]`): | 143_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | method.
image_grid_pinpoints (`List` *optional*, defaults to `[[672, 336], [336, 672], [672, 672], [336, 1008], [1008, 336]]`):
A list of possible resolutions to use for processing high resolution images. The best resolution is selected
based on the original size of the image. Can be overridden by `image_grid_pinpoints` in the `preprocess`
method. Not used for processing videos.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): | 143_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | method. Not used for processing videos.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
`preprocess` method.
crop_size (`Dict[str, int]` *optional*, defaults to 224): | 143_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | `preprocess` method.
crop_size (`Dict[str, int]` *optional*, defaults to 224):
Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): | 143_9_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`): | 143_9_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the | 143_9_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoimageprocessor | .md | Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB. | 143_9_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoforconditionalgeneration | .md | The LLAVA-NeXT model which consists of a vision backbone and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 143_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoforconditionalgeneration | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`LlavaNextVideoConfig`] or [`LlavaNextVideoVisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not | 143_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next_video.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next_video/#llavanextvideoforconditionalgeneration | .md | Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 143_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/ | .md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 144_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 144_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/#overview | .md | The XLSR-Wav2Vec2 model was proposed in [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael
Auli.
The abstract from the paper is the following:
*This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw | 144_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/#overview | .md | *This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw
waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over
masked latent speech representations and jointly learns a quantization of the latents shared across languages. The
resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly | 144_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/#overview | .md | resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly
outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction
of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to
a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong | 144_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/#overview | .md | a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong
individual models. Analysis shows that the latent discrete speech representations are shared across languages with
increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing
XLSR-53, a large model pretrained in 53 languages.* | 144_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/#overview | .md | XLSR-53, a large model pretrained in 53 languages.*
The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).
Note: Meta (FAIR) released a new version of [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert) - it's pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per [this guide](https://huggingface.co/blog/fine-tune-w2v2-bert). | 144_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xlsr_wav2vec2.md | https://huggingface.co/docs/transformers/en/model_doc/xlsr_wav2vec2/#usage-tips | .md | - XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- XLSR-Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be
decoded using [`Wav2Vec2CTCTokenizer`].
<Tip>
XLSR-Wav2Vec2's architecture is based on the Wav2Vec2 model, so one can refer to [Wav2Vec2's documentation page](wav2vec2).
</Tip> | 144_2_0 |
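A minimal sketch of the CTC decoding flow described in the tips above. The checkpoint name is an illustrative choice of an XLSR model fine-tuned with a CTC head, and the silent dummy waveform stands in for real 16 kHz audio:
```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Illustrative checkpoint; any XLSR-Wav2Vec2 model fine-tuned for CTC works the same way.
model_id = "facebook/wav2vec2-large-xlsr-53-german"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# One second of silence at 16 kHz as a stand-in for a real speech waveform.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary, then the tokenizer collapses repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```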
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 145_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 145_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/#overview | .md | The MVP model was proposed in [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
According to the abstract,
- MVP follows a standard Transformer encoder-decoder architecture.
- MVP is supervised pre-trained using labeled datasets.
- MVP also has task-specific soft prompts to stimulate the model's capacity in performing a certain task. | 145_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/#overview | .md | - MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and | 145_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/#overview | .md | text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering. | 145_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/#overview | .md | This model was contributed by [Tianyi Tang](https://huggingface.co/StevenTang). The detailed information and instructions can be found [here](https://github.com/RUCAIBox/MVP). | 145_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mvp.md | https://huggingface.co/docs/transformers/en/model_doc/mvp/#usage-tips | .md | - We have released a series of models [here](https://huggingface.co/models?filter=mvp), including MVP, MVP with task-specific prompts, and multi-task pre-trained variants.
- If you want to use a model without prompts (standard Transformer), you can load it through `MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp')`.
- If you want to use a model with task-specific prompts, such as summarization, you can load it through `MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp-summarization')`. | 145_2_0 |
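A minimal sketch of the prompt-free loading described above (the input sentence and `max_length` are arbitrary illustrations):
```python
from transformers import MvpForConditionalGeneration, MvpTokenizer

tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

inputs = tokenizer(
    "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
    return_tensors="pt",
)
generated_ids = model.generate(**inputs, max_length=50)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```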