source (string, 470 classes) | url (string, length 49–167) | file_type (string, 1 class) | chunk (string, length 1–512) | chunk_id (string, length 5–9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#multimodal
|
.md
|
The following auto classes are available for the following multimodal tasks.
|
212_75_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#automodelfortablequestionanswering
|
.md
|
AutoModelForTableQuestionAnswering
This is a generic model class that will be instantiated as one of the base model classes of the library when created
with the [`~AutoModel.from_pretrained`] class method or the [`~AutoModel.from_config`] class
method.
This class cannot be instantiated directly using `__init__()` (throws an error).
|
212_76_0
|
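As a minimal usage sketch of how these generic auto classes are meant to be created (the TAPAS checkpoint below is only an illustrative choice, not mandated by the docs), the class is built via `from_pretrained` or `from_config`, never via `__init__`:
```python
from transformers import AutoConfig, AutoModelForTableQuestionAnswering

# Instantiate from a pretrained checkpoint (illustrative TAPAS checkpoint).
model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")

# Or build the same architecture from a config, without loading any weights.
config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")
model = AutoModelForTableQuestionAnswering.from_config(config)

# Calling the constructor directly is not supported:
# AutoModelForTableQuestionAnswering()  # raises an error
```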
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#tfautomodelfortablequestionanswering
|
.md
|
No docstring available for TFAutoModelForTableQuestionAnswering
|
212_77_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#automodelfordocumentquestionanswering
|
.md
|
AutoModelForDocumentQuestionAnswering
This is a generic model class that will be instantiated as one of the base model classes of the library when created
with the [`~AutoModel.from_pretrained`] class method or the [`~AutoModel.from_config`] class
method.
This class cannot be instantiated directly using `__init__()` (throws an error).
|
212_78_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#tfautomodelfordocumentquestionanswering
|
.md
|
No docstring available for TFAutoModelForDocumentQuestionAnswering
|
212_79_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#automodelforvisualquestionanswering
|
.md
|
AutoModelForVisualQuestionAnswering
This is a generic model class that will be instantiated as one of the base model classes of the library when created
with the [`~AutoModel.from_pretrained`] class method or the [`~AutoModel.from_config`] class
method.
This class cannot be instantiated directly using `__init__()` (throws an error).
|
212_80_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#automodelforvision2seq
|
.md
|
AutoModelForVision2Seq
This is a generic model class that will be instantiated as one of the base model classes of the library when created
with the [`~AutoModel.from_pretrained`] class method or the [`~AutoModel.from_config`] class
method.
This class cannot be instantiated directly using `__init__()` (throws an error).
|
212_81_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#tfautomodelforvision2seq
|
.md
|
No docstring available for TFAutoModelForVision2Seq
|
212_82_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#flaxautomodelforvision2seq
|
.md
|
No docstring available for FlaxAutoModelForVision2Seq
|
212_83_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/auto.md
|
https://huggingface.co/docs/transformers/en/model_doc/auto/#automodelforimagetexttotext
|
.md
|
AutoModelForImageTextToText
This is a generic model class that will be instantiated as one of the base model classes of the library when created
with the [`~AutoModel.from_pretrained`] class method or the [`~AutoModel.from_config`] class
method.
This class cannot be instantiated directly using `__init__()` (throws an error).
|
212_84_0
|
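A minimal sketch of using the image-text-to-text auto class, assuming a checkpoint on the Hub that maps to it; the checkpoint name, image URL, and prompt format below are illustrative choices, not prescribed by the docs:
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

# Illustrative checkpoint; any model mapped to this auto class follows the same pattern.
checkpoint = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForImageTextToText.from_pretrained(checkpoint, device_map="auto")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat do you see in this image? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```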
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
213_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
|
213_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#overview
|
.md
|
The InstructBLIP model was proposed in [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
InstructBLIP leverages the [BLIP-2](blip2) architecture for visual instruction tuning.
The abstract from the paper is the following:
|
213_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#overview
|
.md
|
*General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on
|
213_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#overview
|
.md
|
instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the
|
213_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#overview
|
.md
|
zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA
|
213_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#overview
|
.md
|
also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.*
|
213_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg"
alt="drawing" width="600"/>
<small> InstructBLIP architecture. Taken from the <a href="https://arxiv.org/abs/2305.06500">original paper.</a> </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip).
|
213_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#usage-tips
|
.md
|
InstructBLIP uses the same architecture as [BLIP-2](blip2) with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.
> [!NOTE]
|
213_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#usage-tips
|
.md
|
> From release v4.46 onwards, BLIP models will raise warnings about adding `processor.num_query_tokens = {{num_query_tokens}}` and expanding the model embeddings layer to add the special `<image>` token. It is strongly recommended to add the attributes to the processor if you own the model checkpoint, or to open a PR if it is not owned by you. Adding these attributes means that BLIP will add the number of query tokens required per image and expand the text with as many `<image>` placeholders as there will be query tokens.
|
213_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#usage-tips
|
.md
|
of query tokens required per image and expand the text with as many `<image>` placeholders as there will be query tokens. Usually it is around 500 tokens per image, so make sure that the text is not truncated, as otherwise there will be a failure when merging the embeddings.
|
213_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#usage-tips
|
.md
|
The attributes can be obtained from the model config as `model.config.num_query_tokens`, and the model embeddings expansion can be done by following [this link](https://gist.github.com/zucchini-nlp/e9f20b054fa322f84ac9311d9ab67042).
|
213_2_3
|
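A hedged sketch of the attribute update described above, assuming you maintain an InstructBLIP-style checkpoint (the repository name below is illustrative); the full embedding-expansion recipe is in the linked gist:
```python
from transformers import InstructBlipForConditionalGeneration, InstructBlipProcessor

repo = "Salesforce/instructblip-vicuna-7b"  # illustrative checkpoint
processor = InstructBlipProcessor.from_pretrained(repo)
model = InstructBlipForConditionalGeneration.from_pretrained(repo)

# Tell the processor how many query tokens the model uses per image, so it can
# expand the prompt with the matching number of <image> placeholders.
processor.num_query_tokens = model.config.num_query_tokens

# Register the special <image> token and grow the embedding matrix to match.
processor.tokenizer.add_special_tokens({"additional_special_tokens": ["<image>"]})
model.resize_token_embeddings(len(processor.tokenizer))
model.config.image_token_index = processor.tokenizer.convert_tokens_to_ids("<image>")

# If you own the checkpoint, push the updated processor and model back to the Hub.
# processor.push_to_hub(repo); model.push_to_hub(repo)
```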
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipconfig
|
.md
|
[`InstructBlipConfig`] is the configuration class to store the configuration of a
[`InstructBlipForConditionalGeneration`]. It is used to instantiate an InstructBLIP model according to the specified
arguments, defining the vision model, Q-Former model and language model configs. Instantiating a configuration with
the defaults will yield a similar configuration to that of the InstructBLIP
[Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture.
|
213_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipconfig
|
.md
|
[Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`InstructBlipVisionConfig`].
qformer_config (`dict`, *optional*):
|
213_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipconfig
|
.md
|
Dictionary of configuration options used to initialize [`InstructBlipVisionConfig`].
qformer_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`InstructBlipQFormerConfig`].
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize any [`PretrainedConfig`].
num_query_tokens (`int`, *optional*, defaults to 32):
The number of query tokens passed through the Transformer.
image_token_index (`int`, *optional*):
|
213_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipconfig
|
.md
|
The number of query tokens passed through the Transformer.
image_token_index (`int`, *optional*):
Token index of special image token.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import (
... InstructBlipVisionConfig,
... InstructBlipQFormerConfig,
... OPTConfig,
... InstructBlipConfig,
... InstructBlipForConditionalGeneration,
... )
|
213_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipconfig
|
.md
|
>>> # Initializing a InstructBlipConfig with Salesforce/instruct-blip-flan-t5 style configuration
>>> configuration = InstructBlipConfig()
>>> # Initializing a InstructBlipForConditionalGeneration (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
>>> model = InstructBlipForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
|
213_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipconfig
|
.md
|
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a InstructBlipConfig from a InstructBlipVisionConfig, InstructBlipQFormerConfig and any PretrainedConfig
>>> # Initializing InstructBLIP vision, InstructBLIP Q-Former and language model configurations
>>> vision_config = InstructBlipVisionConfig()
>>> qformer_config = InstructBlipQFormerConfig()
>>> text_config = OPTConfig()
|
213_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipconfig
|
.md
|
>>> config = InstructBlipConfig.from_vision_qformer_text_configs(vision_config, qformer_config, text_config)
```
Methods: from_vision_qformer_text_configs
|
213_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionconfig
|
.md
|
This is the configuration class to store the configuration of a [`InstructBlipVisionModel`]. It is used to
instantiate an InstructBLIP vision encoder according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBLIP
[Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture.
|
213_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionconfig
|
.md
|
[Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 1408):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 6144):
|
213_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 6144):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 39):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
image_size (`int`, *optional*, defaults to 224):
|
213_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer encoder.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 14):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
213_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionconfig
|
.md
|
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"gelu"` are supported. to 1e-5): The epsilon used by the layer
normalization layers.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 1e-10):
|
213_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionconfig
|
.md
|
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 1e-10):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries and values in the self-attention layers.
Example:
```python
>>> from transformers import InstructBlipVisionConfig, InstructBlipVisionModel
|
213_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionconfig
|
.md
|
>>> # Initializing a InstructBlipVisionConfig with Salesforce/instruct-blip-flan-t5 style configuration
>>> configuration = InstructBlipVisionConfig()
>>> # Initializing a InstructBlipVisionModel (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
>>> model = InstructBlipVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
213_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
This is the configuration class to store the configuration of a [`InstructBlipQFormerModel`]. It is used to
instantiate an InstructBLIP Querying Transformer (Q-Former) model according to the specified arguments, defining the
model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
the InstructBLIP [Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5)
|
213_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
the InstructBLIP [Salesforce/instruct-blip-flan-t5](https://huggingface.co/Salesforce/instruct-blip-flan-t5)
architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs.
Read the documentation from [`PretrainedConfig`] for more information.
Note that [`InstructBlipQFormerModel`] is very similar to [`BertLMHeadModel`] with interleaved cross-attention.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
|
213_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by
the `input_ids` passed when calling the model.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
213_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
|
213_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
|
213_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
213_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 0):
Token id used for padding sequences.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
|
213_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
|
213_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
cross_attention_frequency (`int`, *optional*, defaults to 2):
The frequency of adding cross-attention to the Transformer layers.
encoder_hidden_size (`int`, *optional*, defaults to 1408):
The hidden size of the hidden states for cross-attention.
Examples:
```python
>>> from transformers import InstructBlipQFormerConfig, InstructBlipQFormerModel
|
213_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformerconfig
|
.md
|
>>> # Initializing a InstructBLIP Salesforce/instruct-blip-flan-t5 style configuration
>>> configuration = InstructBlipQFormerConfig()
>>> # Initializing a model (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
>>> model = InstructBlipQFormerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
213_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipprocessor
|
.md
|
Constructs an InstructBLIP processor which wraps a BLIP image processor and a LLaMa/T5 tokenizer into a single
processor.
[`InstructBlipProcessor`] offers all the functionalities of [`BlipImageProcessor`] and [`AutoTokenizer`]. See the
docstring of [`~BlipProcessor.__call__`] and [`~BlipProcessor.decode`] for more information.
Args:
image_processor (`BlipImageProcessor`):
An instance of [`BlipImageProcessor`]. The image processor is a required input.
tokenizer (`AutoTokenizer`):
|
213_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipprocessor
|
.md
|
An instance of [`BlipImageProcessor`]. The image processor is a required input.
tokenizer (`AutoTokenizer`):
An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input.
qformer_tokenizer (`AutoTokenizer`):
An instance of [`PreTrainedTokenizer`]. The Q-Former tokenizer is a required input.
num_query_tokens (`int`, *optional*):
Number of tokens used by the Q-Former as queries; should be the same as in the model's config.
|
213_6_1
|
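A short sketch of what the processor returns, assuming an InstructBLIP checkpoint on the Hub (checkpoint and image URL below are illustrative); because the instruction is also fed to the Q-Former, the outputs contain a second set of token ids:
```python
import requests
from PIL import Image
from transformers import InstructBlipProcessor

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="What is unusual about this image?", return_tensors="pt")

# Expected keys: pixel_values for the vision encoder, input_ids/attention_mask for the
# language model, and qformer_input_ids/qformer_attention_mask for the Q-Former.
print(sorted(inputs.keys()))
```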
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipvisionmodel
|
.md
|
No docstring available for InstructBlipVisionModel
Methods: forward
|
213_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipqformermodel
|
.md
|
Querying Transformer (Q-Former), used in InstructBLIP. Slightly modified from BLIP-2 as it also takes the
instruction as input.
Methods: forward
|
213_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipforconditionalgeneration
|
.md
|
InstructBLIP Model for generating text given an image and an optional text prompt. The model consists of a vision
encoder, Querying Transformer (Q-Former) and a language model.
One can optionally pass `input_ids` to the model, which serve as a text prompt, to make the language model continue
the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token.
|
213_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipforconditionalgeneration
|
.md
|
the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
213_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`InstructBlipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
213_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/instructblip.md
|
https://huggingface.co/docs/transformers/en/model_doc/instructblip/#instructblipforconditionalgeneration
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- generate
|
213_9_3
|
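A minimal end-to-end generation sketch for [`InstructBlipForConditionalGeneration`]; the checkpoint, image URL, and prompt below are illustrative choices:
```python
import requests
import torch
from PIL import Image
from transformers import InstructBlipForConditionalGeneration, InstructBlipProcessor

repo = "Salesforce/instructblip-vicuna-7b"  # illustrative checkpoint
processor = InstructBlipProcessor.from_pretrained(repo)
model = InstructBlipForConditionalGeneration.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "What is unusual about this image?"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())
```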
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
214_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
214_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#overview
|
.md
|
Qwen2-Audio is the new series of large audio-language models from the Qwen team. Qwen2-Audio is capable of accepting various audio signal inputs and performing audio analysis or direct textual responses with regard to speech instructions. We introduce two distinct audio interaction modes:
* voice chat: users can freely engage in voice interactions with Qwen2-Audio without text input
* audio analysis: users could provide audio and text instructions for analysis during the interaction
|
214_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#overview
|
.md
|
* audio analysis: users could provide audio and text instructions for analysis during the interaction
It was proposed in [Qwen2-Audio Technical Report](https://arxiv.org/abs/2407.10759) by Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, Chang Zhou, Jingren Zhou.
The abstract from the paper is the following:
|
214_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#overview
|
.md
|
*We introduce the latest progress of Qwen-Audio, a large-scale audio-language model called Qwen2-Audio, which is capable of accepting various audio signal inputs and performing audio analysis or direct textual responses with regard to speech instructions. In contrast to complex hierarchical tags, we have simplified the pre-training process by utilizing natural language prompts for different data and tasks, and have further expanded the data volume. We have boosted the instruction-following capability of
|
214_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#overview
|
.md
|
different data and tasks, and have further expanded the data volume. We have boosted the instruction-following capability of Qwen2-Audio and implemented two distinct audio interaction modes for voice chat and audio analysis. In the voice chat mode, users can freely engage in voice interactions with Qwen2-Audio without text input. In the audio analysis mode, users could provide audio and text instructions for analysis during the interaction. Note that we do not use any system prompts to switch between voice
|
214_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#overview
|
.md
|
and text instructions for analysis during the interaction. Note that we do not use any system prompts to switch between voice chat and audio analysis modes. Qwen2-Audio is capable of intelligently comprehending the content within audio and following voice commands to respond appropriately. For instance, in an audio segment that simultaneously contains sounds, multi-speaker conversations, and a voice command, Qwen2-Audio can directly understand the command and provide an interpretation and response to the
|
214_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#overview
|
.md
|
and a voice command, Qwen2-Audio can directly understand the command and provide an interpretation and response to the audio. Additionally, DPO has optimized the model's performance in terms of factuality and adherence to desired behavior. According to the evaluation results from AIR-Bench, Qwen2-Audio outperformed previous SOTAs, such as Gemini-1.5-pro, in tests focused on audio-centric instruction-following capabilities. Qwen2-Audio is open-sourced with the aim of fostering the advancement of the
|
214_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#overview
|
.md
|
audio-centric instruction-following capabilities. Qwen2-Audio is open-sourced with the aim of fostering the advancement of the multi-modal language community. *
|
214_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#usage-tips
|
.md
|
`Qwen2-Audio-7B` and `Qwen2-Audio-7B-Instruct` can be found on the [Huggingface Hub](https://huggingface.co/Qwen)
|
214_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#inference
|
.md
|
```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B", trust_remote_code=True, device_map="auto")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B", trust_remote_code=True)
|
214_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#inference
|
.md
|
prompt = "<|audio_bos|><|AUDIO|><|audio_eos|>Generate the caption in English:"
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Audio/glass-breaking-151256.mp3"
audio, sr = librosa.load(BytesIO(urlopen(url).read()), sr=processor.feature_extractor.sampling_rate)
inputs = processor(text=prompt, audios=audio, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
|
214_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#inference
|
.md
|
generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# We can also omit the audio_bos and audio_eos tokens
prompt = "<|AUDIO|>Generate the caption in English:"
inputs = processor(text=prompt, audios=audio, return_tensors="pt").to(model.device)
|
214_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#inference
|
.md
|
generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
In the following, we demonstrate how to use `Qwen2-Audio-7B-Instruct` for inference, supporting both voice chat and audio analysis modes. Note that we use the ChatML format for dialogue; in this demo we show how to leverage `apply_chat_template` for this purpose.
|
214_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#voice-chat-inference
|
.md
|
In the voice chat mode, users can freely engage in voice interactions with Qwen2-Audio without text input:
```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")
|
214_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#voice-chat-inference
|
.md
|
conversation = [
{"role": "user", "content": [
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/guess_age_gender.wav"},
]},
{"role": "assistant", "content": "Yes, the speaker is female and in her twenties."},
{"role": "user", "content": [
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/translate_to_chinese.wav"},
]},
]
|
214_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#voice-chat-inference
|
.md
|
]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios = []
for message in conversation:
    if isinstance(message["content"], list):
        for ele in message["content"]:
            if ele["type"] == "audio":
                audios.append(librosa.load(
                    BytesIO(urlopen(ele['audio_url']).read()),
                    sr=processor.feature_extractor.sampling_rate)[0]
                )
|
214_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#voice-chat-inference
|
.md
|
inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
inputs.input_ids = inputs.input_ids.to("cuda")
generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
|
214_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#audio-analysis-inference
|
.md
|
In the audio analysis mode, users can provide both audio and text instructions for analysis:
```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")
|
214_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#audio-analysis-inference
|
.md
|
conversation = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{"role": "user", "content": [
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"},
{"type": "text", "text": "What's that sound?"},
]},
{"role": "assistant", "content": "It is the sound of glass shattering."},
{"role": "user", "content": [
{"type": "text", "text": "What can you do when you hear that?"},
]},
|
214_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#audio-analysis-inference
|
.md
|
{"role": "user", "content": [
{"type": "text", "text": "What can you do when you hear that?"},
]},
{"role": "assistant", "content": "Stay alert and cautious, and check if anyone is hurt or if there is any damage to property."},
{"role": "user", "content": [
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"},
{"type": "text", "text": "What does the person say?"},
]},
]
|
214_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#audio-analysis-inference
|
.md
|
{"type": "text", "text": "What does the person say?"},
]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios = []
for message in conversation:
    if isinstance(message["content"], list):
        for ele in message["content"]:
            if ele["type"] == "audio":
                audios.append(
                    librosa.load(
                        BytesIO(urlopen(ele['audio_url']).read()),
                        sr=processor.feature_extractor.sampling_rate)[0]
                )
|
214_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#audio-analysis-inference
|
.md
|
inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
inputs.input_ids = inputs.input_ids.to("cuda")
generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
|
214_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#batch-inference
|
.md
|
We also support batch inference:
```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")
|
214_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#batch-inference
|
.md
|
conversation1 = [
{"role": "user", "content": [
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"},
{"type": "text", "text": "What's that sound?"},
]},
{"role": "assistant", "content": "It is the sound of glass shattering."},
{"role": "user", "content": [
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/f2641_0_throatclearing.wav"},
{"type": "text", "text": "What can you hear?"},
]}
|
214_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#batch-inference
|
.md
|
{"type": "text", "text": "What can you hear?"},
]}
]
|
214_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#batch-inference
|
.md
|
conversation2 = [
{"role": "user", "content": [
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"},
{"type": "text", "text": "What does the person say?"},
]},
]
conversations = [conversation1, conversation2]
text = [processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False) for conversation in conversations]
|
214_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#batch-inference
|
.md
|
audios = []
for conversation in conversations:
    for message in conversation:
        if isinstance(message["content"], list):
            for ele in message["content"]:
                if ele["type"] == "audio":
                    audios.append(
                        librosa.load(
                            BytesIO(urlopen(ele['audio_url']).read()),
                            sr=processor.feature_extractor.sampling_rate)[0]
                    )
inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
inputs.input_ids = inputs.input_ids.to("cuda")
|
214_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#batch-inference
|
.md
|
generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
```
|
214_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
This is the configuration class to store the configuration of a [`Qwen2AudioForConditionalGeneration`]. It is used to instantiate a
Qwen2-Audio model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Qwen2-Audio.
e.g. [Qwen/Qwen2-Audio-7B](https://huggingface.co/Qwen/Qwen2-Audio-7B)
|
214_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
e.g. [Qwen/Qwen2-Audio-7B](https://huggingface.co/Qwen/Qwen2-Audio-7B)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
audio_config (`Union[AutoConfig, dict]`, *optional*, defaults to `Qwen2AudioEncoderConfig`):
The config object or dictionary of the audio backbone.
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `Qwen2Config`):
|
214_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `Qwen2Config`):
The config object or dictionary of the text backbone.
audio_token_index (`int`, *optional*, defaults to 151646):
The audio token index used to encode the audio prompt.
Example:
```python
>>> from transformers import Qwen2AudioForConditionalGeneration, Qwen2AudioConfig, Qwen2AudioEncoderConfig, Qwen2Config
|
214_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
>>> # Initializing a Qwen2AudioEncoder config
>>> audio_config = Qwen2AudioEncoderConfig()
>>> # Initializing a Qwen2 config
>>> text_config = Qwen2Config()
>>> # Initializing a Qwen2Audio configuration
>>> configuration = Qwen2AudioConfig(audio_config, text_config)
>>> # Initializing a model from the qwen2-audio style configuration
>>> model = Qwen2AudioForConditionalGeneration(configuration)
|
214_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
>>> # Accessing the model configuration
>>> configuration = model.config
```
This is the configuration class to store the configuration of a [`Qwen2AudioEncoder`]. It is used to instantiate a
Qwen2-Audio audio encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the audio encoder of the Qwen2-Audio
architecture.
e.g. [Qwen/Qwen2-Audio-7B](https://huggingface.co/Qwen/Qwen2-Audio-7B)
|
214_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
architecture.
e.g. [Qwen/Qwen2-Audio-7B](https://huggingface.co/Qwen/Qwen2-Audio-7B)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_mel_bins (`int`, *optional*, defaults to 128):
Number of mel features used per input feature. Should correspond to the value used in the
`Qwen2AudioProcessor` class.
encoder_layers (`int`, *optional*, defaults to 32):
|
214_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
`Qwen2AudioProcessor` class.
encoder_layers (`int`, *optional*, defaults to 32):
Number of encoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 20):
Number of attention heads for each attention layer in the Transformer encoder.
encoder_ffn_dim (`int`, *optional*, defaults to 5120):
Dimensionality of the "intermediate" (often named feed-forward) layer in the encoder.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
|
214_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
d_model (`int`, *optional*, defaults to 1280):
Dimensionality of the layers.
dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
214_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_function (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
|
214_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
max_source_positions (`int`, *optional*, defaults to 1500):
|
214_7_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
max_source_positions (`int`, *optional*, defaults to 1500):
The maximum sequence length of log-mel filter-bank features that this model might ever be used with.
Example:
```python
>>> from transformers import Qwen2AudioEncoderConfig, Qwen2AudioEncoder
|
214_7_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioconfig
|
.md
|
>>> # Initializing a Qwen2AudioEncoderConfig
>>> configuration = Qwen2AudioEncoderConfig()
>>> # Initializing a Qwen2AudioEncoder (with random weights)
>>> model = Qwen2AudioEncoder(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
214_7_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioprocessor
|
.md
|
Constructs a Qwen2Audio processor which wraps a Qwen2Audio feature extractor and a Qwen2Audio tokenizer into a single processor.
[`Qwen2AudioProcessor`] offers all the functionalities of [`WhisperFeatureExtractor`] and [`Qwen2TokenizerFast`]. See the
[`~Qwen2AudioProcessor.__call__`] and [`~Qwen2AudioProcessor.decode`] for more information.
Args:
feature_extractor ([`WhisperFeatureExtractor`], *optional*):
The feature extractor is a required input.
tokenizer ([`Qwen2TokenizerFast`], *optional*):
|
214_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioprocessor
|
.md
|
The feature extractor is a required input.
tokenizer ([`Qwen2TokenizerFast`], *optional*):
The tokenizer is a required input.
chat_template (`Optional[str]`, *optional*):
The Jinja template to use for formatting the conversation. If not provided, the default chat template
is used.
audio_token (`str`, *optional*, defaults to `"<|AUDIO|>"`):
The token to use for audio tokens.
audio_bos_token (`str`, *optional*, defaults to `"<|audio_bos|>"`):
The token to use for audio bos tokens.
|
214_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioprocessor
|
.md
|
audio_bos_token (`str`, *optional*, defaults to `"<|audio_bos|>"`):
The token to use for audio bos tokens.
audio_eos_token (`str`, *optional*, defaults to `"<|audio_eos|>"`):
The token to use for audio eos tokens.
|
214_8_2
|
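A hedged sketch of assembling the processor from its two components by hand (in practice `Qwen2AudioProcessor.from_pretrained` loads everything at once); the 128 mel bins and 16 kHz sampling rate mirror the encoder config above, and the special tokens match the documented defaults:
```python
from transformers import Qwen2AudioProcessor, Qwen2TokenizerFast, WhisperFeatureExtractor

# Whisper-style feature extractor configured for 128 mel bins at 16 kHz, as used by Qwen2-Audio.
feature_extractor = WhisperFeatureExtractor(feature_size=128, sampling_rate=16000)
tokenizer = Qwen2TokenizerFast.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")

processor = Qwen2AudioProcessor(
    feature_extractor=feature_extractor,
    tokenizer=tokenizer,
    audio_token="<|AUDIO|>",
    audio_bos_token="<|audio_bos|>",
    audio_eos_token="<|audio_eos|>",
)

# Equivalent one-liner:
# processor = Qwen2AudioProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
```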
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioforconditionalgeneration
|
.md
|
The Qwen2-Audio model, which consists of an audio backbone and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
214_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_audio.md
|
https://huggingface.co/docs/transformers/en/model_doc/qwen2_audio/#qwen2audioforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Qwen2AudioConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
214_9_1
|