source: string (470 distinct values)
url: string (lengths 49–167)
file_type: string (1 distinct value)
chunk: string (lengths 1–512)
chunk_id: string (lengths 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
.md
`inputs_ids` passed when calling [`CTRLModel`] or [`TFCTRLModel`]. n_positions (`int`, *optional*, defaults to 256): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (`int`, *optional*, defaults to 1280): Dimensionality of the embeddings and hidden states. dff (`int`, *optional*, defaults to 8192): Inner dimensionality of the feed-forward networks (FFN).
326_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
.md
dff (`int`, *optional*, defaults to 8192): Inner dimensionality of the feed-forward networks (FFN). n_layer (`int`, *optional*, defaults to 48): Number of hidden layers in the Transformer encoder. n_head (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. resid_pdrop (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
326_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (`float`, *optional*, defaults to 0.1): The dropout ratio for the embeddings. layer_norm_epsilon (`float`, *optional*, defaults to 1e-06): The epsilon to use in the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (`bool`, *optional*, defaults to `True`):
326_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
.md
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Examples: ```python >>> from transformers import CTRLConfig, CTRLModel
326_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
.md
>>> # Initializing a CTRL configuration >>> configuration = CTRLConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = CTRLModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
326_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrltokenizer
.md
Construct a CTRL tokenizer. Based on Byte-Pair-Encoding. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
326_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrltokenizer
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Methods: save_vocabulary <frameworkcontent> <pt>
326_6_1
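A minimal usage sketch for the tokenizer described above, assuming the publicly hosted `Salesforce/ctrl` checkpoint (any compatible vocab/merges pair would work the same way):

```python
>>> from transformers import CTRLTokenizer

>>> # Assumption: the Salesforce/ctrl checkpoint provides the BPE vocab and merges files
>>> tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")

>>> # "Links" is one of CTRL's control codes; the tokenizer handles it like any other token
>>> encoding = tokenizer("Links Hello, my dog is cute", return_tensors="pt")
>>> decoded = tokenizer.decode(encoding["input_ids"][0])
```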
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlmodel
.md
The bare CTRL Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
326_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`CTRLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
326_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
326_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrllmheadmodel
.md
The CTRL Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
326_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrllmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`CTRLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
326_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrllmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
326_8_2
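To make the role of the tied language-modeling head concrete, here is a short generation sketch; it assumes the `Salesforce/ctrl` checkpoint and a prompt that starts with the "Links" control code:

```python
>>> from transformers import CTRLLMHeadModel, CTRLTokenizer

>>> tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
>>> model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

>>> inputs = tokenizer("Links In a shocking finding,", return_tensors="pt")
>>> # Greedy decoding through the LM head, whose weights are tied to the input embeddings
>>> output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```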
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlforsequenceclassification
.md
The CTRL Model transformer with a sequence classification head on top (linear layer). [`CTRLForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in
326_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlforsequenceclassification
.md
token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
326_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters:
326_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlforsequenceclassification
.md
and behavior. Parameters: config ([`CTRLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
326_9_3
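A minimal sketch of the last-token behaviour described above, assuming the `Salesforce/ctrl` backbone; the classification head itself is randomly initialized here, so the logits are only illustrative:

```python
>>> import torch
>>> from transformers import CTRLForSequenceClassification, CTRLTokenizer

>>> tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
>>> model = CTRLForSequenceClassification.from_pretrained("Salesforce/ctrl", num_labels=2)

>>> # No pad_token_id is set here, so the logits are read from the hidden state of the last token in each row
>>> inputs = tokenizer("Opinion My dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape: (batch_size, num_labels)
```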
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#tfctrlmodel
.md
No docstring available for TFCTRLModel Methods: call
326_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#tfctrllmheadmodel
.md
No docstring available for TFCTRLLMHeadModel Methods: call
326_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#tfctrlforsequenceclassification
.md
No docstring available for TFCTRLForSequenceClassification Methods: call </tf> </frameworkcontent>
326_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
327_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
327_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#overview
.md
The GIT model was proposed in [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. GIT is a decoder-only Transformer that leverages [CLIP](clip)'s vision encoder to condition the model on vision inputs besides text. The model obtains state-of-the-art results on image captioning and visual question answering benchmarks.
327_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#overview
.md
image captioning and visual question answering benchmarks. The abstract from the paper is the following:
327_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#overview
.md
*In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture as one
327_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#overview
.md
modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture as one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost the model performance. Without bells and whistles, our GIT establishes new state of the arts on 12 challenging benchmarks with a large margin. For instance, our model surpasses the human performance for the first time on TextCaps (138.2 vs.
327_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#overview
.md
with a large margin. For instance, our model surpasses the human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks.*
327_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg" alt="drawing" width="600"/> <small> GIT architecture. Taken from the <a href="https://arxiv.org/abs/2205.14100" target="_blank">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/GenerativeImage2Text).
327_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#usage-tips
.md
- GIT is implemented in a very similar way to GPT-2, the only difference being that the model is also conditioned on `pixel_values`.
327_2_0
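Because generation works just as in GPT-2 once `pixel_values` are provided, image captioning reduces to a single `generate` call. A minimal sketch, assuming the `microsoft/git-base` checkpoint and an arbitrary test image URL:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, GitForCausalLM

>>> processor = AutoProcessor.from_pretrained("microsoft/git-base")
>>> model = GitForCausalLM.from_pretrained("microsoft/git-base")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # The image is turned into pixel_values, which condition the GPT-2-style decoder
>>> pixel_values = processor(images=image, return_tensors="pt").pixel_values
>>> generated_ids = model.generate(pixel_values=pixel_values, max_new_tokens=20)
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```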
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GIT. - Demo notebooks regarding inference + fine-tuning GIT on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/GIT). - See also: [Causal language modeling task guide](../tasks/language_modeling) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
327_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#resources
.md
The resource should ideally demonstrate something new instead of duplicating an existing resource.
327_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionconfig
.md
This is the configuration class to store the configuration of a [`GitVisionModel`]. It is used to instantiate a GIT vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the vision encoder of the GIT [microsoft/git-base](https://huggingface.co/microsoft/git-base) architecture.
327_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionconfig
.md
[microsoft/git-base](https://huggingface.co/microsoft/git-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072):
327_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionconfig
.md
Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. image_size (`int`, *optional*, defaults to 224):
327_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionconfig
.md
Number of attention heads for each attention layer in the Transformer encoder. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
327_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionconfig
.md
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python
327_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers import GitVisionConfig, GitVisionModel
327_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionconfig
.md
>>> # Initializing a GitVisionConfig with microsoft/git-base style configuration >>> configuration = GitVisionConfig() >>> # Initializing a GitVisionModel (with random weights) from the microsoft/git-base style configuration >>> model = GitVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
327_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionmodel
.md
The vision model from CLIP, used in GIT, without any head or projection on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
327_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`GitConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
327_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitvisionmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
327_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
This is the configuration class to store the configuration of a [`GitModel`]. It is used to instantiate a GIT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GIT [microsoft/git-base](https://huggingface.co/microsoft/git-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
327_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`GitVisionConfig`]. vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the GIT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`GitModel`].
327_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
`inputs_ids` passed when calling [`GitModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072):
327_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
327_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
`"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large
327_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
327_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
327_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). num_image_with_embedding (`int`, *optional*):
327_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
num_image_with_embedding (`int`, *optional*): The number of temporal embeddings to add, in case the model is used for video captioning/VQA. Examples: ```python >>> from transformers import GitConfig, GitModel
327_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitconfig
.md
>>> # Initializing a GIT microsoft/git-base style configuration >>> configuration = GitConfig() >>> # Initializing a model (with random weights) from the microsoft/git-base style configuration >>> model = GitModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` Methods: all
327_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitprocessor
.md
Constructs a GIT processor which wraps a CLIP image processor and a BERT tokenizer into a single processor. [`GitProcessor`] offers all the functionalities of [`CLIPImageProcessor`] and [`BertTokenizerFast`]. See the [`~GitProcessor.__call__`] and [`~GitProcessor.decode`] for more information. Args: image_processor ([`AutoImageProcessor`]): The image processor is a required input. tokenizer ([`AutoTokenizer`]): The tokenizer is a required input. Methods: __call__
327_7_0
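A short sketch of the combined processor, assuming the `microsoft/git-base` checkpoint and a sample COCO image URL; a single call returns the tokenized text together with the pixel values:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import GitProcessor

>>> processor = GitProcessor.from_pretrained("microsoft/git-base")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # One BatchEncoding holding input_ids and attention_mask (from the tokenizer) plus pixel_values (from the image processor)
>>> inputs = processor(images=image, text="a photo of", return_tensors="pt")
```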
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitmodel
.md
The bare GIT Model transformer consisting of a CLIP image encoder and text decoder outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
327_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`GitConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
327_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
327_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitforcausallm
.md
GIT Model with a `language modeling` head on top for autoregressive language modeling. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
327_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`GitConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
327_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/git.md
https://huggingface.co/docs/transformers/en/model_doc/git/#gitforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
327_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
328_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
328_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
The CLAP model was proposed in [Large Scale Contrastive Language-Audio pretraining with feature fusion and keyword-to-caption augmentation](https://arxiv.org/pdf/2211.06687.pdf) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
328_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed to predict the most relevant text snippet, given an audio input, without directly optimizing for the task. The CLAP model uses a Swin Transformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio
328_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similarity score.
328_1_2
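The projection-and-dot-product scheme described above can be exercised end to end with [`ClapModel`]. A minimal sketch, assuming the `laion/clap-htsat-fused` checkpoint and a 48 kHz mono waveform `audio_array` (an assumed 1-D numpy array of floats loaded elsewhere):

```python
>>> import torch
>>> from transformers import ClapModel, ClapProcessor

>>> model = ClapModel.from_pretrained("laion/clap-htsat-fused")
>>> processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

>>> texts = ["the sound of a dog barking", "the sound of rain"]
>>> # audio_array is an assumed 48 kHz mono waveform; the processor builds the log-Mel input features
>>> inputs = processor(text=texts, audios=audio_array, sampling_rate=48000, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> probs = outputs.logits_per_audio.softmax(dim=-1)  # scaled dot products between projected audio and text features
```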
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
The abstract from the paper is the following:
328_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
*Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different
328_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and
328_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-6*
328_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#overview
.md
This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/LAION-AI/Clap).
328_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapconfig
.md
[`ClapConfig`] is the configuration class to store the configuration of a [`ClapModel`]. It is used to instantiate a CLAP model according to the specified arguments, defining the text model and audio model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLAP [laion/clap-htsat-fused](https://huggingface.co/laion/clap-htsat-fused) architecture.
328_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapconfig
.md
[laion/clap-htsat-fused](https://huggingface.co/laion/clap-htsat-fused) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: text_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`ClapTextConfig`]. audio_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`ClapAudioConfig`].
328_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapconfig
.md
audio_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`ClapAudioConfig`]. logit_scale_init_value (`float`, *optional*, defaults to 14.29): The initial value of the *logit_scale* parameter. Default is used as per the original CLAP implementation. projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and audio projection layers. projection_hidden_act (`str`, *optional*, defaults to `"relu"`): Activation function for the projection layers.
328_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapconfig
.md
projection_hidden_act (`str`, *optional*, defaults to `"relu"`): Activation function for the projection layers. initializer_factor (`float`, *optional*, defaults to 1.0): Factor to scale the initialization of the model weights. kwargs (*optional*): Dictionary of keyword arguments. Example: ```python >>> from transformers import ClapConfig, ClapModel
328_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapconfig
.md
>>> # Initializing a ClapConfig with laion-ai/base style configuration >>> configuration = ClapConfig() >>> # Initializing a ClapModel (with random weights) from the laion-ai/base style configuration >>> model = ClapModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> # We can also initialize a ClapConfig from a ClapTextConfig and a ClapAudioConfig >>> from transformers import ClapTextConfig, ClapAudioConfig
328_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapconfig
.md
>>> # Initializing a ClapText and ClapAudioConfig configuration >>> config_text = ClapTextConfig() >>> config_audio = ClapAudioConfig() >>> config = ClapConfig.from_text_audio_configs(config_text, config_audio) ``` Methods: from_text_audio_configs
328_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
This is the configuration class to store the configuration of a [`ClapTextModel`]. It is used to instantiate a CLAP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLAP [clap-htsat-fused](https://huggingface.co/laion/clap-htsat-fused) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
328_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the CLAP model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`ClapTextModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer.
328_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
328_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"relu"`, `"relu"`, `"silu"` and `"relu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
328_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2):
328_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`ClapTextModel`]. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
328_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
328_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. projection_hidden_act (`str`, *optional*, defaults to `"relu"`):
328_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
relevant if `config.is_decoder=True`. projection_hidden_act (`str`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the projection layer. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. projection_dim (`int`, *optional*, defaults to 512): Dimension of the projection head of the `ClapTextModelWithProjection`. Examples: ```python >>> from transformers import ClapTextConfig, ClapTextModel
328_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextconfig
.md
>>> # Initializing a CLAP text configuration >>> configuration = ClapTextConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = ClapTextModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
328_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
This is the configuration class to store the configuration of a [`ClapAudioModel`]. It is used to instantiate a CLAP audio encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the audio encoder of the CLAP [laion/clap-htsat-fused](https://huggingface.co/laion/clap-htsat-fused) architecture.
328_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
[laion/clap-htsat-fused](https://huggingface.co/laion/clap-htsat-fused) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: window_size (`int`, *optional*, defaults to 8): Image size of the spectrogram. num_mel_bins (`int`, *optional*, defaults to 64): Number of mel features used per frame. Should correspond to the value used in the `ClapProcessor` class.
328_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
Number of mel features used per frame. Should correspond to the value used in the `ClapProcessor` class. spec_size (`int`, *optional*, defaults to 256): Desired input size of the spectrogram that the model supports. It can be different from the output of the `ClapFeatureExtractor`, in which case the input features will be resized. Corresponds to the `image_size` of the audio models. hidden_act (`str`, *optional*, defaults to `"gelu"`):
328_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
of the audio models. hidden_act (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. patch_size (`int`, *optional*, defaults to 4): Patch size for the audio spectrogram patch_stride (`list`, *optional*, defaults to `[4, 4]`): Patch stride for the audio spectrogram num_classes (`int`, *optional*, defaults to 527): Number of classes used for the head training
328_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
num_classes (`int`, *optional*, defaults to 527): Number of classes used for the head training. hidden_size (`int`, *optional*, defaults to 768): Hidden size of the output of the audio encoder. Corresponds to the dimension of the penultimate layer's output, which is sent to the projection MLP layer. projection_dim (`int`, *optional*, defaults to 512): Hidden size of the projection layer. depths (`list`, *optional*, defaults to `[2, 2, 6, 2]`): Depths used for the Swin Layers of the audio model
328_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
depths (`list`, *optional*, defaults to `[2, 2, 6, 2]`): Depths used for the Swin Layers of the audio model num_attention_heads (`list`, *optional*, defaults to `[4, 8, 16, 32]`): Number of attention heads used for the Swin Layers of the audio model enable_fusion (`bool`, *optional*, defaults to `False`): Whether or not to enable patch fusion. This is the main contribution of the authors, and should give the best results. hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
328_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
best results. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the encoder. fusion_type (`[type]`, *optional*): Fusion type used for the patch fusion. patch_embed_input_channels (`int`, *optional*, defaults to 1): Number of channels used for the input spectrogram flatten_patch_embeds (`bool`, *optional*, defaults to `True`): Whether or not to flatten the patch embeddings patch_embeds_hidden_size (`int`, *optional*, defaults to 96):
328_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
Whether or not to flatten the patch embeddings patch_embeds_hidden_size (`int`, *optional*, defaults to 96): Hidden size of the patch embeddings. It is used as the number of output channels. enable_patch_layer_norm (`bool`, *optional*, defaults to `True`): Whether or not to enable layer normalization for the patch embeddings drop_path_rate (`float`, *optional*, defaults to 0.0): Drop path rate for the patch fusion attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
328_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
Drop path rate for the patch fusion attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not to add a bias to the query, key, value projections. mlp_ratio (`float`, *optional*, defaults to 4.0): Ratio of the mlp hidden dim to embedding dim. aff_block_r (`int`, *optional*, defaults to 4): downsize_ratio used in the AudioFF block
328_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
aff_block_r (`int`, *optional*, defaults to 4): downsize_ratio used in the AudioFF block num_hidden_layers (`int`, *optional*, defaults to 4): Number of hidden layers in the Transformer encoder. projection_hidden_act (`str`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the projection layer. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. layer_norm_eps (`[type]`, *optional*, defaults to 1e-05):
328_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
`"relu"`, `"silu"` and `"gelu_new"` are supported. layer_norm_eps (`[type]`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). Example: ```python >>> from transformers import ClapAudioConfig, ClapAudioModel
328_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudioconfig
.md
>>> # Initializing a ClapAudioConfig with laion/clap-htsat-fused style configuration >>> configuration = ClapAudioConfig() >>> # Initializing a ClapAudioModel (with random weights) from the laion/clap-htsat-fused style configuration >>> model = ClapAudioModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
328_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
.md
Constructs a CLAP feature extractor. This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. This class extracts mel-filter bank features from raw speech using a custom numpy implementation of the *Short Time Fourier Transform* (STFT), which should match PyTorch's `torch.stft`. Args:
328_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
.md
Fourier Transform* (STFT), which should match PyTorch's `torch.stft`. Args: feature_size (`int`, *optional*, defaults to 64): The feature dimension of the extracted Mel spectrograms. This corresponds to the number of mel filters (`n_mels`). sampling_rate (`int`, *optional*, defaults to 48000): The sampling rate at which the audio files should be digitized, expressed in hertz (Hz). This only serves to warn users if the audio fed to the feature extractor does not have the same sampling rate.
328_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
.md
to warn users if the audio fed to the feature extractor does not have the same sampling rate. hop_length (`int`, *optional*, defaults to 480): Length of the overlapping windows for the STFT used to obtain the Mel Spectrogram. The audio will be split into smaller `frames` with a step of `hop_length` between each frame. max_length_s (`int`, *optional*, defaults to 10): The maximum input length of the model in seconds. This is used to pad the audio. fft_window_size (`int`, *optional*, defaults to 1024):
328_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
.md
fft_window_size (`int`, *optional*, defaults to 1024): Size of the window (in samples) on which the Fourier transform is applied. This controls the frequency resolution of the spectrogram. For example, 400 means that the Fourier transform is computed on windows of 400 samples. padding_value (`float`, *optional*, defaults to 0.0): Padding value used to pad the audio. Should correspond to silences. return_attention_mask (`bool`, *optional*, defaults to `False`):
328_5_3
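A minimal sketch of the feature extractor on its own, assuming the `laion/clap-htsat-fused` checkpoint; one second of silence stands in for a real recording:

```python
>>> import numpy as np
>>> from transformers import ClapFeatureExtractor

>>> feature_extractor = ClapFeatureExtractor.from_pretrained("laion/clap-htsat-fused")

>>> audio = np.zeros(48_000, dtype=np.float32)  # 1 s of silence at the expected 48 kHz sampling rate
>>> features = feature_extractor(audio, sampling_rate=48000, return_tensors="pt")
>>> mel_features = features["input_features"]  # batch of log-Mel spectrogram features
```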