source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
|
.md
|
return_attention_mask (`bool`, *optional*, defaults to `False`):
Whether or not the model should return the attention masks corresponding to the input.
frequency_min (`float`, *optional*, defaults to 0):
The lowest frequency of interest. The STFT will not be computed for values below this.
frequency_max (`float`, *optional*, defaults to 14000):
The highest frequency of interest. The STFT will not be computed for values above this.
top_db (`float`, *optional*):
|
328_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
|
.md
|
The highest frequency of interest. The STFT will not be computed for values above this.
top_db (`float`, *optional*):
The highest decibel value used to convert the mel spectrogram to the log scale. For more details see the
`audio_utils.power_to_db` function.
truncation (`str`, *optional*, defaults to `"fusion"`):
Truncation pattern for long audio inputs. Two patterns are available:
- `fusion` will use `_random_mel_fusion`, which stacks 3 random crops from the mel spectrogram and a
|
328_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
|
.md
|
- `fusion` will use `_random_mel_fusion`, which stacks 3 random crops from the mel spectrogram and a
downsampled version of the entire mel spectrogram.
If `config.fusion` is set to `True`, shorter audios also need to return 4 mels, which will just be a copy
of the original mel obtained from the padded audio.
- `rand_trunc` will select a random crop of the mel spectrogram.
padding (`str`, *optional*, defaults to `"repeatpad"`):
|
328_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapfeatureextractor
|
.md
|
- `rand_trunc` will select a random crop of the mel spectrogram.
padding (`str`, *optional*, defaults to `"repeatpad"`):
Padding pattern for shorter audio inputs. Three patterns were originally implemented:
- `repeatpad`: the audio is repeated, and then padded to fit the `max_length`.
- `repeat`: the audio is repeated and then cut to fit the `max_length`.
- `pad`: the audio is padded.
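A minimal sketch of how these truncation and padding options are passed at call time (the checkpoint name and dummy audio are illustrative assumptions, not taken from this page):
```python
>>> import numpy as np
>>> from transformers import ClapFeatureExtractor

>>> # assumed CLAP checkpoint, used here only for illustration
>>> feature_extractor = ClapFeatureExtractor.from_pretrained("laion/clap-htsat-unfused")

>>> # one second of dummy audio at the extractor's sampling rate
>>> audio = np.zeros(feature_extractor.sampling_rate, dtype=np.float32)

>>> # "fusion" handles long inputs, "repeatpad" pads short ones
>>> inputs = feature_extractor(
...     audio, sampling_rate=feature_extractor.sampling_rate, truncation="fusion", padding="repeatpad", return_tensors="pt"
... )
```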
|
328_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapprocessor
|
.md
|
Constructs a CLAP processor which wraps a CLAP feature extractor and a RoBERTa tokenizer into a single processor.
[`ClapProcessor`] offers all the functionalities of [`ClapFeatureExtractor`] and [`RobertaTokenizerFast`]. See
[`~ClapProcessor.__call__`] and [`~ClapProcessor.decode`] for more information.
Args:
feature_extractor ([`ClapFeatureExtractor`]):
The audio processor is a required input.
tokenizer ([`RobertaTokenizerFast`]):
The tokenizer is a required input.
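A short usage sketch, assuming the same illustrative checkpoint as above:
```python
>>> import numpy as np
>>> from transformers import ClapProcessor

>>> processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")  # assumed checkpoint

>>> # text goes to the RoBERTa tokenizer, audio to the CLAP feature extractor
>>> audio = np.zeros(48_000, dtype=np.float32)  # one second of dummy audio at 48 kHz
>>> inputs = processor(text=["the sound of a cat"], audios=audio, sampling_rate=48_000, return_tensors="pt")
```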
|
328_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapmodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
328_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapmodel
|
.md
|
and behavior.
Parameters:
config ([`ClapConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_audio_features
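A hedged sketch of the two feature methods listed above (checkpoint name and dummy audio are assumptions for illustration):
```python
>>> import numpy as np
>>> import torch
>>> from transformers import ClapModel, ClapProcessor

>>> model = ClapModel.from_pretrained("laion/clap-htsat-unfused")  # assumed checkpoint
>>> processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

>>> text_inputs = processor(text=["the sound of a dog barking"], return_tensors="pt")
>>> audio_inputs = processor(audios=np.zeros(48_000, dtype=np.float32), sampling_rate=48_000, return_tensors="pt")
>>> with torch.no_grad():
...     text_embeds = model.get_text_features(**text_inputs)
...     audio_embeds = model.get_audio_features(**audio_inputs)
```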
|
328_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextmodel
|
.md
|
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention Is
All You Need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
|
328_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextmodel
|
.md
|
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
`add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward
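A minimal sketch of the decoder setup described above, assuming a freshly initialized model rather than a released checkpoint:
```python
>>> from transformers import ClapTextConfig, ClapTextModel

>>> # decoder mode: is_decoder=True; for Seq2Seq use, also enable cross-attention
>>> config = ClapTextConfig(is_decoder=True, add_cross_attention=True)
>>> model = ClapTextModel(config)
```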
|
328_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextmodelwithprojection
|
.md
|
CLAP Text Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
328_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextmodelwithprojection
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ClapConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
328_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#claptextmodelwithprojection
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
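A short sketch of the forward pass (the checkpoint name is an assumption for illustration):
```python
>>> from transformers import AutoTokenizer, ClapTextModelWithProjection

>>> model = ClapTextModelWithProjection.from_pretrained("laion/clap-htsat-unfused")  # assumed checkpoint
>>> tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused")

>>> inputs = tokenizer(["a sound of a cat"], padding=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> text_embeds = outputs.text_embeds  # pooled output passed through the projection layer
```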
|
328_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudiomodel
|
.md
|
No docstring available for ClapAudioModel
Methods: forward
|
328_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudiomodelwithprojection
|
.md
|
CLAP Audio Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
328_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudiomodelwithprojection
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ClapConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
328_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clap.md
|
https://huggingface.co/docs/transformers/en/model_doc/clap/#clapaudiomodelwithprojection
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
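A matching sketch for the audio side (checkpoint name and dummy audio are assumptions for illustration):
```python
>>> import numpy as np
>>> from transformers import ClapAudioModelWithProjection, ClapFeatureExtractor

>>> model = ClapAudioModelWithProjection.from_pretrained("laion/clap-htsat-unfused")  # assumed checkpoint
>>> feature_extractor = ClapFeatureExtractor.from_pretrained("laion/clap-htsat-unfused")

>>> audio = np.zeros(48_000, dtype=np.float32)  # one second of dummy audio
>>> inputs = feature_extractor(audio, sampling_rate=48_000, return_tensors="pt")
>>> outputs = model(**inputs)
>>> audio_embeds = outputs.audio_embeds  # pooled output passed through the projection layer
```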
|
328_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/
|
.md
|
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
329_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
329_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#overview
|
.md
|
The Moonshine model was proposed in [Moonshine: Speech Recognition for Live Transcription and Voice Commands
](https://arxiv.org/abs/2410.15608) by Nat Jeffries, Evan King, Manjunath Kudlur, Guy Nicholson, James Wang, Pete Warden.
The abstract from the paper is the following:
|
329_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#overview
|
.md
|
*This paper introduces Moonshine, a family of speech recognition models optimized for live transcription and voice command processing. Moonshine is based on an encoder-decoder transformer architecture and employs Rotary Position Embedding (RoPE) instead of traditional absolute position embeddings. The model is trained on speech segments of various lengths, but without using zero-padding, leading to greater efficiency for the encoder during inference time. When benchmarked against OpenAI's Whisper tiny-en,
|
329_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#overview
|
.md
|
leading to greater efficiency for the encoder during inference time. When benchmarked against OpenAI's Whisper tiny-en, Moonshine Tiny demonstrates a 5x reduction in compute requirements for transcribing a 10-second speech segment while incurring no increase in word error rates across standard evaluation datasets. These results highlight Moonshine's potential for real-time and resource-constrained applications.*
|
329_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#overview
|
.md
|
Tips:
- Moonshine improves upon Whisper's architecture:
1. It uses SwiGLU activation instead of GELU in the decoder layers.
2. Most importantly, it replaces absolute position embeddings with Rotary Position Embeddings (RoPE). This allows Moonshine to handle audio inputs of any length, unlike Whisper, which is restricted to fixed 30-second windows (see the sketch below).
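A minimal sketch of this in practice, assuming the `UsefulSensors/moonshine-tiny` checkpoint referenced later on this page and a placeholder local audio file:
```python
>>> from transformers import pipeline

>>> transcriber = pipeline("automatic-speech-recognition", model="UsefulSensors/moonshine-tiny")

>>> # the input is not restricted to 30-second windows
>>> result = transcriber("audio.wav")  # placeholder path to a local audio file
>>> print(result["text"])
```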
This model was contributed by [Eustache Le Bihan (eustlb)](https://huggingface.co/eustlb).
|
329_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#overview
|
.md
|
This model was contributed by [Eustache Le Bihan (eustlb)](https://huggingface.co/eustlb).
The original code can be found [here](https://github.com/usefulsensors/moonshine).
|
329_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#resources
|
.md
|
- [Automatic speech recognition task guide](../tasks/asr)
|
329_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
This is the configuration class to store the configuration of a [`MoonshineModel`]. It is used to instantiate a Moonshine
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Moonshine
[UsefulSensors/moonshine-tiny](https://huggingface.co/UsefulSensors/moonshine-tiny).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
329_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32768):
Vocabulary size of the Moonshine model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`MoonshineModel`].
hidden_size (`int`, *optional*, defaults to 288):
Dimension of the hidden representations.
|
329_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 288):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 1152):
Dimension of the MLP representations.
encoder_num_hidden_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer encoder.
decoder_num_hidden_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer decoder.
encoder_num_attention_heads (`int`, *optional*, defaults to 8):
|
329_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
Number of hidden layers in the Transformer decoder.
encoder_num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer decoder.
encoder_num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
|
329_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`encoder_num_key_value_heads=encoder_num_attention_heads`, the model will use Multi Head Attention (MHA); if
`encoder_num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
|
329_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
`encoder_num_attention_heads`.
decoder_num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`decoder_num_key_value_heads=decoder_num_attention_heads`, the model will use Multi Head Attention (MHA); if
|
329_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
`decoder_num_key_value_heads=decoder_num_attention_heads`, the model will use Multi Head Attention (MHA); if
`decoder_num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
|
329_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
`decoder_num_attention_heads`.
encoder_hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder.
decoder_hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 512):
|
329_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
decoder_start_token_id (`int`, *optional*, defaults to 1):
Corresponds to the "<|startoftranscript|>" token, which is automatically used when no `decoder_input_ids`
|
329_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
Corresponds to the "<|startoftranscript|>" token, which is automatically used when no `decoder_input_ids`
are provided to the `generate` function. It is used to guide the model's generation process depending on
the task.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
|
329_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and you expect the model to work on a longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
|
329_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
|
329_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to the value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
|
329_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
|
329_3_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2.
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
|
329_3_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2.
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE.
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE.
partial_rotary_factor (`float`, *optional*, defaults to 0.9):
|
329_3_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
partial_rotary_factor (`float`, *optional*, defaults to 0.9):
Percentage of the query and keys which will have rotary embedding.
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
Whether the model is used as an encoder/decoder or not.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
329_3_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the beginning-of-sequence token.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the end-of-sequence token.
Example:
```python
>>> from transformers import MoonshineModel, MoonshineConfig
|
329_3_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineconfig
|
.md
|
>>> # Initializing a Moonshine style configuration from a pretrained checkpoint
>>> configuration = MoonshineConfig.from_pretrained("UsefulSensors/moonshine-tiny")
>>> # Initializing a model from the configuration
>>> model = MoonshineModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
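As a complementary sketch, the GQA-related arguments documented above can be set explicitly (the values here are illustrative, not from a released checkpoint):
```python
>>> from transformers import MoonshineConfig, MoonshineModel

>>> # 8 decoder query heads sharing 4 key/value heads -> Grouped Query Attention
>>> gqa_config = MoonshineConfig(decoder_num_attention_heads=8, decoder_num_key_value_heads=4)
>>> model = MoonshineModel(gqa_config)
```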
|
329_3_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshinemodel
|
.md
|
The bare Moonshine Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
329_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshinemodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MoonshineConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
329_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshinemodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- _mask_input_features
|
329_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineforconditionalgeneration
|
.md
|
The Moonshine Model with a language modeling head. Can be used for automatic speech recognition.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
329_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MoonshineConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
329_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/moonshine.md
|
https://huggingface.co/docs/transformers/en/model_doc/moonshine/#moonshineforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- generate
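A hedged end-to-end sketch of `generate` for transcription (the checkpoint name and the dummy 16 kHz waveform are illustrative assumptions):
```python
>>> import numpy as np
>>> from transformers import AutoProcessor, MoonshineForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("UsefulSensors/moonshine-tiny")  # assumed checkpoint
>>> model = MoonshineForConditionalGeneration.from_pretrained("UsefulSensors/moonshine-tiny")

>>> # one second of dummy 16 kHz audio; a real waveform would come from a file or dataset
>>> audio = np.zeros(16_000, dtype=np.float32)
>>> inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
>>> generated_ids = model.generate(**inputs)
>>> processor.batch_decode(generated_ids, skip_special_tokens=True)
```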
|
329_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
330_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
330_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#overview
|
.md
|
The CLVP (Contrastive Language-Voice Pretrained Transformer) model was proposed in [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
The abstract from the paper is the following:
|
330_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#overview
|
.md
|
*In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic processes and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise - an
|
330_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#overview
|
.md
|
This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise - an expressive, multi-voice text-to-speech system.*
|
330_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#overview
|
.md
|
This model was contributed by [Susnato Dhar](https://huggingface.co/susnato).
The original code can be found [here](https://github.com/neonbjb/tortoise-tts).
|
330_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#usage-tips
|
.md
|
1. CLVP is an integral part of the Tortoise TTS model.
2. CLVP can be used to compare different generated speech candidates with the provided text, and the best speech tokens are forwarded to the diffusion model.
3. The use of the [`ClvpModelForConditionalGeneration.generate()`] method is strongly recommended for Tortoise usage.
4. Note that the CLVP model expects the audio to be sampled at 22.05 kHz, unlike other audio models, which expect 16 kHz.
|
330_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#brief-explanation
|
.md
|
- The [`ClvpTokenizer`] tokenizes the text input, and the [`ClvpFeatureExtractor`] extracts the log mel-spectrogram from the desired audio.
- [`ClvpConditioningEncoder`] takes those text tokens and audio representations and converts them into embeddings conditioned on the text and audio.
- The [`ClvpForCausalLM`] uses those embeddings to generate multiple speech candidates.
|
330_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#brief-explanation
|
.md
|
- The [`ClvpForCausalLM`] uses those embeddings to generate multiple speech candidates.
- Each speech candidate is passed through the speech encoder ([`ClvpEncoder`]) which converts them into a vector representation, and the text encoder ([`ClvpEncoder`]) converts the text tokens into the same latent space.
- At the end, we compare each speech vector with the text vector to see which speech vector is most similar to the text vector.
|
330_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#brief-explanation
|
.md
|
- At the end, we compare each speech vector with the text vector to see which speech vector is most similar to the text vector.
- [`ClvpModelForConditionalGeneration.generate()`] compresses all of the logic described above into a single method.
Example:
```python
>>> import datasets
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
|
330_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#brief-explanation
|
.md
|
>>> # Define the text and load the audio (we take an audio example from the Hugging Face Hub using the `datasets` library).
>>> text = "This is an example text."
>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> sample = ds[0]["audio"]
|
330_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#brief-explanation
|
.md
|
>>> # Define processor and model.
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
>>> # Generate processor output and model output.
>>> processor_output = processor(raw_speech=sample["array"], sampling_rate=sample["sampling_rate"], text=text, return_tensors="pt")
>>> generated_output = model.generate(**processor_output)
```
|
330_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpconfig
|
.md
|
[`ClvpConfig`] is the configuration class to store the configuration of a [`ClvpModelForConditionalGeneration`]. It
is used to instantiate a CLVP model according to the specified arguments, defining the text model, speech model and
decoder model configs. Instantiating a configuration with the defaults will yield a similar configuration to that
of the CLVP [susnato/clvp_dev](https://huggingface.co/susnato/clvp_dev) architecture.
|
330_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpconfig
|
.md
|
of the CLVP [susnato/clvp_dev](https://huggingface.co/susnato/clvp_dev) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize the CLVP text encoder.
speech_config (`dict`, *optional*):
Dictionary of configuration options used to initialize the CLVP speech encoder.
|
330_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpconfig
|
.md
|
speech_config (`dict`, *optional*):
Dictionary of configuration options used to initialize the CLVP speech encoder.
decoder_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`ClvpDecoderConfig`].
projection_dim (`int`, *optional*, defaults to 768):
Dimensionality of text and speech projection layers.
logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
The initial value of the *logit_scale* parameter. Default is used as per the original CLVP implementation.
|
330_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpconfig
|
.md
|
The initial value of the *logit_scale* parameter. Default is used as per the original CLVP implementation.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization
testing).
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import ClvpConfig, ClvpModelForConditionalGeneration
|
330_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpconfig
|
.md
|
>>> # Initializing a ClvpConfig with susnato/clvp_dev style configuration
>>> configuration = ClvpConfig()
>>> # Initializing a ClvpModelForConditionalGeneration (with random weights) from the susnato/clvp_dev style configuration
>>> model = ClvpModelForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
|
330_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpconfig
|
.md
|
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a ClvpConfig from a text ClvpEncoderConfig, a speech ClvpEncoderConfig and a ClvpDecoderConfig
>>> from transformers import ClvpEncoderConfig, ClvpDecoderConfig
>>> # Initializing a CLVP text, CLVP speech and CLVP decoder configuration
>>> config_text = ClvpEncoderConfig()
>>> config_speech = ClvpEncoderConfig()
>>> decoder_config = ClvpDecoderConfig()
|
330_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpconfig
|
.md
|
>>> config = ClvpConfig.from_sub_model_configs(config_text, config_speech, decoder_config)
```
Methods: from_sub_model_configs
|
330_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
This is the configuration class to store the configuration of a [`ClvpEncoder`]. It is used to instantiate a CLVP
text or CLVP speech encoder according to the specified arguments. Instantiating a configuration with the defaults
will yield a similar configuration to that of the encoder of the CLVP
[susnato/clvp_dev](https://huggingface.co/susnato/clvp_dev) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
330_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 256):
Vocabulary size of the CLVP Encoder model.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 1536):
|
330_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 1536):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
projection_dim (`int`, *optional*, defaults to 768):
Dimensionality of the projection vector.
num_hidden_layers (`int`, *optional*, defaults to 20):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
330_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
|
330_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the feed-forward layers in [`ClvpEncoderMLP`].
use_rotary_embedding (`bool`, *optional*, defaults to `True`):
Whether to use rotary_embedding or not.
|
330_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
use_rotary_embedding (`bool`, *optional*, defaults to `True`):
Whether to use rotary_embedding or not.
use_attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use bias in Query, Key and Value layers during self attention.
summary_type (`str`, *optional*, defaults to `"mean"`):
What strategy to use to get `pooler_output` from the `last_hidden_state`. `"last"`, `"first"`, `"mean"` and
`"cls_index"` are supported.
initializer_factor (`float`, *optional*, defaults to 1.0):
|
330_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
`"cls_index"` are supported.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization
testing).
bos_token_id (`int`, *optional*, defaults to 255):
Beginning of sequence token id.
eos_token_id (`int`, *optional*, defaults to 0):
End of sequence token id.
Example:
```python
>>> from transformers import ClvpEncoderConfig, ClvpEncoder
|
330_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpencoderconfig
|
.md
|
>>> # Initializing a ClvpEncoderConfig with susnato/clvp_dev style configuration
>>> encoder_configuration = ClvpEncoderConfig()
>>> # Initializing a ClvpEncoder (with random weights) from the susnato/clvp_dev style configuration
>>> model = ClvpEncoder(encoder_configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
330_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
This is the configuration class to store the configuration of a [`ClvpDecoder`]. It is used to instantiate a CLVP
Decoder Model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Decoder part of the CLVP
[susnato/clvp_dev](https://huggingface.co/susnato/clvp_dev) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
330_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
The architecture is similar to GPT2.
Args:
vocab_size (`int`, *optional*, defaults to 8194):
Vocabulary size of the model.
max_position_embeddings (`int`, *optional*, defaults to 608):
The maximum sequence length of mel tokens that this model might ever be used with. Similar to `n_positions`
in `GPT2Config`.
|
330_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
The maximum sequence length of mel tokens that this model might ever be used with. Similar to `n_positions`
in `GPT2Config`.
max_text_tokens (`int`, *optional*, defaults to 404):
The maximum sequence length of text tokens that this model might ever be used with. Similar to
`n_positions` in `GPT2Config`.
hidden_size (`int`, *optional*, defaults to 1024):
Dimensionality of the embeddings and hidden states.
num_hidden_layers (`int`, *optional*, defaults to 30):
|
330_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
Dimensionality of the embeddings and hidden states.
num_hidden_layers (`int`, *optional*, defaults to 30):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
n_inner (`int`, *optional*):
Dimensionality of the inner feed-forward layers. `None` will set it to 4 times `hidden_size`.
num_mel_attn_blocks (`int`, *optional*, defaults to 6):
|
330_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
num_mel_attn_blocks (`int`, *optional*, defaults to 6):
Denotes the number of self attention layers in [`ClvpConditioningEncoder`].
activation_function (`str`, *optional*, defaults to `"gelu_new"`):
Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
resid_pdrop (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (`float`, *optional*, defaults to 0.1):
|
330_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
embd_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the embeddings.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
The epsilon to use in the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
330_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
summary_type (`str`, *optional*, defaults to `"cls_index"`):
Argument used when doing sequence summary.
Has to be one of the following options:
- `"last"`: Take the last token hidden state (like XLNet).
- `"first"`: Take the first token hidden state (like BERT).
- `"mean"`: Take the mean of all tokens hidden states.
- `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2).
|
330_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
- `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2).
- `"attn"`: Not implemented now, use multi-head attention.
summary_use_proj (`bool`, *optional*, defaults to `True`):
Whether or not to add a projection after the vector extraction.
summary_activation (`str`, *optional*):
Pass `"tanh"` for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (`bool`, *optional*, defaults to `True`):
|
330_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
summary_proj_to_labels (`bool`, *optional*, defaults to `True`):
Whether the projection outputs should have `config.num_labels` or `config.hidden_size` classes.
summary_first_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio to be used after the projection and activation.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
bos_token_id (`int`, *optional*, defaults to 8192):
|
330_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
bos_token_id (`int`, *optional*, defaults to 8192):
Beginning of sequence token id, used at the start of the generation.
eos_token_id (`int`, *optional*, defaults to 8193):
End of sequence token id, used in the method
[`ClvpModelForConditionalGeneration.fix_speech_decoder_output()`] to correct decoder outputs.
feature_size (`int`, *optional*, defaults to 80):
The feature dimension of the extracted mel features. This value is used in [`ClvpConditioningEncoder`].
|
330_6_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
The feature dimension of the extracted mel features. This value is used in [`ClvpConditioningEncoder`].
use_attention_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in Query, Key and Value layers during self attention.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization
testing).
decoder_fixing_codes (`list`, *optional*, defaults to `[83, 45, 45, 248]`):
|
330_6_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
testing).
decoder_fixing_codes (`list`, *optional*, defaults to `[83, 45, 45, 248]`):
These values are used in the method `fix_speech_decoder_output` to fix decoder generated outputs.
Example:
```python
>>> from transformers import ClvpDecoderConfig, ClvpDecoder
|
330_6_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpdecoderconfig
|
.md
|
>>> # Initializing a ClvpDecoderConfig with susnato/clvp_dev style configuration
>>> decoder_configuration = ClvpDecoderConfig()
>>> # Initializing a ClvpDecoder (with random weights) from the susnato/clvp_dev style configuration
>>> model = ClvpDecoder(decoder_configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
330_6_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvptokenizer
|
.md
|
Construct a CLVP tokenizer. Based on byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import ClvpTokenizer
>>> tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev")
>>> tokenizer("Hello world")["input_ids"]
[62, 84, 28, 2, 179, 79]
|
330_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvptokenizer
|
.md
|
>>> tokenizer(" Hello world")["input_ids"]
[2, 62, 84, 28, 2, 179, 79]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
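A small sketch of the workaround mentioned above, using the same checkpoint as the example before it (token ids omitted since they depend on the trained vocabulary):
```python
>>> from transformers import ClvpTokenizer

>>> # instantiating with add_prefix_space=True treats the leading word like any other word
>>> tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev", add_prefix_space=True)
>>> inputs = tokenizer("Hello world")
```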
|
330_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvptokenizer
|
.md
|
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
|
330_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvptokenizer
|
.md
|
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The beginning of sequence token.
|
330_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvptokenizer
|
.md
|
token instead.
bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `"[STOP]"`):
The end of sequence token.
pad_token (`str`, *optional*, defaults to `"[STOP]"`):
The pad token of the sequence.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
|
330_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvptokenizer
|
.md
|
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The CLVP tokenizer detects the beginning of words by the preceding space.)
add_bos_token (`bool`, *optional*, defaults to `False`):
Whether to add `bos_token` at the beginning of the sequence when `add_special_tokens=True`.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether to add `eos_token` at the end of the sequence when `add_special_tokens=True`.
Methods: save_vocabulary
|
330_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpfeatureextractor
|
.md
|
Constructs a CLVP feature extractor.
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
This class extracts log-mel-spectrogram features from raw speech using a custom numpy implementation of the
Short-Time Fourier Transform (STFT), which should match PyTorch's `torch.stft`.
Args:
|
330_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpfeatureextractor
|
.md
|
Short-Time Fourier Transform (STFT), which should match PyTorch's `torch.stft`.
Args:
feature_size (`int`, *optional*, defaults to 80):
The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 22050):
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
default_audio_length (`int`, *optional*, defaults to 6):
The default length of raw audio in seconds. If `max_length` is not set during `__call__` then it will
|
330_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpfeatureextractor
|
.md
|
The default length of raw audio in seconds. If `max_length` is not set during `__call__` then it will
automatically be set to `default_audio_length * self.sampling_rate`.
hop_length (`int`, *optional*, defaults to 256):
Length of the overlapping windows for the STFT used to obtain the Mel Frequency coefficients.
chunk_length (`int`, *optional*, defaults to 30):
The maximum number of chunks of `sampling_rate` samples used to trim and pad longer or shorter audio
sequences.
|
330_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpfeatureextractor
|
.md
|
The maximum number of chunks of `sampling_rate` samples used to trim and pad longer or shorter audio
sequences.
n_fft (`int`, *optional*, defaults to 1024):
Size of the Fourier transform.
padding_value (`float`, *optional*, defaults to 0.0):
Padding value used to pad the audio. Should correspond to silences.
mel_norms (`list` of length `feature_size`, *optional*):
If `mel_norms` is provided then it will be used to normalize the log-mel spectrograms along each
mel-filter.
|
330_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clvp.md
|
https://huggingface.co/docs/transformers/en/model_doc/clvp/#clvpfeatureextractor
|
.md
|
If `mel_norms` is provided then it will be used to normalize the log-mel spectrograms along each
mel-filter.
return_attention_mask (`bool`, *optional*, defaults to `False`):
Whether to return the attention mask. If left to the default, it will return the attention mask.
[What are attention masks?](../glossary#attention-mask)
Methods: __call__
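A minimal `__call__` sketch, using the checkpoint from the examples above and dummy audio at the expected 22.05 kHz:
```python
>>> import numpy as np
>>> from transformers import ClvpFeatureExtractor

>>> feature_extractor = ClvpFeatureExtractor.from_pretrained("susnato/clvp_dev")

>>> audio = np.zeros(22_050, dtype=np.float32)  # one second of dummy audio
>>> inputs = feature_extractor(raw_speech=audio, sampling_rate=22_050, return_tensors="pt")
>>> inputs["input_features"].shape  # (batch_size, feature_size, num_frames)
```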
|
330_8_4
|