source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#flaxgemmamodel
|
.md
|
No docstring available for FlaxGemmaModel
Methods: __call__
|
231_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#flaxgemmaforcausallm
|
.md
|
No docstring available for FlaxGemmaForCausalLM
Methods: __call__
|
231_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
232_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
232_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#overview
|
.md
|
The WavLM model was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen,
Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu,
Michael Zeng, Furu Wei.
The abstract from the paper is the following:
|
232_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#overview
|
.md
|
Michael Zeng, Furu Wei.
The abstract from the paper is the following:
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been
attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker
identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is
|
232_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#overview
|
.md
|
identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is
challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks.
WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity
preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on
|
232_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#overview
|
.md
|
preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on
recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where
additional overlapped utterances are created unsupervisedly and incorporated during model training. Lastly, we scale up
the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB
|
232_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#overview
|
.md
|
the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB
benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm.
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be
found [here](https://github.com/microsoft/unilm/tree/master/wavlm).
|
232_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#usage-tips
|
.md
|
- WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use
[`Wav2Vec2Processor`] for the feature extraction.
- The WavLM model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be
decoded using [`Wav2Vec2CTCTokenizer`] (see the sketch below).
- WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
|
232_2_0
|
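The two tips above can be illustrated with a short, hedged sketch. It is not taken from the chunked docs: `"your-wavlm-ctc-checkpoint"` is a placeholder for any WavLM checkpoint fine-tuned with CTC, and the silent waveform stands in for real 16 kHz audio.

```python
# Hedged sketch: feature extraction with Wav2Vec2Processor, CTC decoding with WavLMForCTC.
# "your-wavlm-ctc-checkpoint" is a placeholder, not a real repo id.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, WavLMForCTC

checkpoint = "your-wavlm-ctc-checkpoint"  # any CTC fine-tuned WavLM checkpoint
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = WavLMForCTC.from_pretrained(checkpoint)

raw_speech = np.zeros(16_000, dtype=np.float32)  # 1 s of silence as a stand-in waveform
inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_frames, vocab_size)

# Greedy CTC decoding: most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```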
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#resources
|
.md
|
- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)
|
232_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
This is the configuration class to store the configuration of a [`WavLMModel`]. It is used to instantiate a WavLM
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the WavLM
[microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
232_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32):
Vocabulary size of the WavLM model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`WavLMModel`].
|
232_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
`inputs_ids` passed when calling [`WavLMModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
232_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
232_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
attention_dropout (`float`, *optional*, defaults to 0.1):
|
232_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
The dropout ratio for activations inside the fully connected layer.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
final_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the final projection layer of [`WavLMForCTC`].
layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more
details.
|
232_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
|
232_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
The epsilon used by the layer normalization layers.
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group
normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the output of the feature encoder.
|
232_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the output of the feature encoder.
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
|
232_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
|
232_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 3, 3)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of *conv_kernel* defines the number of convolutional layers and has to match the length of
|
232_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
length of *conv_kernel* defines the number of convolutional layers and has to match the length of
*conv_dim*.
conv_bias (`bool`, *optional*, defaults to `False`):
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
|
232_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
embeddings layer.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (`bool`, *optional*, defaults to `False`):
Whether to apply *stable* layer norm architecture of the Transformer encoder. `do_stable_layer_norm is
True` corresponds to applying layer norm before the attention layer, whereas `do_stable_layer_norm is
False` corresponds to applying layer norm after the attention layer.
|
232_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
False` corresponds to applying layer norm after the attention layer.
apply_spec_augment (`bool`, *optional*, defaults to `True`):
Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
[SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition](https://arxiv.org/abs/1904.08779).
mask_time_prob (`float`, *optional*, defaults to 0.05):
|
232_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
Recognition](https://arxiv.org/abs/1904.08779).
mask_time_prob (`float`, *optional*, defaults to 0.05):
Probability of each feature vector along the time axis to be chosen as the start of the vector span to be
masked. Approximately `mask_time_prob * sequence_length // mask_time_length` feature vectors will be masked
along the time axis. This is only relevant if `apply_spec_augment is True`.
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
|
232_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
mask_time_min_masks (`int`, *optional*, defaults to 2):
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
|
232_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
Probability of each feature vector along the feature axis to be chosen as the start of the vector span to
be masked. Approximately `mask_feature_prob * hidden_size // mask_feature_length` feature vectors will be masked
along the feature axis. This is only relevant if `apply_spec_augment is True`.
mask_feature_length (`int`, *optional*, defaults to 10):
Length of vector span along the feature axis.
|
232_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
mask_feature_length (`int`, *optional*, defaults to 10):
Length of vector span along the feature axis.
num_codevectors_per_group (`int`, *optional*, defaults to 320):
Number of entries in each quantization codebook (group).
num_codevector_groups (`int`, *optional*, defaults to 2):
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (`float`, *optional*, defaults to 0.1):
The temperature *kappa* in the contrastive loss.
|
232_4_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
contrastive_logits_temperature (`float`, *optional*, defaults to 0.1):
The temperature *kappa* in the contrastive loss.
num_negatives (`int`, *optional*, defaults to 100):
Number of negative samples for the contrastive loss.
codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the quantized feature vectors.
proj_codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the final projection of both the quantized and the transformer features.
|
232_4_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (`float`, *optional*, defaults to 0.1):
The weight of the codebook diversity loss component.
ctc_loss_reduction (`str`, *optional*, defaults to `"mean"`):
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
instance of [`WavLMForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
|
232_4_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
instance of [`WavLMForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`WavLMForCTC`].
use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
|
232_4_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of [`WavLMForSequenceClassification`].
classifier_proj_size (`int`, *optional*, defaults to 256):
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
|
232_4_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
|
232_4_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
A tuple of integers defining the dilation factor of each 1D convolutional layer in *TDNN* module of the
*XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
|
232_4_23
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
add_adapter (`bool`, *optional*, defaults to `False`):
Whether a convolutional network should be stacked on top of the Wav2Vec2 Encoder. Can be very useful for
warm-starting Wav2Vec2 for SpeechEncoderDecoder models.
adapter_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
|
232_4_24
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adapter_stride (`int`, *optional*, defaults to 2):
Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
num_adapter_layers (`int`, *optional*, defaults to 3):
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
output_hidden_size (`int`, *optional*):
|
232_4_25
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
True`.
output_hidden_size (`int`, *optional*):
Dimensionality of the encoder output layer. If not defined, this defaults to *hidden_size*. Only relevant
if `add_adapter is True`.
Example:
```python
|
232_4_26
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmconfig
|
.md
|
```
Example:
```python
>>> from transformers import WavLMConfig, WavLMModel
>>> # Initializing a WavLM facebook/wavlm-base-960h style configuration
>>> configuration = WavLMConfig()
>>> # Initializing a model (with random weights) from the facebook/wavlm-base-960h style configuration
>>> model = WavLMModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
232_4_27
|
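As a small follow-on to the configuration example above (illustrative only, not part of the original docs), any of the documented arguments can be overridden when the configuration is created, for instance the adapter settings described in the argument list:

```python
from transformers import WavLMConfig, WavLMModel

# Override a few documented arguments; the resulting model is randomly initialized
# with the customized architecture (here, the optional convolutional adapter).
config = WavLMConfig(add_adapter=True, num_adapter_layers=3, adapter_stride=2)
model = WavLMModel(config)
print(config.add_adapter, config.num_adapter_layers)
```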
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmmodel
|
.md
|
The bare WavLM Model transformer outputting raw hidden-states without any specific head on top.
WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech
Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo
Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian,
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
|
232_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmmodel
|
.md
|
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
232_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmmodel
|
.md
|
behavior.
Parameters:
config ([`WavLMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
232_5_2
|
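A minimal sketch of running the bare model, assuming the [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) checkpoint referenced earlier ships a feature-extractor config; the silent waveform is a placeholder for real 16 kHz audio:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WavLMModel

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
model = WavLMModel.from_pretrained("microsoft/wavlm-base")

raw_speech = np.zeros(16_000, dtype=np.float32)  # stand-in for a real utterance
inputs = feature_extractor(raw_speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Last-layer hidden states with shape (batch_size, num_frames, hidden_size).
print(outputs.last_hidden_state.shape)
```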
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforctc
|
.md
|
WavLM Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).
WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech
Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo
Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian,
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
|
232_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforctc
|
.md
|
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
232_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforctc
|
.md
|
behavior.
Parameters:
config ([`WavLMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
232_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforsequenceclassification
|
.md
|
WavLM Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting.
WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech
Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo
Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian,
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
|
232_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforsequenceclassification
|
.md
|
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
232_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`WavLMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
232_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforaudioframeclassification
|
.md
|
WavLM Model with a frame classification head on top for tasks like Speaker Diarization.
WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech
Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo
Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian,
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
|
232_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforaudioframeclassification
|
.md
|
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
232_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforaudioframeclassification
|
.md
|
behavior.
Parameters:
config ([`WavLMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
232_8_2
|
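A hedged sketch of frame-level classification for speaker diarization; `"your-wavlm-diarization-checkpoint"` is a placeholder for any WavLM checkpoint fine-tuned with this frame-classification head, and the silent waveform stands in for real audio:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WavLMForAudioFrameClassification

checkpoint = "your-wavlm-diarization-checkpoint"  # placeholder repo id
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = WavLMForAudioFrameClassification.from_pretrained(checkpoint)

raw_speech = np.zeros(16_000, dtype=np.float32)
inputs = feature_extractor(raw_speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_frames, num_labels)

# One independent speaker-activity decision per frame and per label.
speaker_activity = (torch.sigmoid(logits) > 0.5).long()
print(speaker_activity.shape)
```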
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforxvector
|
.md
|
WavLM Model with an XVector feature extraction head on top for tasks like Speaker Verification.
WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech
Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo
Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian,
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
|
232_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforxvector
|
.md
|
Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
232_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wavlm.md
|
https://huggingface.co/docs/transformers/en/model_doc/wavlm/#wavlmforxvector
|
.md
|
behavior.
Parameters:
config ([`WavLMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
232_9_2
|
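A hedged sketch of speaker verification with the x-vector head: embed two utterances and compare the embeddings by cosine similarity. `"your-wavlm-xvector-checkpoint"` is a placeholder, not a real repo id, and the silent waveforms stand in for two real 16 kHz utterances.

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WavLMForXVector

checkpoint = "your-wavlm-xvector-checkpoint"  # any WavLM checkpoint with an XVector head
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = WavLMForXVector.from_pretrained(checkpoint)

utterances = [np.zeros(16_000, dtype=np.float32), np.zeros(16_000, dtype=np.float32)]
inputs = feature_extractor(utterances, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    embeddings = model(**inputs).embeddings  # one x-vector per utterance

# Cosine similarity between the two x-vectors; compare it to a tuned threshold to
# decide whether both utterances come from the same speaker.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(similarity)
```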
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
233_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
233_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#overview
|
.md
|
The UniSpeech-SAT model was proposed in [UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware
Pre-Training](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen,
Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
The abstract from the paper is the following:
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled
|
233_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#overview
|
.md
|
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled
data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in
speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In
this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are
|
233_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#overview
|
.md
|
this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are
introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to
the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function.
Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where
|
233_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#overview
|
.md
|
Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where
additional overlapped utterances are created unsupervisedly and incorporate during training. We integrate the proposed
methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves
state-of-the-art performance in universal representation learning, especially for speaker identification oriented
|
233_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#overview
|
.md
|
state-of-the-art performance in universal representation learning, especially for speaker identification oriented
tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training
dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.*
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be
|
233_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#overview
|
.md
|
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be
found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT).
|
233_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#usage-tips
|
.md
|
- UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
Please use [`Wav2Vec2Processor`] for the feature extraction.
- The UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be
decoded using [`Wav2Vec2CTCTokenizer`] (see the sketch below).
- UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
|
233_2_0
|
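The tips above can be illustrated with a short sketch that mirrors the WavLM one, here using the [microsoft/unispeech-sat-base-100h-libri-ft](https://huggingface.co/microsoft/unispeech-sat-base-100h-libri-ft) checkpoint referenced later in the configuration section (assumption: it ships processor files); the silent waveform stands in for real 16 kHz audio:

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, UniSpeechSatForCTC

checkpoint = "microsoft/unispeech-sat-base-100h-libri-ft"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = UniSpeechSatForCTC.from_pretrained(checkpoint)

raw_speech = np.zeros(16_000, dtype=np.float32)  # placeholder waveform
inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the frame-level predictions.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```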
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#resources
|
.md
|
- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)
|
233_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
This is the configuration class to store the configuration of a [`UniSpeechSatModel`]. It is used to instantiate a
UniSpeechSat model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the UniSpeechSat
[microsoft/unispeech-sat-base-100h-libri-ft](https://huggingface.co/microsoft/unispeech-sat-base-100h-libri-ft)
architecture.
|
233_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
[microsoft/unispeech-sat-base-100h-libri-ft](https://huggingface.co/microsoft/unispeech-sat-base-100h-libri-ft)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32):
Vocabulary size of the UniSpeechSat model. Defines the number of different tokens that can be represented
|
233_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Vocabulary size of the UniSpeechSat model. Defines the number of different tokens that can be represented
by the `inputs_ids` passed when calling [`UniSpeechSatModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
|
233_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
233_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
233_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the output of the feature encoder.
feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
|
233_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
The dropout probability for the output of the feature encoder.
feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the output of the feature encoder that's used by the quantizer.
final_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the final projection layer of [`UniSpeechSatForCTC`].
layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more
details.
|
233_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
|
233_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
The epsilon used by the layer normalization layers.
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group
normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
convolutional layers.
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
|
233_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
|
233_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`):
|
233_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of *conv_kernel* defines the number of convolutional layers and has to match the length of
*conv_dim*.
conv_bias (`bool`, *optional*, defaults to `False`):
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
|
233_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (`bool`, *optional*, defaults to `False`):
|
233_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
do_stable_layer_norm (`bool`, *optional*, defaults to `False`):
Whether to apply *stable* layer norm architecture of the Transformer encoder. `do_stable_layer_norm is
True` corresponds to applying layer norm before the attention layer, whereas `do_stable_layer_norm is
False` corresponds to applying layer norm after the attention layer.
apply_spec_augment (`bool`, *optional*, defaults to `True`):
Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
|
233_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
[SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition](https://arxiv.org/abs/1904.08779).
mask_time_prob (`float`, *optional*, defaults to 0.05):
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
|
233_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
|
233_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
mask_time_min_masks (`int`, *optional*, defaults to 2):
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
|
233_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
|
233_4_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is
True`.
mask_feature_length (`int`, *optional*, defaults to 10):
Length of vector span along the feature axis.
mask_feature_min_masks (`int`, *optional*, defaults to 0):
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
|
233_4_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
step, irrespective of `mask_feature_prob`. Only relevant if
`mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
num_codevectors_per_group (`int`, *optional*, defaults to 320):
Number of entries in each quantization codebook (group).
num_codevector_groups (`int`, *optional*, defaults to 2):
Number of codevector groups for product codevector quantization.
|
233_4_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
num_codevector_groups (`int`, *optional*, defaults to 2):
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (`float`, *optional*, defaults to 0.1):
The temperature *kappa* in the contrastive loss.
num_negatives (`int`, *optional*, defaults to 100):
Number of negative samples for the contrastive loss.
codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the quantized feature vectors.
proj_codevector_dim (`int`, *optional*, defaults to 256):
|
233_4_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Dimensionality of the quantized feature vectors.
proj_codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (`float`, *optional*, defaults to 0.1):
The weight of the codebook diversity loss component.
ctc_loss_reduction (`str`, *optional*, defaults to `"mean"`):
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
instance of [`UniSpeechSatForCTC`].
|
233_4_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
instance of [`UniSpeechSatForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`UniSpeechSatForCTC`].
use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
|
233_4_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of [`UniSpeechSatForSequenceClassification`].
classifier_proj_size (`int`, *optional*, defaults to 256):
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
|
233_4_23
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
|
233_4_24
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
A tuple of integers defining the dilation factor of each 1D convolutional layer in *TDNN* module of the
*XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
|
233_4_25
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
pad_token_id (`int`, *optional*, defaults to 0):
The id of the padding token.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the "end-of-sequence" token.
num_clusters (`int`, *optional*, defaults to 504):
Number of clusters for weak labeling. Only relevant when using an instance of
|
233_4_26
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
Number of clusters for weak labeling. Only relevant when using an instance of
[`UniSpeechSatForPreTraining`].
Example:
```python
>>> from transformers import UniSpeechSatModel, UniSpeechSatConfig
|
233_4_27
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatconfig
|
.md
|
>>> # Initializing a UniSpeechSat microsoft/unispeech-sat-base-100h-libri-ft style configuration
>>> configuration = UniSpeechSatConfig()
>>> # Initializing a model from the microsoft/unispeech-sat-base-100h-libri-ft style configuration
>>> model = UniSpeechSatModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
233_4_28
|
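As with WavLM, the documented arguments can be overridden when the configuration is built; the values below are illustrative only and simply echo the documented defaults:

```python
from transformers import UniSpeechSatConfig, UniSpeechSatForPreTraining

# num_clusters controls the weak-labeling head used by UniSpeechSatForPreTraining
# (see the argument list above); the model below is randomly initialized.
config = UniSpeechSatConfig(num_clusters=504, mask_time_prob=0.05)
model = UniSpeechSatForPreTraining(config)
print(config.num_clusters)
```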
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsat-specific-outputs
|
.md
|
models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput
Output type of [`UniSpeechSatForPreTraining`], with potential hidden states and attentions.
Args:
loss (*optional*, returned when model is in train mode, `torch.FloatTensor` of shape `(1,)`):
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the [official
paper](https://arxiv.org/pdf/2006.11477.pdf).
|
233_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsat-specific-outputs
|
.md
|
paper](https://arxiv.org/pdf/2006.11477.pdf).
projected_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
Hidden-states of the model projected to *config.proj_codevector_dim* that can be used to predict the masked
projected quantized states.
projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
|
233_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsat-specific-outputs
|
.md
|
projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
Quantized extracted feature vectors projected to *config.proj_codevector_dim* representing the positive
target vectors for contrastive loss.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
233_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsat-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
233_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsat-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
233_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatmodel
|
.md
|
The bare UniSpeechSat Model transformer outputting raw hidden-states without any specific head on top.
UniSpeechSat was proposed in [UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware
Pre-Training](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen,
Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving etc.).
|
233_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech-sat.md
|
https://huggingface.co/docs/transformers/en/model_doc/unispeech-sat/#unispeechsatmodel
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`UniSpeechSatConfig`]): Model configuration class with all the parameters of the model.
|
233_6_1
|