Columns: source (string, 470 distinct values) · url (string, 49–167 chars) · file_type (string, 1 distinct value) · chunk (string, 1–512 chars) · chunk_id (string, 5–9 chars)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizerfast
.md
Construct a "fast" FNetTokenizer (backed by HuggingFace's *tokenizers* library). Adapted from [`AlbertTokenizerFast`]. Based on [Unigram](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models). This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods Args: vocab_file (`str`):
165_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizerfast
.md
this superclass for more information regarding those methods Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (`bool`, *optional*, defaults to `False`): Whether or not to lowercase the input when tokenizing. remove_space (`bool`, *optional*, defaults to `True`):
165_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizerfast
.md
Whether or not to lowercase the input when tokenizing. remove_space (`bool`, *optional*, defaults to `True`): Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (`bool`, *optional*, defaults to `True`): Whether or not to keep accents when tokenizing. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
165_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizerfast
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`):
165_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizerfast
.md
token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
165_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizerfast
.md
instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
165_6_5
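Taken together, these arguments describe the special tokens the tokenizer injects around sequences. A minimal usage sketch, assuming network access and that the `google/fnet-base` checkpoint is available on the Hub:

```python
from transformers import FNetTokenizerFast

# Assumed checkpoint: google/fnet-base hosts a pretrained FNet vocabulary;
# substitute any FNet checkpoint you have locally.
tokenizer = FNetTokenizerFast.from_pretrained("google/fnet-base")

# A single sequence is wrapped as [CLS] ... [SEP]; a pair as [CLS] A [SEP] B [SEP].
encoding = tokenizer("Paris is the capital of France.")
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
```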
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetmodel
.md
The bare FNet Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model.
165_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetmodel
.md
behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder, following the architecture described in [FNet: Mixing Tokens with Fourier
165_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetmodel
.md
The model can behave as an encoder, following the architecture described in [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. Methods: forward
165_7_2
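A short sketch of running the bare encoder, again assuming the `google/fnet-base` checkpoint. Because FNet replaces self-attention with Fourier transforms, the tokenizer emits no `attention_mask`:

```python
import torch
from transformers import FNetModel, FNetTokenizerFast

tokenizer = FNetTokenizerFast.from_pretrained("google/fnet-base")
model = FNetModel.from_pretrained("google/fnet-base")

inputs = tokenizer("Hello world", return_tensors="pt")  # input_ids + token_type_ids
with torch.no_grad():
    outputs = model(**inputs)

# Raw hidden states: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```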
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetforpretraining
.md
FNet Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next sentence prediction (classification)` head. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model.
165_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetforpretraining
.md
behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
165_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetformaskedlm
.md
FNet Model with a `language modeling` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
165_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetformaskedlm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
165_9_1
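A sketch of masked-token prediction with this head, assuming the `google/fnet-base` checkpoint:

```python
import torch
from transformers import FNetForMaskedLM, FNetTokenizerFast

tokenizer = FNetTokenizerFast.from_pretrained("google/fnet-base")
model = FNetForMaskedLM.from_pretrained("google/fnet-base")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Most likely token at the [MASK] position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```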
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetfornextsentenceprediction
.md
FNet Model with a `next sentence prediction (classification)` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
165_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetfornextsentenceprediction
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
165_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetforsequenceclassification
.md
FNet Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model.
165_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetforsequenceclassification
.md
behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
165_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetformultiplechoice
.md
FNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model.
165_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetformultiplechoice
.md
behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
165_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetfortokenclassification
.md
FNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model.
165_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetfortokenclassification
.md
behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
165_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetforquestionanswering
.md
FNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
165_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetforquestionanswering
.md
behavior. Parameters: config ([`FNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
165_14_1
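A sketch of span extraction with this head; note the span head is randomly initialized from the base checkpoint, so outputs are only meaningful after fine-tuning on a QA dataset such as SQuAD:

```python
import torch
from transformers import FNetForQuestionAnswering, FNetTokenizerFast

tokenizer = FNetTokenizerFast.from_pretrained("google/fnet-base")
# The span head here is untrained; this only illustrates the API shape.
model = FNetForQuestionAnswering.from_pretrained("google/fnet-base")

question, context = "Where is the Eiffel Tower?", "The Eiffel Tower stands in Paris."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Most likely start/end of the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```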
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
166_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
166_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#overview
.md
The UniSpeech model was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. The abstract from the paper is the following: *In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both
166_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#overview
.md
*In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We
166_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#overview
.md
information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all
166_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#overview
.md
recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech).
166_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#usage-tips
.md
- UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - The UniSpeech model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`].
166_2_0
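Putting both tips together, a sketch of CTC inference; the checkpoint name is an assumption (any UniSpeech model fine-tuned for CTC works), and `datasets` is used only to fetch a test clip:

```python
import torch
from datasets import load_dataset
from transformers import UniSpeechForCTC, Wav2Vec2Processor

# Assumed checkpoint: a UniSpeech model fine-tuned with CTC; substitute your own.
ckpt = "patrickvonplaten/unispeech-large-1500h-cv-timit"
processor = Wav2Vec2Processor.from_pretrained(ckpt)
model = UniSpeechForCTC.from_pretrained(ckpt)

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```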
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#resources
.md
- [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr)
166_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
This is the configuration class to store the configuration of a [`UniSpeechModel`]. It is used to instantiate a UniSpeech model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the UniSpeech [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) architecture.
166_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
[microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32): Vocabulary size of the UniSpeech model. Defines the number of different tokens that can be represented by
166_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Vocabulary size of the UniSpeech model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`UniSpeechModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12):
166_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
166_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
166_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. activation_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for output of the feature encoder. feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
166_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
The dropout probability for output of the feature encoder. feat_quantizer_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for the output of the feature encoder that's used by the quantizer. final_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the final projection layer of [`UniSpeechForCTC`]. layerdrop (`float`, *optional*, defaults to 0.1): The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
166_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. feat_extract_norm (`str`, *optional*, defaults to `"group"`):
166_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
The epsilon used by the layer normalization layers. feat_extract_norm (`str`, *optional*, defaults to `"group"`): The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers. feat_extract_activation (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the 1D convolutional layers of the feature
166_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`): A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
166_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers. conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`): A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`):
166_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`): A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_bias (`bool`, *optional*, defaults to `False`): Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
166_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (`int`, *optional*, defaults to 128): Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16): Number of groups of 1D convolutional positional embeddings layer. do_stable_layer_norm (`bool`, *optional*, defaults to `False`):
166_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
do_stable_layer_norm (`bool`, *optional*, defaults to `False`): Whether to apply *stable* layer norm architecture of the Transformer encoder. `do_stable_layer_norm is True` corresponds to applying layer norm before the attention layer, whereas `do_stable_layer_norm is False` corresponds to applying layer norm after the attention layer. apply_spec_augment (`bool`, *optional*, defaults to `True`): Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
166_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). mask_time_prob (`float`, *optional*, defaults to 0.05): Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob * len(time_axis) / mask_time_length` independent masks over the axis. If
166_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
procedure generates `mask_time_prob * len(time_axis) / mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start * mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis.
166_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2): The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0):
166_4_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0): Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob * len(feature_axis) / mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start * mask_feature_length`. Note that overlap
166_4_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
span to be masked, *mask_feature_prob* should be `prob_vector_start * mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_feature_length (`int`, *optional*, defaults to 10): Length of vector span along the feature axis. mask_feature_min_masks (`int`, *optional*, defaults to 0): The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
166_4_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`. Only relevant if `mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks`. num_codevectors_per_group (`int`, *optional*, defaults to 320): Number of entries in each quantization codebook (group). num_codevector_groups (`int`, *optional*, defaults to 2): Number of codevector groups for product codevector quantization.
166_4_19
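To make the mask-count formulas above concrete, a small back-of-the-envelope computation (the 400-step sequence length is an arbitrary example):

```python
# Expected number of SpecAugment time masks under the defaults described above.
mask_time_prob, mask_time_length, mask_time_min_masks = 0.05, 10, 2

time_steps = 400  # hypothetical sequence length after the feature encoder

num_masks = int(mask_time_prob * time_steps / mask_time_length)  # 0.05 * 400 / 10 = 2
num_masks = max(num_masks, mask_time_min_masks)  # floor enforced by mask_time_min_masks
print(num_masks)  # 2 spans of 10 steps; ~5% of the axis masked, less if spans overlap
```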
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
num_codevector_groups (`int`, *optional*, defaults to 2): Number of codevector groups for product codevector quantization. contrastive_logits_temperature (`float`, *optional*, defaults to 0.1): The temperature *kappa* in the contrastive loss. num_negatives (`int`, *optional*, defaults to 100): Number of negative samples for the contrastive loss. codevector_dim (`int`, *optional*, defaults to 256): Dimensionality of the quantized feature vectors. proj_codevector_dim (`int`, *optional*, defaults to 256):
166_4_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Dimensionality of the quantized feature vectors. proj_codevector_dim (`int`, *optional*, defaults to 256): Dimensionality of the final projection of both the quantized and the transformer features. diversity_loss_weight (`float`, *optional*, defaults to 0.1): The weight of the codebook diversity loss component. ctc_loss_reduction (`str`, *optional*, defaults to `"mean"`): Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [`UniSpeechForCTC`].
166_4_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
instance of [`UniSpeechForCTC`]. ctc_zero_infinity (`bool`, *optional*, defaults to `False`): Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [`UniSpeechForCTC`]. use_weighted_layer_sum (`bool`, *optional*, defaults to `False`): Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
166_4_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [`UniSpeechForSequenceClassification`]. classifier_proj_size (`int`, *optional*, defaults to 256): Dimensionality of the projection before token mean-pooling for classification. num_ctc_classes (`int`, *optional*, defaults to 80): Specifies the number of classes (phoneme tokens and blank token) for phoneme-level CTC loss. Only relevant when using an instance of [`UniSpeechForPreTraining`].
166_4_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
when using an instance of [`UniSpeechForPreTraining`]. pad_token_id (`int`, *optional*, defaults to 0): The id of the padding token. bos_token_id (`int`, *optional*, defaults to 1): The id of the "beginning-of-sequence" token. eos_token_id (`int`, *optional*, defaults to 2): The id of the "end-of-sequence" token. replace_prob (`float`, *optional*, defaults to 0.5): Probability that a transformer feature is replaced by a quantized feature during pretraining. Example: ```python
166_4_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
Probability that a transformer feature is replaced by a quantized feature during pretraining. Example: ```python >>> from transformers import UniSpeechConfig, UniSpeechModel
166_4_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechconfig
.md
>>> # Initializing a UniSpeech microsoft/unispeech-large-1500h-cv style configuration >>> configuration = UniSpeechConfig() >>> # Initializing a model (with random weights) from the microsoft/unispeech-large-1500h-cv style configuration >>> model = UniSpeechModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
166_4_26
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeech-specific-outputs
.md
models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput Output type of [`UniSpeechForPreTraining`], with potential hidden states and attentions. Args: loss (*optional*, returned when model is in train mode, `torch.FloatTensor` of shape `(1,)`): Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
166_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeech-specific-outputs
.md
paper](https://arxiv.org/pdf/2006.11477.pdf). projected_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`): Hidden-states of the model projected to *config.proj_codevector_dim* that can be used to predict the masked projected quantized states. projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
166_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeech-specific-outputs
.md
projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`): Quantized extracted feature vectors projected to *config.proj_codevector_dim* representing the positive target vectors for contrastive loss. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
166_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeech-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
166_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeech-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
166_5_4
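A sketch of inspecting these output fields; the dummy waveform stands in for real audio, and the checkpoint follows the configuration docs above:

```python
import torch
from transformers import AutoFeatureExtractor, UniSpeechForPreTraining

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-large-1500h-cv")
model = UniSpeechForPreTraining.from_pretrained("microsoft/unispeech-large-1500h-cv")

# One second of random 16 kHz "audio" as a stand-in for a real waveform.
waveform = torch.randn(16_000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Both projections share config.proj_codevector_dim, so they are directly
# comparable for the contrastive objective.
print(outputs.projected_states.shape)
print(outputs.projected_quantized_states.shape)
```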
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechmodel
.md
The bare UniSpeech Model transformer outputting raw hidden-states without any specific head on top. UniSpeech was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
166_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechmodel
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
166_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechmodel
.md
behavior. Parameters: config ([`UniSpeechConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
166_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforctc
.md
UniSpeech Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). UniSpeech was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
166_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforctc
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
166_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforctc
.md
behavior. Parameters: config ([`UniSpeechConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. target_lang (`str`, *optional*): Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
166_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforctc
.md
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechForCTC`] with adapters. Uses 'eng' by default. Methods: forward
166_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforsequenceclassification
.md
UniSpeech Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. UniSpeech was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
166_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
166_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforsequenceclassification
.md
behavior. Parameters: config ([`UniSpeechConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
166_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforpretraining
.md
UniSpeech Model with a vector-quantization module and CTC loss for pre-training. UniSpeech was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
166_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforpretraining
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
166_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/unispeech.md
https://huggingface.co/docs/transformers/en/model_doc/unispeech/#unispeechforpretraining
.md
behavior. Parameters: config ([`UniSpeechConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
166_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/
.md
<!--Copyright 2022 The HuggingFace Team and The OpenBMB Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
167_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
167_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#overview
.md
CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environmentally friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. [See more](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live)
167_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#overview
.md
This model was contributed by [OpenBMB](https://huggingface.co/openbmb). The original code can be found [here](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live).
167_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#resources
.md
- A tutorial on [CPM-Live](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live).
167_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantconfig
.md
This is the configuration class to store the configuration of a [`CpmAntModel`]. It is used to instantiate a CPMAnt model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CPMAnt [openbmb/cpm-ant-10b](https://huggingface.co/openbmb/cpm-ant-10b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
167_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30720): Vocabulary size of the CPMAnt model. Defines the number of different tokens that can be represented by the `input` passed when calling [`CpmAntModel`]. hidden_size (`int`, *optional*, defaults to 4096): Dimension of the encoder layers.
167_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantconfig
.md
hidden_size (`int`, *optional*, defaults to 4096): Dimension of the encoder layers. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads in the Transformer encoder. dim_head (`int`, *optional*, defaults to 128): Dimension of attention heads for each attention layer in the Transformer encoder. dim_ff (`int`, *optional*, defaults to 10240): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
167_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantconfig
.md
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 48): Number of layers of the Transformer encoder. dropout_p (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder. position_bias_num_buckets (`int`, *optional*, defaults to 512): The number of position_bias buckets. position_bias_max_distance (`int`, *optional*, defaults to 2048):
167_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantconfig
.md
The number of position_bias buckets. position_bias_max_distance (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. init_std (`float`, *optional*, defaults to 1.0): Initialize parameters with std = init_std. prompt_types (`int`, *optional*, defaults to 32): The number of prompt types.
167_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantconfig
.md
Initialize parameters with std = init_std. prompt_types (`int`, *optional*, defaults to 32): The number of prompt types. prompt_length (`int`, *optional*, defaults to 32): The length of the prompt. segment_types (`int`, *optional*, defaults to 32): The number of segment types. use_cache (`bool`, *optional*, defaults to `True`): Whether to use cache. Example: ```python >>> from transformers import CpmAntModel, CpmAntConfig
167_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantconfig
.md
>>> # Initializing a CPMAnt cpm-ant-10b style configuration >>> configuration = CpmAntConfig() >>> # Initializing a model from the cpm-ant-10b style configuration >>> model = CpmAntModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` Methods: all
167_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmanttokenizer
.md
Construct a CPMAnt tokenizer. Based on byte-level Byte-Pair-Encoding. Args: vocab_file (`str`): Path to the vocabulary file. bod_token (`str`, *optional*, defaults to `"<d>"`): The beginning of document token. eod_token (`str`, *optional*, defaults to `"</d>"`): The end of document token. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<pad>"`):
167_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmanttokenizer
.md
The end of sequence token. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. line_token (`str`, *optional*, defaults to `"</n>"`): The line token. space_token (`str`, *optional*, defaults to `"</_>"`): The space token. Methods: all
167_4_1
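A minimal round-trip sketch, assuming the `openbmb/cpm-ant-10b` checkpoint is reachable and the `jieba` segmentation package is installed:

```python
from transformers import CpmAntTokenizer

tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")

# Encode a short Chinese sentence and decode it back.
ids = tokenizer("今天天气真好。")["input_ids"]
print(tokenizer.decode(ids))
```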
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantmodel
.md
The bare CPMAnt Model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~CpmAntConfig`]): Model configuration class with all the parameters of the model.
167_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantmodel
.md
behavior. Parameters: config ([`~CpmAntConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: all
167_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantforcausallm
.md
The CPMAnt Model with a language modeling head on top (linear layer with weights tied to the input embeddings). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~CpmAntConfig`]): Model configuration class with all the parameters of the model.
167_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cpmant.md
https://huggingface.co/docs/transformers/en/model_doc/cpmant/#cpmantforcausallm
.md
behavior. Parameters: config ([`~CpmAntConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: all
167_6_1
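A generation sketch; the 10B checkpoint is large, so this assumes sufficient memory (or one of the smaller compressed variants mentioned in the overview above):

```python
from transformers import CpmAntForCausalLM, CpmAntTokenizer

tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")
model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b")

# Autoregressive continuation of a Chinese prompt.
inputs = tokenizer("今天天气真好，", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```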
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
168_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
168_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speech-encoder-decoder-models
.md
The [`SpeechEncoderDecoderModel`] can be used to initialize a speech-to-text model with any pretrained speech autoencoding model as the encoder (*e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder. The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has *e.g.* been shown in [Large-Scale Self- and Semi-Supervised Learning for Speech
168_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speech-encoder-decoder-models
.md
recognition and speech translation has *e.g.* been shown in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. An example of how to use a [`SpeechEncoderDecoderModel`] for inference can be seen in [Speech2Text2](speech_to_text_2).
168_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#randomly-initializing-speechencoderdecodermodel-from-model-configurations
.md
[`SpeechEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`Wav2Vec2Model`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. ```python >>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel >>> config_encoder = Wav2Vec2Config() >>> config_decoder = BertConfig()
168_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#randomly-initializing-speechencoderdecodermodel-from-model-configurations
.md
>>> config_encoder = Wav2Vec2Config() >>> config_decoder = BertConfig() >>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = SpeechEncoderDecoderModel(config=config) ```
168_2_1
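Beyond random initialization, the encoder and decoder can also be warm-started from pretrained checkpoints; a sketch, with illustrative checkpoint names:

```python
from transformers import SpeechEncoderDecoderModel

# Warm-start: pretrained speech encoder + pretrained text decoder. The decoder's
# cross-attention weights are newly initialized, so the combined model still
# needs fine-tuning on a speech-to-text task.
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-base-960h", "bert-base-uncased"
)
model.save_pretrained("./wav2vec2-bert-warm-start")
```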