source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
> Speech encoder specific parameters
speech_encoder_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer speech encoder.
speech_encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer speech encoder.
speech_encoder_intermediate_size (`int`, *optional*, defaults to 4096):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer speech encoder.
|
222_15_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer speech encoder.
speech_encoder_hidden_act (`str` or `function`, *optional*, defaults to `"swish"`):
The non-linear activation function (function or string) in the speech encoder. If string, `"gelu"`,
`"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported.
speech_encoder_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all layers in the speech encoder.
|
222_15_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
speech_encoder_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all layers in the speech encoder.
add_adapter (`bool`, *optional*, defaults to `True`):
Add an adapter layer on top of the speech encoder.
speech_encoder_layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability for the speech encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
feature_projection_input_dim (`int`, *optional*, defaults to 160):
|
222_15_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
https://arxiv.org/abs/1909.11556) for more details.
feature_projection_input_dim (`int`, *optional*, defaults to 160):
Input dimension of the input feature projection of the speech encoder, i.e., the dimension after processing
input audio with [`SeamlessM4TFeatureExtractor`].
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer of the speech encoder.
|
222_15_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
embeddings layer of the speech encoder.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
Number of groups of 1D convolutional positional embeddings layer of the speech encoder.
adaptor_kernel_size (`int`, *optional*, defaults to 8):
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adaptor_stride (`int`, *optional*, defaults to 8):
Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
|
222_15_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adaptor_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all layers in the speech adapter.
num_adapter_layers (`int`, *optional*, defaults to 1):
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
position_embeddings_type (`str`, *optional*, defaults to `"relative"`):
|
222_15_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
True`.
position_embeddings_type (`str`, *optional*, defaults to `"relative"`):
Can be set to `relative` or `rotary` for relative or rotary position embeddings respectively. If left
`None`, no relative position embeddings are applied. Only applied to the speech encoder.
rotary_embedding_base (`int`, *optional*, defaults to 10000):
If `"rotary"` position embeddings are used, defines the size of the embedding base. Only applied to the
speech encoder.
|
222_15_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
If `"rotary"` position embeddings are used, defines the size of the embedding base. Only applied to the
speech encoder.
max_source_positions (`int`, *optional*, defaults to 4096):
if `"relative"` position embeddings are used, defines the maximum source input positions. Only applied to
the speech encoder.
conv_depthwise_kernel_size (`int`, *optional*, defaults to 31):
Kernel size of the depthwise 1D convolutional layer in Conformer blocks. Only applied to the speech encoder.
|
222_15_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
Kernel size of the depthwise 1D convolutional layer in Conformer blocks. Only applied to the speech encoder.
> Text-To-Unit (t2u) model specific parameters
t2u_bos_token_id (`int`, *optional*, defaults to 0):
The id of the _beginning-of-stream_ unit token. Only applied to the text-to-unit seq2seq model.
t2u_pad_token_id (`int`, *optional*, defaults to 1):
The id of the _padding_ unit token. Only applied to the text-to-unit seq2seq model.
t2u_eos_token_id (`int`, *optional*, defaults to 2):
|
222_15_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
t2u_eos_token_id (`int`, *optional*, defaults to 2):
The id of the _end-of-stream_ unit token. Only applied to the text-to-unit seq2seq model.
t2u_decoder_start_token_id (`int`, *optional*, defaults to 2):
If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token. Only
applied to the text-to-unit seq2seq model.
t2u_max_new_tokens (`int`, *optional*, defaults to 1024):
|
222_15_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
applied to the text-to-unit seq2seq model.
t2u_max_new_tokens (`int`, *optional*, defaults to 1024):
The maximum number of unit tokens to generate, ignoring the number of tokens in the prompt. Only applied
to the text-to-unit seq2seq model.
t2u_encoder_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer text-to-unit encoder.
t2u_encoder_ffn_dim (`int`, *optional*, defaults to 8192):
|
222_15_23
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
Number of hidden layers in the Transformer text-to-unit encoder.
t2u_encoder_ffn_dim (`int`, *optional*, defaults to 8192):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text-to-unit encoder.
t2u_encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer text-to-unit encoder.
t2u_decoder_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer text-to-unit decoder.
|
222_15_24
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
t2u_decoder_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer text-to-unit decoder.
t2u_decoder_ffn_dim (`int`, *optional*, defaults to 8192):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text-to-unit decoder.
t2u_decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer text-to-unit decoder.
t2u_max_position_embeddings (`int`, *optional*, defaults to 2048):
|
222_15_25
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
t2u_max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that the text-to-unit component of this model might ever be used with. Typically set
this to something large just in case (e.g., 512, 1024, or 2048).
> Hifi-Gan Vocoder specific parameters
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the output audio will be generated, expressed in hertz (Hz).
upsample_initial_channel (`int`, *optional*, defaults to 512):
|
222_15_26
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
upsample_initial_channel (`int`, *optional*, defaults to 512):
The number of input channels into the hifi-gan upsampling network. Applies to the vocoder only.
upsample_rates (`Tuple[int]` or `List[int]`, *optional*, defaults to `[5, 4, 4, 2, 2]`):
A tuple of integers defining the stride of each 1D convolutional layer in the vocoder upsampling network.
The length of *upsample_rates* defines the number of convolutional layers and has to match the length of
|
222_15_27
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
The length of *upsample_rates* defines the number of convolutional layers and has to match the length of
*upsample_kernel_sizes*. Applies to the vocoder only.
upsample_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[11, 8, 8, 4, 4]`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the vocoder upsampling
network. The length of *upsample_kernel_sizes* defines the number of convolutional layers and has to match
|
222_15_28
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
network. The length of *upsample_kernel_sizes* defines the number of convolutional layers and has to match
the length of *upsample_rates*. Applies to the vocoder only.
resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`):
A tuple of integers defining the kernel sizes of the vocoder 1D convolutional layers in the multi-receptive
field fusion (MRF) module. Applies to the vocoder only.
|
222_15_29
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
field fusion (MRF) module. Applies to the vocoder only.
resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 5], [1, 3, 5], [1, 3, 5]]`):
A nested tuple of integers defining the dilation rates of the vocoder dilated 1D convolutional layers in
the multi-receptive field fusion (MRF) module. Applies to the vocoder only.
leaky_relu_slope (`float`, *optional*, defaults to 0.1):
|
222_15_30
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
leaky_relu_slope (`float`, *optional*, defaults to 0.1):
The slope of the negative part of the leaky ReLU activation in the vocoder. Applies to the vocoder
only.
unit_hifi_gan_vocab_size (`int`, *optional*, defaults to 10000):
Vocabulary size of the SeamlessM4T vocoder. Defines the number of different unit tokens that can be
represented by the `input_ids` passed when calling the vocoder of [`~SeamlessM4TModel`],
[`~SeamlessM4TForSpeechToSpeech`] or [`~SeamlessM4TForTextToSpeech`].
|
222_15_31
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
[`~SeamlessM4TForSpeechToSpeech`] or [`~SeamlessM4TForTextToSpeech`].
unit_embed_dim (`int`, *optional*, defaults to 1280):
The projection dimension of the input ids given to the hifi-gan vocoder. Applies to the vocoder only.
lang_embed_dim (`int`, *optional*, defaults to 256):
The projection dimension of the target language given to the hifi-gan vocoder. Applies to the vocoder only.
spkr_embed_dim (`int`, *optional*, defaults to 256):
|
222_15_32
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
spkr_embed_dim (`int`, *optional*, defaults to 256):
The projection dimension of the speaker id given to the hifi-gan vocoder. Applies to the vocoder only.
vocoder_num_langs (`int`, *optional*, defaults to 36):
Number of languages supported by the vocoder. Might be different from `t2u_num_langs`.
vocoder_num_spkrs (`int`, *optional*, defaults to 200):
Number of speakers supported by the vocoder.
variance_predictor_kernel_size (`int`, *optional*, defaults to 3):
|
222_15_33
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
Number of speakers supported by the vocoder.
variance_predictor_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the duration predictor. Applies to the vocoder only.
var_pred_dropout (`float`, *optional*, defaults to 0.5):
The dropout probability of the duration predictor. Applies to the vocoder only.
vocoder_offset (`int`, *optional*, defaults to 4):
Offset the unit token ids by this number to account for symbol tokens. Applies to the vocoder only.
```python
|
222_15_34
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
Offset the unit token ids by this number to account for symbol tokens. Applies to the vocoder only.
```python
>>> from transformers import SeamlessM4TModel, SeamlessM4TConfig
|
222_15_35
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tconfig
|
.md
|
>>> # Initializing a SeamlessM4T "facebook/hf-seamless-m4t-medium" style configuration
>>> configuration = SeamlessM4TConfig()
>>> # Initializing a model from the "facebook/hf-seamless-m4t-medium" style configuration
>>> model = SeamlessM4TModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
222_15_36
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
Construct a SeamlessM4T tokenizer.
Adapted from [`RobertaTokenizer`] and [`XLNetTokenizer`]. Based on
[SentencePiece](https://github.com/google/sentencepiece).
The tokenization method is `<language code> <tokens> <eos>` for source language documents, and `<eos> <language
code> <tokens> <eos>` for target language documents.
Examples:
```python
>>> from transformers import SeamlessM4TTokenizer
|
222_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
>>> tokenizer = SeamlessM4TTokenizer.from_pretrained(
... "facebook/hf-seamless-m4t-medium", src_lang="eng", tgt_lang="fra"
... )
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
```
Args:
vocab_file (`str`):
Path to the vocabulary file.
|
222_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
```
Args:
vocab_file (`str`):
Path to the vocabulary file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
|
222_16_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
222_16_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
222_16_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
222_16_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
tokenizer_file (`str`, *optional*):
The path to a tokenizer file to use instead of the vocab file.
src_lang (`str`, *optional*, defaults to `"eng"`):
The language to use as source language for translation.
tgt_lang (`str`, *optional*, defaults to `"fra"`):
The language to use as target language for translation.
|
222_16_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
tgt_lang (`str`, *optional*, defaults to `"fra"`):
The language to use as target language for translation.
sp_model_kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments to pass to the SentencePiece model initialization.
additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*):
A tuple or a list of additional special tokens. Can be used to specify the list of languages that will be
supported by the tokenizer.
add_prefix_space (`bool`, *optional*, defaults to `True`):
|
222_16_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizer
|
.md
|
supported by the tokenizer.
add_prefix_space (`bool`, *optional*, defaults to `True`):
Whether or not to add an initial space to the input. This allows the leading word to be treated like any
other word.
Methods: __call__
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
222_16_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
Construct a "fast" SeamlessM4T tokenizer (backed by HuggingFace's *tokenizers* library). Based on
[BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
The tokenization method is `<language code> <tokens> <eos>` for source language documents, and `<eos> <language
|
222_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
The tokenization method is `<language code> <tokens> <eos>` for source language documents, and `<eos> <language
code> <tokens> <eos>` for target language documents.
Examples:
```python
>>> from transformers import SeamlessM4TTokenizerFast
|
222_17_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
>>> tokenizer = SeamlessM4TTokenizerFast.from_pretrained(
... "facebook/hf-seamless-m4t-medium", src_lang="eng", tgt_lang="fra"
... )
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
```
Args:
vocab_file (`str`, *optional*):
|
222_17_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
```
Args:
vocab_file (`str`, *optional*):
Path to the vocabulary file.
tokenizer_file (`str`, *optional*):
The path to a tokenizer file to use instead of the vocab file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
|
222_17_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
222_17_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
222_17_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
222_17_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
src_lang (`str`, *optional*, defaults to `"eng"`):
The language to use as source language for translation.
tgt_lang (`str`, *optional*, defaults to `"fra"`):
The language to use as target language for translation.
additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*):
A tuple or a list of additional special tokens.
|
222_17_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttokenizerfast
|
.md
|
A tuple or a list of additional special tokens.
Methods: __call__
|
222_17_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tfeatureextractor
|
.md
|
Constructs a SeamlessM4T feature extractor.
This feature extractor inherits from [`SequenceFeatureExtractor`] which contains most of the main methods. Users
should refer to this superclass for more information regarding those methods.
This class extracts mel-filter bank features from raw speech.
Args:
feature_size (`int`, *optional*, defaults to 80):
The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 16000):
|
222_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tfeatureextractor
|
.md
|
The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
num_mel_bins (`int`, *optional*, defaults to 80):
Number of Mel-frequency bins.
padding_value (`float`, *optional*, defaults to 0.0):
The value that is used to fill the padding vectors.
stride (`int`, *optional*, defaults to 2):
Stride used to reshape audio inputs from shape `(batch_size, num_frames, num_mel_bins)` to
|
222_18_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tfeatureextractor
|
.md
|
stride (`int`, *optional*, defaults to 2):
Stride used to reshape audio inputs from shape `(batch_size, num_frames, num_mel_bins)` to
`(batch_size, num_frames // stride, num_mel_bins * stride)`.
Methods: __call__
|
222_18_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tprocessor
|
.md
|
Constructs a SeamlessM4T processor which wraps a SeamlessM4T feature extractor and a SeamlessM4T tokenizer into a
single processor.
[`SeamlessM4TProcessor`] offers all the functionalities of [`SeamlessM4TFeatureExtractor`] and
[`SeamlessM4TTokenizerFast`]. See the [`~SeamlessM4TProcessor.__call__`] and [`~SeamlessM4TProcessor.decode`] for
more information.
Args:
feature_extractor ([`SeamlessM4TFeatureExtractor`]):
The audio processor is a required input.
tokenizer ([`SeamlessM4TTokenizerFast`]):
|
222_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tprocessor
|
.md
|
The audio processor is a required input.
tokenizer ([`SeamlessM4TTokenizerFast`]):
The tokenizer is a required input.
Methods: __call__
|
222_19_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tcodehifigan
|
.md
|
Code HiFi-GAN vocoder as described in this [repository](https://github.com/facebookresearch/speech-resynthesis).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
222_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tcodehifigan
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SeamlessM4TConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
222_20_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4tcodehifigan
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
222_20_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4thifigan
|
.md
|
No docstring available for SeamlessM4THifiGan
|
222_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttexttounitmodel
|
.md
|
The bare text-to-unit Transformer encoder-decoder. The encoder is a [`SeamlessM4TEncoder`] without embeddings and the decoder is a [`SeamlessM4TDecoder`].
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
|
222_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttexttounitmodel
|
.md
|
behavior.
Parameters:
config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
embed_tokens_decoder (`nn.Embedding`, *optional*): Input embedding of the decoder.
|
222_22_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttexttounitforconditionalgeneration
|
.md
|
Text-to-unit Transformer encoder-decoder with a language modeling head. The base encoder-decoder model is a [`SeamlessM4TTextToUnit`].
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
|
222_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t.md
|
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t/#seamlessm4ttexttounitforconditionalgeneration
|
.md
|
behavior.
Parameters:
config ([`~SeamlessM4TConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
embed_tokens_decoder (`nn.Embedding`, *optional*): Input embedding of the decoder.
|
222_23_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
223_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/
|
.md
|
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
specific language governing permissions and limitations under the License. -->
|
223_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#overview
|
.md
|
The ImageGPT model was proposed in [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt) by Mark
Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like
model trained to predict the next pixel value, allowing for both unconditional and conditional image generation.
The abstract from the paper is the following:
*Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models
|
223_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#overview
|
.md
|
*Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models
can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels,
without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels,
we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and
|
223_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#overview
|
.md
|
we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and
low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide
ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. We are also
competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0%
|
223_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#overview
|
.md
|
competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0%
top-1 accuracy on a linear probe of our features.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/imagegpt_architecture.png"
alt="drawing" width="600"/>
<small> Summary of the approach. Taken from the [original paper](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf). </small>
|
223_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#overview
|
.md
|
This model was contributed by [nielsr](https://huggingface.co/nielsr), based on [this issue](https://github.com/openai/image-gpt/issues/7). The original code can be found
[here](https://github.com/openai/image-gpt).
|
223_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#usage-tips
|
.md
|
- ImageGPT is almost exactly the same as [GPT-2](gpt2), with the exception that a different activation
function is used (namely "quick gelu"), and the layer normalization layers don't mean-center the inputs. ImageGPT
also doesn't have tied input and output embeddings.
- As the time- and memory requirements of the attention mechanism of Transformers scales quadratically in the sequence
length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a
|
223_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#usage-tips
|
.md
|
length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a
sequence of 32x32x3=3072 tokens from 0..255 into a Transformer is still prohibitively large. Therefore, the authors
applied k-means clustering to the (R,G,B) pixel values with k=512. This way, we only have a 32*32 = 1024-long
sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger
|
223_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#usage-tips
|
.md
|
sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger
embedding matrix. In other words, the vocabulary size of ImageGPT is 512, plus 1 for a special "start of sentence" (SOS)
token, used at the beginning of every sequence. One can use [`ImageGPTImageProcessor`] to prepare
images for the model.
- Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly
|
223_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#usage-tips
|
.md
|
- Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly
performant image features useful for downstream tasks, such as image classification. The authors showed that the
features in the middle of the network are the most performant, and can be used as-is to train a linear model (such as
a sklearn logistic regression model for example). This is also referred to as "linear probing". Features can be
|
223_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#usage-tips
|
.md
|
a sklearn logistic regression model for example). This is also referred to as "linear probing". Features can be
easily obtained by forwarding the image through the model with `output_hidden_states=True`, and
then average-pooling the hidden states at whatever layer you like.
- Alternatively, one can further fine-tune the entire model on a downstream dataset, similar to BERT. For this, you can
use [`ImageGPTForImageClassification`].
|
223_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#usage-tips
|
.md
|
use [`ImageGPTForImageClassification`].
- ImageGPT comes in different sizes: there's ImageGPT-small, ImageGPT-medium and ImageGPT-large. The authors also
trained an XL variant, which they didn't release. The differences in size are summarized in the following table:
| **Model variant** | **Layers** | **Hidden size** | **Params (M)** |
|---|---|---|---|
| ImageGPT-small | 24 | 512 | 76 |
|
223_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#usage-tips
|
.md
|
|---|---|---|---|
| ImageGPT-small | 24 | 512 | 76 |
| ImageGPT-medium | 36 | 1024 | 455 |
| ImageGPT-large | 48 | 1536 | 1400 |
|
223_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ImageGPT.
<PipelineTag pipeline="image-classification"/>
- Demo notebooks for ImageGPT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ImageGPT).
|
223_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#resources
|
.md
|
- Demo notebooks for ImageGPT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ImageGPT).
- [`ImageGPTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)
|
223_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#resources
|
.md
|
- See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
223_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
This is the configuration class to store the configuration of an [`ImageGPTModel`] or a [`TFImageGPTModel`]. It is
used to instantiate a GPT-2 model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the ImageGPT
[openai/imagegpt-small](https://huggingface.co/openai/imagegpt-small) architecture.
|
223_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
[openai/imagegpt-small](https://huggingface.co/openai/imagegpt-small) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 512):
Vocabulary size of the ImageGPT model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`ImageGPTModel`] or [`TFImageGPTModel`].
|
223_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
`input_ids` passed when calling [`ImageGPTModel`] or [`TFImageGPTModel`].
n_positions (`int`, *optional*, defaults to 32*32):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (`int`, *optional*, defaults to 512):
Dimensionality of the embeddings and hidden states.
n_layer (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer encoder.
|
223_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
n_layer (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
n_inner (`int`, *optional*, defaults to None):
Dimensionality of the inner feed-forward layers. `None` will set it to 4 times `n_embd`.
activation_function (`str`, *optional*, defaults to `"quick_gelu"`):
|
223_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
activation_function (`str`, *optional*, defaults to `"quick_gelu"`):
Activation function (can be one of the activation functions defined in src/transformers/activations.py).
resid_pdrop (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (`int`, *optional*, defaults to 0.1):
The dropout ratio for the embeddings.
attn_pdrop (`float`, *optional*, defaults to 0.1):
|
223_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
The dropout ratio for the embeddings.
attn_pdrop (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
The epsilon to use in the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_attn_weights (`bool`, *optional*, defaults to `True`):
|
223_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
scale_attn_weights (`bool`, *optional*, defaults to `True`):
Scale attention weights by dividing by sqrt(hidden_size).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
scale_attn_by_inverse_layer_idx (`bool`, *optional*, defaults to `False`):
Whether to additionally scale attention weights by `1 / (layer_idx + 1)`.
reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`):
|
223_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`):
Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention
dot-product/softmax to float32 when training with mixed precision.
Example:
```python
>>> from transformers import ImageGPTConfig, ImageGPTModel
|
223_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptconfig
|
.md
|
>>> # Initializing an ImageGPT configuration
>>> configuration = ImageGPTConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = ImageGPTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
223_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptfeatureextractor
|
.md
|
No docstring available for ImageGPTFeatureExtractor
Methods: __call__
|
223_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptimageprocessor
|
.md
|
Constructs an ImageGPT image processor. This image processor can be used to resize images to a smaller resolution
(such as 32x32 or 64x64), normalize them and finally color quantize them to obtain sequences of "pixel values"
(color clusters).
Args:
clusters (`np.ndarray` or `List[List[int]]`, *optional*):
The color clusters to use, of shape `(n_clusters, 3)` when color quantizing. Can be overridden by `clusters`
in `preprocess`.
do_resize (`bool`, *optional*, defaults to `True`):
|
223_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptimageprocessor
|
.md
|
in `preprocess`.
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's dimensions to `(size["height"], size["width"])`. Can be overridden by
`do_resize` in `preprocess`.
size (`Dict[str, int]`, *optional*, defaults to `{"height": 256, "width": 256}`):
Size of the image after resizing. Can be overridden by `size` in `preprocess`.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
|
223_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptimageprocessor
|
.md
|
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image pixel values to the range [-1, 1]. Can be overridden by `do_normalize` in
`preprocess`.
do_color_quantize (`bool`, *optional*, defaults to `True`):
Whether to color quantize the image. Can be overridden by `do_color_quantize` in `preprocess`.
|
223_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptimageprocessor
|
.md
|
Whether to color quantize the image. Can be overridden by `do_color_quantize` in `preprocess`.
Methods: preprocess
|
223_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptmodel
|
.md
|
The bare ImageGPT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
223_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ImageGPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
223_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
223_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptforcausalimagemodeling
|
.md
|
The ImageGPT Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
223_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptforcausalimagemodeling
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ImageGPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
223_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptforcausalimagemodeling
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
223_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptforimageclassification
|
.md
|
The ImageGPT Model transformer with an image classification head on top (linear layer).
[`ImageGPTForImageClassification`] average-pools the hidden states in order to do the classification.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
223_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptforimageclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ImageGPTConfig`]): Model configuration class with all the parameters of the model.
|
223_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/imagegpt.md
|
https://huggingface.co/docs/transformers/en/model_doc/imagegpt/#imagegptforimageclassification
|
.md
|
and behavior.
Parameters:
config ([`ImageGPTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
223_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
224_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
224_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#nezha
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip>
|
224_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#overview
|
.md
|
The Nezha model was proposed in [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei et al.
The abstract from the paper is the following:
*The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks
due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora.
|
224_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nezha.md
|
https://huggingface.co/docs/transformers/en/model_doc/nezha/#overview
|
.md
|
due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora.
In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed
representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks.
The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional
|
224_2_1
|