source (string, 470 classes) | url (string, 49 to 167 chars) | file_type (string, 1 class) | chunk (string, 1 to 512 chars) | chunk_id (string, 5 to 9 chars)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnettokenizer
|
.md
|
[issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original BERT).
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
|
242_6_5
|
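As a minimal, hypothetical sketch (not part of the dataset row above), the tokenizer options described in that chunk could be passed to `ProphetNetTokenizer` like this; the checkpoint name and sample string are assumptions:
```python
# Hypothetical illustration of the strip_accents / clean_up_tokenization_spaces options above.
from transformers import ProphetNetTokenizer

# Assumed checkpoint; any ProphetNet tokenizer checkpoint behaves the same way.
tokenizer = ProphetNetTokenizer.from_pretrained(
    "microsoft/prophetnet-large-uncased",
    strip_accents=True,                  # strip accents regardless of the `lowercase` value
    clean_up_tokenization_spaces=True,   # remove artifacts such as extra spaces when decoding
)

ids = tokenizer("Héllo , world !").input_ids
print(tokenizer.decode(ids, skip_special_tokens=True))
```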
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput
Base class for sequence-to-sequence language model outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss.
logits (`torch.FloatTensor` of shape `(batch_size, decoder_sequence_length, config.vocab_size)`):
Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
|
242_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (`torch.FloatTensor` of shape `(batch_size, ngram * decoder_sequence_length, config.vocab_size)`):
Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
|
242_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size,
num_attn_heads, decoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see `past_key_values` input) to speed up sequential decoding.
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
242_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, decoder_sequence_length, hidden_size)`.
Hidden-states of the main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
242_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, ngram * decoder_sequence_length, hidden_size)`.
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
242_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
242_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
242_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
encoder_sequence_length, decoder_sequence_length)`.
Attention weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
242_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, encoder_sequence_length, hidden_size)`.
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
|
242_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
encoder_sequence_length, encoder_sequence_length)`. Attention weights of the encoder, after the attention
|
242_7_9
|
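The rows above describe `ProphetNetSeq2SeqLMOutput`. A minimal, hypothetical sketch of inspecting those fields (the checkpoint and input strings are assumptions, not taken from the chunks):
```python
# Hypothetical sketch: accessing ProphetNetSeq2SeqLMOutput fields described above.
import torch
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

inputs = tokenizer("ProphetNet predicts future n-grams.", return_tensors="pt")
labels = tokenizer("It has a main stream and a predict stream.", return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model(**inputs, labels=labels, output_hidden_states=True, output_attentions=True)

print(outputs.loss)                        # language modeling loss
print(outputs.logits.shape)                # (batch_size, decoder_sequence_length, vocab_size)
print(outputs.logits_ngram.shape)          # (batch_size, ngram * decoder_sequence_length, vocab_size)
print(len(outputs.decoder_hidden_states))  # embedding output + one entry per decoder layer
```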
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
encoder_sequence_length, encoder_sequence_length)`. Attention weights of the encoder, after the attention
softmax, used to compute the weighted average in the self-attention heads.
models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput
Base class for model encoder's outputs that also contains pre-computed hidden states that can speed up sequential
decoding.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, decoder_sequence_length, hidden_size)`):
|
242_7_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
decoding.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, decoder_sequence_length, hidden_size)`):
Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
hidden_size)` is output.
last_hidden_state_ngram (`torch.FloatTensor` of shape `(batch_size, ngram * decoder_sequence_length, config.vocab_size)`, *optional*):
|
242_7_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size,
num_attn_heads, decoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
|
242_7_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see `past_key_values` input) to speed up sequential decoding.
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, decoder_sequence_length, hidden_size)`.
|
242_7_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
shape `(batch_size, decoder_sequence_length, hidden_size)`.
Hidden-states of the main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, ngram * decoder_sequence_length, hidden_size)`.
|
242_7_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
shape `(batch_size, ngram * decoder_sequence_length, hidden_size)`.
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
|
242_7_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
|
242_7_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
encoder_sequence_length, decoder_sequence_length)`.
|
242_7_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
encoder_sequence_length, decoder_sequence_length)`.
Attention weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
242_7_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, encoder_sequence_length, hidden_size)`.
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
|
242_7_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
encoder_sequence_length, encoder_sequence_length)`.
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
|
242_7_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput
Base class for model outputs that may also contain past key/values (to speed up sequential decoding).
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, decoder_sequence_length, hidden_size)`):
|
242_7_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, decoder_sequence_length, hidden_size)`):
Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
hidden_size)` is output.
last_hidden_state_ngram (`torch.FloatTensor` of shape `(batch_size, ngram * decoder_sequence_length, config.vocab_size)`):
|
242_7_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
last_hidden_state_ngram (`torch.FloatTensor` of shape `(batch_size, ngram * decoder_sequence_length, config.vocab_size)`):
Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size,
|
242_7_23
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size,
num_attn_heads, decoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see `past_key_values` input) to speed up sequential decoding.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
242_7_24
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, decoder_sequence_length, hidden_size)`.
Hidden-states of the main stream of the decoder at the output of each layer plus the initial embedding outputs.
ngram_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
242_7_25
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, ngram * decoder_sequence_length, hidden_size)`.
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
242_7_26
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
|
242_7_27
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
|
242_7_28
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
encoder_sequence_length, decoder_sequence_length)`.
Attention weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput
Base class for model outputs that may also contain past key/values (to speed up sequential decoding).
Args:
|
242_7_29
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Base class for model outputs that may also contain past key/values (to speed up sequential decoding).
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss.
logits (`torch.FloatTensor` of shape `(batch_size, decoder_sequence_length, config.vocab_size)`):
Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
|
242_7_30
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (`torch.FloatTensor` of shape `(batch_size, ngram * decoder_sequence_length, config.vocab_size)`):
Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
|
242_7_31
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size,
num_attn_heads, decoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see `past_key_values` input) to speed up sequential decoding.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
242_7_32
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, decoder_sequence_length, hidden_size)`.
Hidden-states of the main stream of the decoder at the output of each layer plus the initial embedding outputs.
ngram_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
242_7_33
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, ngram * decoder_sequence_length, hidden_size)`.
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
242_7_34
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
|
242_7_35
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
decoder_sequence_length, decoder_sequence_length)`.
Attention weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
|
242_7_36
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads,
encoder_sequence_length, decoder_sequence_length)`.
Attention weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
|
242_7_37
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetmodel
|
.md
|
The bare ProphetNet Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
|
242_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetmodel
|
.md
|
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
|
242_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetmodel
|
.md
|
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ProphetNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
242_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
242_8_3
|
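A minimal usage sketch for the bare `ProphetNetModel` documented above (the checkpoint and example strings are assumptions):
```python
# Hypothetical sketch: bare ProphetNetModel returning main-stream and predict-stream hidden states.
from transformers import ProphetNetModel, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetModel.from_pretrained("microsoft/prophetnet-large-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
decoder_inputs = tokenizer("Hello", return_tensors="pt")

outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids)
print(outputs.last_hidden_state.shape)        # main stream of the decoder
print(outputs.last_hidden_state_ngram.shape)  # predict stream of the decoder
```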
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetencoder
|
.md
|
The standalone encoder part of the ProphetNetModel.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
|
242_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetencoder
|
.md
|
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
242_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetencoder
|
.md
|
behavior.
Parameters:
config ([`ProphetNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
word_embeddings (`torch.nn.Embedding` of shape `(config.vocab_size, config.hidden_size)`, *optional*):
|
242_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetencoder
|
.md
|
word_embeddings (`torch.nn.Embedding` of shape `(config.vocab_size, config.hidden_size)`, *optional*):
The word embedding parameters. This can be used to initialize [`ProphetNetEncoder`] with pre-defined word
embeddings instead of randomly initialized word embeddings.
Methods: forward
|
242_9_3
|
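A minimal, hypothetical sketch of the standalone encoder described above (the checkpoint names are assumptions):
```python
# Hypothetical sketch: standalone ProphetNetEncoder.
from transformers import ProphetNetEncoder, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
encoder = ProphetNetEncoder.from_pretrained("patrickvonplaten/prophetnet-large-uncased-standalone")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = encoder(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, encoder_sequence_length, hidden_size)
```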
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetdecoder
|
.md
|
The standalone decoder part of the ProphetNetModel.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
|
242_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetdecoder
|
.md
|
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
242_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetdecoder
|
.md
|
behavior.
Parameters:
config ([`ProphetNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
word_embeddings (`torch.nn.Embedding` of shape `(config.vocab_size, config.hidden_size)`, *optional*):
|
242_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetdecoder
|
.md
|
word_embeddings (`torch.nn.Embedding` of shape `(config.vocab_size, config.hidden_size)`, *optional*):
The word embedding parameters. This can be used to initialize [`ProphetNetDecoder`] with pre-defined word
embeddings instead of randomly initialized word embeddings.
Methods: forward
|
242_10_3
|
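Similarly, a minimal sketch of the standalone decoder (the checkpoint and `add_cross_attention` flag are assumptions based on common usage):
```python
# Hypothetical sketch: standalone ProphetNetDecoder used without cross-attention.
from transformers import ProphetNetDecoder, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
decoder = ProphetNetDecoder.from_pretrained("microsoft/prophetnet-large-uncased", add_cross_attention=False)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = decoder(**inputs)
print(outputs.last_hidden_state.shape)        # main-stream decoder hidden states
print(outputs.last_hidden_state_ngram.shape)  # predict-stream decoder hidden states
```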
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforconditionalgeneration
|
.md
|
The ProphetNet Model with a language modeling head. Can be used for sequence generation tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
|
242_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforconditionalgeneration
|
.md
|
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
|
242_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforconditionalgeneration
|
.md
|
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ProphetNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
242_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforconditionalgeneration
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
242_11_3
|
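A minimal, hypothetical generation sketch for `ProphetNetForConditionalGeneration` (checkpoint, prompt, and generation settings are assumptions):
```python
# Hypothetical sketch: sequence generation with ProphetNetForConditionalGeneration.
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

inputs = tokenizer("ProphetNet is a sequence-to-sequence model.", return_tensors="pt")
generated_ids = model.generate(inputs.input_ids, num_beams=4, max_length=32, early_stopping=True)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```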
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforcausallm
|
.md
|
The standalone decoder part of the ProphetNetModel with a lm head on top. The model can be used for causal language modeling.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
|
242_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforcausallm
|
.md
|
etc.)
Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
|
242_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforcausallm
|
.md
|
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ProphetNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
242_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
242_12_3
|
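A minimal, hypothetical sketch for the causal LM variant described above (checkpoint and prompt are assumptions):
```python
# Hypothetical sketch: causal language modeling with the decoder-only ProphetNetForCausalLM.
from transformers import ProphetNetForCausalLM, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForCausalLM.from_pretrained("microsoft/prophetnet-large-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs.input_ids)
print(outputs.loss)          # causal LM loss
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```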
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
243_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
243_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#overview
|
.md
|
We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will
be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge,
the largest dense autoregressive model that has publicly available weights at the time of submission. In this work,
we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding,
|
243_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#overview
|
.md
|
we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding,
mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and
gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source
the training and evaluation code, as well as the model weights, at [https://github.com/EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
|
243_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#overview
|
.md
|
Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with
the generous support of [CoreWeave](https://www.coreweave.com/).
GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows:
```python
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b").half().cuda()
```
GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates
|
243_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#overview
|
.md
|
```
GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates
additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation.
|
243_1_3
|
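As a small, hypothetical illustration of the tokenizer note above (the snippet and its output are assumptions, not from the chunk):
```python
# Hypothetical sketch: the GPT-NeoX tokenizer keeps runs of whitespace as dedicated tokens,
# which is what makes it better suited to code than the GPT-J-6B / GPT-Neo tokenizer.
from transformers import GPTNeoXTokenizerFast

tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
snippet = "def add(a, b):\n    return a + b"
print(tokenizer.tokenize(snippet))  # indentation survives as whitespace tokens
```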
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#usage-example
|
.md
|
The `generate()` method can be used to generate text using the GPT-NeoX model.
```python
>>> from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
>>> model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
>>> tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
>>> prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
|
243_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#usage-example
|
.md
|
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
>>> gen_tokens = model.generate(
... input_ids,
... do_sample=True,
... temperature=0.9,
... max_length=100,
... )
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
|
243_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#using-flash-attention-2
|
.md
|
Flash Attention 2 is a faster, optimized version of the attention used by the model.
|
243_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#installation
|
.md
|
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer).
|
243_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#installation
|
.md
|
Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
|
243_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#usage
|
.md
|
To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). We'll also load the model in half-precision (e.g. `torch.float16`), since it results in almost no degradation in output quality but significantly lower memory usage and faster inference:
```python
|
243_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#usage
|
.md
|
```python
>>> from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
|
243_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#usage
|
.md
|
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(device)
...
```
|
243_5_2
|
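The chunk above shows only the `from_pretrained` call; a self-contained sketch of the same idea follows (the `device` variable and prompt are assumptions, the checkpoint and arguments come from the chunk):
```python
# Hypothetical, self-contained version of the Flash Attention 2 loading snippet above.
import torch
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

device = "cuda"  # Flash Attention 2 requires a supported GPU
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(device)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```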
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#expected-speedups
|
.md
|
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `stockmark/gpt-neox-japanese-1.4b` checkpoint and the Flash Attention 2 version of the model using a sequence length of 2048.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/gpt-neox-1.8b-speedup.jpg">
</div>
|
243_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
243_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", torch_dtype=torch.float16, attn_implementation="sdpa")
...
```
|
243_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
On a local benchmark (rtx3080ti-16GB, PyTorch 2.2.1, OS Ubuntu 22.04) using `float16` with
[pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped), we saw the
following speedups during training and inference.
|
243_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#training
|
.md
|
| Batch size | Seq len | Time per batch (Eager - s) | Time per batch (SDPA - s) | Speedup (%) | Eager peak mem (MB) | SDPA peak mem (MB) | Mem saving (%) |
|-----------:|-----------:|---------------------------:|-----------------------------:|------------:|--------------------:|-------------------:|------------------:|
| 1 | 128 | 0.024 | 0.019 | 28.945 | 1789.95 | 1789.95 | 0 |
|
243_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#training
|
.md
|
| 1 | 256 | 0.039 | 0.031 | 23.18 | 1845.83 | 1844.84 | 0.053 |
| 1 | 512 | 0.08 | 0.055 | 45.524 | 2278.38 | 1953.76 | 16.615 |
| 1 | 1024 | 0.19 | 0.102 | 86.777 | 4772.36 | 2408.35 | 98.159 |
|
243_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#training
|
.md
|
| 1 | 2048 | 0.565 | 0.204 | 177.098 | 13484.1 | 3882.01 | 247.348 |
| 2 | 128 | 0.037 | 0.032 | 15.121 | 1843.86 | 1844.78 | -0.05 |
| 2 | 256 | 0.067 | 0.055 | 21.706 | 1999.72 | 1951.67 | 2.462 |
|
243_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#training
|
.md
|
| 2 | 512 | 0.144 | 0.096 | 50.046 | 3613.16 | 2406.77 | 50.125 |
| 2 | 1024 | 0.366 | 0.193 | 89.666 | 8707.55 | 3878.86 | 124.487 |
| 2 | 2048 | OOM | 0.379 | / | OOM | 6825.13 | SDPA does not OOM |
|
243_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#training
|
.md
|
| 4 | 128 | 0.06 | 0.054 | 11.539 | 1947.6 | 1952.06 | -0.228 |
| 4 | 256 | 0.119 | 0.093 | 28.072 | 3008.39 | 2405.99 | 25.038 |
| 4 | 512 | 0.275 | 0.187 | 47.145 | 6290.58 | 3877.29 | 62.242 |
|
243_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#training
|
.md
|
| 4 | 1024 | OOM | 0.36 | / | OOM | 6821.98 | SDPA does not OOM |
| 4 | 2048 | OOM | 0.731 | / | OOM | 12705.1 | SDPA does not OOM |
|
243_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| Batch size | Seq len | Per token latency Eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem Eager (MB) | Mem SDPA (MB) | Mem saved (%) |
|--------------:|-------------:|--------------------------------:|-------------------------------:|---------------:|------------------:|----------------:|-----------------:|
|
243_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 1 | 128 | 6.569 | 5.858 | 12.14 | 974.831 | 974.826 | 0 |
| 1 | 256 | 7.009 | 5.863 | 19.542 | 1029.01 | 1028.08 | 0.09 |
|
243_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 1 | 512 | 7.157 | 5.965 | 19.983 | 1137.54 | 1137.52 | 0.001 |
| 1 | 1024 | 7.523 | 6.506 | 15.637 | 1329.3 | 1329.26 | 0.003 |
|
243_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 1 | 2048 | 9.271 | 9.205 | 0.713 | 1752.47 | 1734.51 | 1.036 |
| 2 | 128 | 7.239 | 5.959 | 21.493 | 1044.8 | 1028.37 | 1.597 |
|
243_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 2 | 256 | 7.228 | 6.036 | 19.757 | 1167.32 | 1137.73 | 2.601 |
| 2 | 512 | 7.538 | 6.693 | 12.628 | 1352.93 | 1329.55 | 1.758 |
|
243_9_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 2 | 1024 | 8.916 | 8.632 | 3.291 | 1752.56 | 1734.62 | 1.034 |
| 2 | 2048 | 12.628 | 12.606 | 0.181 | 2558.72 | 2545.8 | 0.508 |
|
243_9_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 4 | 128 | 7.278 | 6.046 | 20.373 | 1168.41 | 1137.79 | 2.691 |
| 4 | 256 | 7.614 | 6.588 | 15.574 | 1353.1 | 1329.79 | 1.753 |
|
243_9_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 4 | 512 | 8.798 | 8.144 | 8.028 | 1752.76 | 1734.85 | 1.032 |
| 4 | 1024 | 11.765 | 11.303 | 4.09 | 2558.96 | 2546.04 | 0.508 |
|
243_9_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#inference
|
.md
|
| 4 | 2048 | 19.568 | 17.735 | 10.33 | 4175.5 | 4165.26 | 0.246 |
|
243_9_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#resources
|
.md
|
- [Causal language modeling task guide](../tasks/language_modeling)
|
243_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
This is the configuration class to store the configuration of a [`GPTNeoXModel`]. It is used to instantiate an
GPTNeoX model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the GPTNeoX
[EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
243_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50432):
Vocabulary size of the GPTNeoX model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`GPTNeoXModel`].
hidden_size (`int`, *optional*, defaults to 6144):
Dimension of the encoder layers and the pooler layer.
|
243_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 6144):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 44):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 64):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 24576):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
243_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
rotary_pct (`float`, *optional*, defaults to 0.25):
Percentage of hidden dimensions to allocate to rotary embeddings.
rotary_emb_base (`int`, *optional*, defaults to 10000):
|
243_11_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
Percentage of hidden dimensions to allocate to rotary embeddings.
rotary_emb_base (`int`, *optional*, defaults to 10000):
Base for computing the rotary embedding frequencies.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the attention scores.
hidden_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio of (1) the word embeddings, (2) the post-attention hidden states, and (3) the post-mlp
hidden states.
|
243_11_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
The dropout ratio of (1) the word embeddings, (2) the post-attention hidden states, and (3) the post-mlp
hidden states.
classifier_dropout (`float`, *optional*, defaults to 0.1):
Argument used when doing token classification, used in the model [`GPTNeoXForTokenClassification`].
The dropout ratio for the hidden layer.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
243_11_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox/#gptneoxconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (`float`, *optional*, defaults to 1e-5):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
|
243_11_6
|
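As a small, hypothetical sketch of the `GPTNeoXConfig` usage described in the rows above (illustrative only, not from the dataset):
```python
# Hypothetical sketch: building a GPT-NeoX model from the configuration described above.
from transformers import GPTNeoXConfig, GPTNeoXModel

config = GPTNeoXConfig()      # defaults mirror the EleutherAI/gpt-neox-20b architecture
model = GPTNeoXModel(config)  # initializes weights randomly; only the configuration is used
print(config.vocab_size, config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
```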