Dataset columns:
- source (string, 470 distinct values)
- url (string, length 49-167)
- file_type (string, 1 distinct value)
- chunk (string, length 1-512)
- chunk_id (string, length 5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
.md
The maximum sequence length that the encoder might ever be used with. max_decoder_position_embeddings (`int`, *optional*, defaults to 16384): The maximum sequence length that the decoder might ever be used with. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (`float`, *optional*, defaults to 0.0):
301_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
.md
encoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. use_cache (`bool`, *optional*, defaults to `True`):
301_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
.md
for more details. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Example: ```python >>> from transformers import LEDModel, LEDConfig
301_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledconfig
.md
>>> # Initializing a LED allenai/led-base-16384 style configuration >>> configuration = LEDConfig() >>> # Initializing a model from the allenai/led-base-16384 style configuration >>> model = LEDModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
301_4_9
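As a minimal sketch of the configuration fields described above, the documented defaults (such as `encoder_layerdrop`, `decoder_layerdrop` and `use_cache`) can be overridden when constructing the config; the particular values below are illustrative only, not recommendations:

```python
from transformers import LEDConfig, LEDModel

# Override a few documented defaults; values are purely illustrative.
config = LEDConfig(encoder_layerdrop=0.1, decoder_layerdrop=0.1, use_cache=False)

# Builds a randomly initialized model from the configuration (no pretrained weights).
model = LEDModel(config)

print(config.encoder_layerdrop, config.use_cache)
```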
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
Constructs a LED tokenizer, which is similar to the RoBERTa tokenizer, using byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not: ```python >>> from transformers import LEDTokenizer >>> tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384") >>> tokenizer("Hello world")["input_ids"] [0, 31414, 232, 2]
301_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
>>> tokenizer(" Hello world")["input_ids"] [0, 20920, 232, 2] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip>
301_5_1
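A minimal sketch of the `add_prefix_space=True` workaround mentioned above, passed at instantiation time; as the docs note, the model was not pretrained this way, so this may cost some performance:

```python
from transformers import LEDTokenizer

# Opt into prefix-space handling so a leading word is encoded the same way
# as it would be mid-sentence.
tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384", add_prefix_space=True)
print(tokenizer("Hello world")["input_ids"])
```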
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). </Tip> This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See
301_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of
301_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
<Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`):
301_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`):
301_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
301_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
301_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizer
.md
modeling. This is the token which the model will try to predict. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word (the BART tokenizer detects the beginning of words by the preceding space). Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
301_5_8
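A small sketch exercising two of the methods listed above, `build_inputs_with_special_tokens` and `get_special_tokens_mask`, on an encoding produced without special tokens:

```python
from transformers import LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")

# Encode without special tokens, then add <s> ... </s> via the documented helper.
ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
with_special = tokenizer.build_inputs_with_special_tokens(ids)

# The mask marks which positions of the built sequence are special tokens.
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=False)
print(with_special)
print(mask)
```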
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
Construct a "fast" LED tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: ```python >>> from transformers import LEDTokenizerFast
301_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
>>> tokenizer = LEDTokenizerFast.from_pretrained("allenai/led-base-16384") >>> tokenizer("Hello world")["input_ids"] [0, 31414, 232, 2]
301_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
>>> tokenizer(" Hello world")["input_ids"] [0, 20920, 232, 2] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`. </Tip>
301_6_2
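A minimal sketch of the tip above: with pre-tokenized (word-split) input, the fast tokenizer has to be instantiated with `add_prefix_space=True`:

```python
from transformers import LEDTokenizerFast

# Pre-tokenized input requires add_prefix_space=True, per the tip above.
tokenizer = LEDTokenizerFast.from_pretrained(
    "allenai/led-base-16384", add_prefix_space=True
)
encoding = tokenizer(["Hello", "world"], is_split_into_words=True)
print(encoding["input_ids"])
```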
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`. </Tip> This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`):
301_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip>
301_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`.
301_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`):
301_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
301_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
301_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledtokenizerfast
.md
modeling. This is the token which the model will try to predict. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word (the LED tokenizer detects the beginning of words by the preceding space). trim_offsets (`bool`, *optional*, defaults to `True`): Whether the post-processing step should trim offsets to avoid including whitespaces.
301_6_9
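A hedged sketch of the offsets that `trim_offsets` post-processes: fast tokenizers can return a character offset per token via `return_offsets_mapping=True`, and `trim_offsets` controls whether leading whitespace is included in those spans:

```python
from transformers import LEDTokenizerFast

tokenizer = LEDTokenizerFast.from_pretrained("allenai/led-base-16384")

# Offsets map each token back to character positions in the input string.
encoding = tokenizer(" Hello world", return_offsets_mapping=True)
print(list(zip(encoding["input_ids"], encoding["offset_mapping"])))
```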
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
models.led.modeling_led.LEDEncoderBaseModelOutput Base class for LEDEncoder's outputs, with potential hidden states, local and global attentions. Args: last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the model. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
301_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
301_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + attention_window + 1)`, where `x` is the number of tokens with global attention mask. Local attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first `x` values) and to every token in the attention window (remaining `attention_window
301_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
global attention (first `x` values) and to every token in the attention window (remaining `attention_window + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
301_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
301_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global attention, the attention weights to all other tokens in `attentions` are set to 0; the values should be accessed from `global_attentions`. global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
301_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence. models.led.modeling_led.LEDSeq2SeqModelOutput
301_7_6
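A minimal sketch of how these local/global attention outputs can be inspected, assuming the `allenai/led-base-16384` checkpoint; the exact attention shapes depend on how the encoder pads the input to a multiple of the attention window:

```python
import torch
from transformers import LEDTokenizer, LEDModel

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDModel.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("A long document goes here.", return_tensors="pt")

# Mark the first token (<s>) as globally attending; all other tokens attend locally.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(
    **inputs,
    decoder_input_ids=inputs["input_ids"][:, :1],  # minimal decoder input for a forward pass
    global_attention_mask=global_attention_mask,
    output_attentions=True,
)

# One local-attention and one global-attention tensor per encoder layer.
print(len(outputs.encoder_attentions), outputs.encoder_global_attentions[0].shape)
```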
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
in the sequence. models.led.modeling_led.LEDSeq2SeqModelOutput Base class for model encoder's outputs that also contains pre-computed hidden states that can speed up sequential decoding. Args: last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the decoder of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
301_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output. past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`.
301_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see `past_key_values` input) to speed up sequential decoding. decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
301_7_9
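A short sketch of the `use_cache` behavior described here, assuming the `allenai/led-base-16384` checkpoint and its configured `decoder_start_token_id`; `generate()` does this cache bookkeeping automatically:

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("Some long input document.", return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

# Request the decoder cache on the first step; it can be fed back via the
# `past_key_values` argument on subsequent steps to avoid recomputation.
out = model(**inputs, decoder_input_ids=decoder_input_ids, use_cache=True)
print(out.past_key_values is not None)
```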
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
301_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
301_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder of the model.
301_7_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
301_7_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
301_7_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
self-attention heads. encoder_global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the
301_7_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence. models.led.modeling_led.LEDSeq2SeqLMOutput Base class for sequence-to-sequence language model outputs. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Language modeling loss.
301_7_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Language modeling loss. logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
301_7_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see `past_key_values` input) to speed up sequential decoding. decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
301_7_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
301_7_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
301_7_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder of the model.
301_7_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
301_7_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
301_7_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
self-attention heads. encoder_global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the
301_7_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence. models.led.modeling_led.LEDSeq2SeqSequenceClassifierOutput Base class for outputs of sequence-to-sequence sentence classification models. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `label` is provided):
301_7_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `label` is provided): Classification (or regression if config.num_labels==1) loss. logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`): Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
301_7_26
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see `past_key_values` input) to speed up sequential decoding. decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
301_7_27
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
301_7_28
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
301_7_29
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder of the model.
301_7_30
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
301_7_31
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
301_7_32
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
self-attention heads. encoder_global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the
301_7_33
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence. models.led.modeling_led.LEDSeq2SeqQuestionAnsweringModelOutput Base class for outputs of sequence-to-sequence question answering models. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
301_7_34
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Span-start scores (before SoftMax). end_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Span-end scores (before SoftMax).
301_7_35
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
end_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Span-end scores (before SoftMax). past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
301_7_36
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see `past_key_values` input) to speed up sequential decoding. decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
301_7_37
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.
301_7_38
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
301_7_39
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder of the model.
301_7_40
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
301_7_41
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
301_7_42
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
self-attention heads. encoder_global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` is the number of tokens with global attention mask. Global attention weights after the attention softmax, used to compute the weighted average in the
301_7_43
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
Global attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence. [[autodoc]] models.led.modeling_tf_led.TFLEDEncoderBaseModelOutput
301_7_44
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
[[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput
301_7_45
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
[[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput
301_7_46
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
[[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput
301_7_47
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#led-specific-outputs
.md
with "TF", but are otherwise identically named to our TF classes. If you want to use PyTorch, please use those classes instead! If you really do want to use TensorFlow, please follow the instructions on the installation page https://www.tensorflow.org/install that match your environment. <frameworkcontent> <pt>
301_7_48
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledmodel
.md
The bare LED Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. See the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
301_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledmodel
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior. Parameters: config ([`LEDConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
301_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
301_8_2
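A minimal forward-pass sketch for the bare model described above, assuming the `allenai/led-base-16384` checkpoint; the decoder inputs are passed explicitly here to keep the example unambiguous:

```python
from transformers import LEDTokenizer, LEDModel

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDModel.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("LED can encode very long documents.", return_tensors="pt")

# Reuse the encoder input as a stand-in decoder input for a plain forward pass.
outputs = model(**inputs, decoder_input_ids=inputs["input_ids"])

# (batch_size, target_sequence_length, hidden_size) from the decoder's last layer.
print(outputs.last_hidden_state.shape)
```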
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforconditionalgeneration
.md
The LED Model with a language modeling head. Can be used for summarization. This model inherits from [`PreTrainedModel`]. See the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
301_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforconditionalgeneration
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior. Parameters: config ([`LEDConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
301_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforconditionalgeneration
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
301_9_2
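A hedged summarization sketch for the head described above, assuming the `allenai/led-base-16384` base checkpoint (not fine-tuned for summarization, so the output text is only illustrative); LED is typically run with global attention on at least the first token:

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

article = "Replace this with a long article to summarize."
inputs = tokenizer(article, return_tensors="pt")

# Give the first token global attention; all other tokens attend locally.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_new_tokens=64,
    num_beams=2,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```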
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforsequenceclassification
.md
LED model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. See the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
301_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforsequenceclassification
.md
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior. Parameters: config ([`LEDConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
301_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforsequenceclassification
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
301_10_2
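A minimal sketch for the classification head described above; note the head is newly initialized on top of the `allenai/led-base-16384` base checkpoint here, so the logits are untrained and a fine-tuned checkpoint would be used in practice:

```python
import torch
from transformers import LEDTokenizer, LEDForSequenceClassification

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
# num_labels is illustrative; the classification head is randomly initialized.
model = LEDForSequenceClassification.from_pretrained("allenai/led-base-16384", num_labels=2)

inputs = tokenizer("A long document to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (batch_size, num_labels)
```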
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforquestionanswering
.md
LED Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. See the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
301_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforquestionanswering
.md
implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior. Parameters: config ([`LEDConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
301_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#ledforquestionanswering
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
301_11_2
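A minimal sketch for the span-extraction head described above; the QA head is newly initialized on the `allenai/led-base-16384` base checkpoint here, so the predicted span is untrained and only illustrates how `start_logits`/`end_logits` are consumed:

```python
import torch
from transformers import LEDTokenizer, LEDForQuestionAnswering

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForQuestionAnswering.from_pretrained("allenai/led-base-16384")

question = "What does LED stand for?"
context = "LED stands for Longformer Encoder-Decoder."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end positions and decode the span between them.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```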
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#tfledmodel
.md
No docstring available for TFLEDModel Methods: call
301_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/led.md
https://huggingface.co/docs/transformers/en/model_doc/led/#tfledforconditionalgeneration
.md
No docstring available for TFLEDForConditionalGeneration Methods: call </tf> </frameworkcontent>
301_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
302_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
302_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#overview
.md
The Phi-3 model was proposed in [Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone](https://arxiv.org/abs/2404.14219) by Microsoft.
302_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#summary
.md
The abstract from the Phi-3 paper is the following:
302_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#summary
.md
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and
302_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#summary
.md
entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with a 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench).
302_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#summary
.md
The original code for Phi-3 can be found [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
302_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#usage-tips
.md
- This model is very similar to `Llama`, the main differences being [`Phi3SuScaledRotaryEmbedding`] and [`Phi3YarnScaledRotaryEmbedding`], which are used to extend the context of the rotary embeddings. The query, key and value projections are fused, and the MLP's up and gate projection layers are also fused. - The tokenizer used for this model is identical to the [`LlamaTokenizer`], with the exception of additional tokens.
302_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#how-to-use-phi-3
.md
<Tip warning={true}> Phi-3 has been integrated in the development version (4.40.0.dev) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
302_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#how-to-use-phi-3
.md
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. </Tip> ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer
302_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#how-to-use-phi-3
.md
>>> model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct") >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct") >>> messages = [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}] >>> inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
302_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#how-to-use-phi-3
.md
>>> outputs = model.generate(inputs, max_new_tokens=32) >>> text = tokenizer.batch_decode(outputs)[0] >>> print(text) <s><|user|> Can you provide ways to eat combinations of bananas and dragonfruits?<|end|> <|assistant|> Certainly! Bananas and dragonfruits can be combined in various delicious ways. Here are some ideas for eating combinations of bananas and ```
302_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
.md
This is the configuration class to store the configuration of a [`Phi3Model`]. It is used to instantiate a Phi-3 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
302_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32064): Vocabulary size of the Phi-3 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Phi3Model`]. hidden_size (`int`, *optional*, defaults to 3072): Dimension of the hidden representations.
302_5_1
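Mirroring the configuration example shown earlier for LED, a minimal sketch of the Phi-3 configuration workflow; note that instantiating `Phi3Model` from the default config builds a full-size, randomly initialized model:

```python
from transformers import Phi3Config, Phi3Model

# Defaults approximate microsoft/Phi-3-mini-4k-instruct, per the description above.
configuration = Phi3Config()

# Randomly initialized model from that configuration (no pretrained weights).
model = Phi3Model(configuration)

print(configuration.vocab_size, configuration.hidden_size)
```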