source (string, 470 values) | url (string, lengths 49-167) | file_type (string, 1 value) | chunk (string, lengths 1-512) | chunk_id (string, lengths 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlv2.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlv2/#owlv2forobjectdetection
|
.md
|
No docstring available for Owlv2ForObjectDetection
Methods: forward
- image_guided_detection
|
199_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
200_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
200_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnel-transformer
|
.md
|
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=funnel">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-funnel-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/funnel-transformer-small">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
200_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#overview
|
.md
|
The Funnel Transformer model was proposed in the paper [Funnel-Transformer: Filtering out Sequential Redundancy for
Efficient Language Processing](https://arxiv.org/abs/2006.03236). It is a bidirectional transformer model, like
BERT, but with a pooling operation after each block of layers, a bit like in traditional convolutional neural networks
(CNN) in computer vision.
The abstract from the paper is the following:
|
200_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#overview
|
.md
|
(CNN) in computer vision.
The abstract from the paper is the following:
*With the success of language pretraining, it is highly desirable to develop more efficient architectures of good
scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the
much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only
|
200_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#overview
|
.md
|
much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only
require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which
gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More
importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further
|
200_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#overview
|
.md
|
importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further
improve the model capacity. In addition, to perform token-level predictions as required by common pretraining
objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence
via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on
|
200_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#overview
|
.md
|
via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on
a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading
comprehension.*
This model was contributed by [sgugger](https://huggingface.co/sgugger). The original code can be found [here](https://github.com/laiguokun/Funnel-Transformer).
|
200_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#usage-tips
|
.md
|
- Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers: it is divided by 2, which speeds up the computation of the next hidden states.
The base model therefore has a final sequence length that is a quarter of the original one. This model can be used
directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other
|
200_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#usage-tips
|
.md
|
directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other
tasks, the full model is used; this full model has a decoder that upsamples the final hidden states to the same
sequence length as the input.
|
200_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#usage-tips
|
.md
|
- For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That's why there are two versions of each checkpoint. The version suffixed with "-base" contains only the three blocks, while the version without that suffix contains the three blocks
|
200_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#usage-tips
|
.md
|
version suffixed with "-base" contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers.
|
200_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#usage-tips
|
.md
|
- The Funnel Transformer checkpoints are all available with a full version and a base version. The first ones should be
used for [`FunnelModel`], [`FunnelForPreTraining`],
[`FunnelForMaskedLM`], [`FunnelForTokenClassification`] and
[`FunnelForQuestionAnswering`]. The second ones should be used for
[`FunnelBaseModel`], [`FunnelForSequenceClassification`] and
[`FunnelForMultipleChoice`].
|
200_3_4
|
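To make the checkpoint/class pairing above concrete, here is a minimal sketch, assuming the public `funnel-transformer/small` and `funnel-transformer/small-base` checkpoints:

```python
from transformers import FunnelBaseModel, FunnelModel, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# "-base" checkpoint: the three blocks only, final length is about a quarter of the input
# (pair it with FunnelBaseModel / FunnelForSequenceClassification / FunnelForMultipleChoice)
base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
pooled = base_model(**inputs).last_hidden_state
print(pooled.shape)  # sequence dimension is roughly input_length / 4

# full checkpoint: three blocks plus the upsampling decoder, output is back at input length
# (pair it with FunnelModel / FunnelForMaskedLM / FunnelForTokenClassification / ...)
full_model = FunnelModel.from_pretrained("funnel-transformer/small")
hidden = full_model(**inputs).last_hidden_state
print(hidden.shape)  # sequence dimension matches the input length
```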
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
200_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
This is the configuration class to store the configuration of a [`FunnelModel`] or a [`TFFunnelModel`]. It is used to
instantiate a Funnel Transformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Funnel
Transformer [funnel-transformer/small](https://huggingface.co/funnel-transformer/small) architecture.
|
200_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
Transformer [funnel-transformer/small](https://huggingface.co/funnel-transformer/small) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the Funnel transformer. Defines the number of different tokens that can be represented
|
200_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
Vocabulary size of the Funnel transformer. Defines the number of different tokens that can be represented
by the `inputs_ids` passed when calling [`FunnelModel`] or [`TFFunnelModel`].
block_sizes (`List[int]`, *optional*, defaults to `[4, 4, 4]`):
The sizes of the blocks used in the model.
block_repeats (`List[int]`, *optional*):
If passed along, each layer of each block is repeated the number of times indicated.
num_decoder_layers (`int`, *optional*, defaults to 2):
|
200_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
num_decoder_layers (`int`, *optional*, defaults to 2):
The number of layers in the decoder (when not using the base model).
d_model (`int`, *optional*, defaults to 768):
Dimensionality of the model's hidden states.
n_head (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
d_head (`int`, *optional*, defaults to 64):
Dimensionality of the model's heads.
d_inner (`int`, *optional*, defaults to 3072):
Inner dimension in the feed-forward blocks.
|
200_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
Dimensionality of the model's heads.
d_inner (`int`, *optional*, defaults to 3072):
Inner dimension in the feed-forward blocks.
hidden_act (`str` or `callable`, *optional*, defaults to `"gelu_new"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
200_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability used between the two layers of the feed-forward blocks.
initializer_range (`float`, *optional*, defaults to 0.1):
The upper bound of the *uniform initializer* for initializing all weight matrices in attention layers.
|
200_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
The upper bound of the *uniform initializer* for initializing all weight matrices in attention layers.
initializer_std (`float`, *optional*):
The standard deviation of the *normal initializer* for initializing the embedding matrix and the weight of
linear layers. Will default to 1 for the embedding matrix and the value given by Xavier initialization for
linear layers.
layer_norm_eps (`float`, *optional*, defaults to 1e-09):
The epsilon used by the layer normalization layers.
|
200_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
linear layers.
layer_norm_eps (`float`, *optional*, defaults to 1e-09):
The epsilon used by the layer normalization layers.
pooling_type (`str`, *optional*, defaults to `"mean"`):
Possible values are `"mean"` or `"max"`. The way pooling is performed at the beginning of each block.
attention_type (`str`, *optional*, defaults to `"relative_shift"`):
Possible values are `"relative_shift"` or `"factorized"`. The former is faster on CPU/GPU while the latter
is faster on TPU.
|
200_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
Possible values are `"relative_shift"` or `"factorized"`. The former is faster on CPU/GPU while the latter
is faster on TPU.
separate_cls (`bool`, *optional*, defaults to `True`):
Whether or not to separate the cls token when applying pooling.
truncate_seq (`bool`, *optional*, defaults to `True`):
When using `separate_cls`, whether or not to truncate the last token when pooling, to avoid getting a
sequence length that is not a multiple of 2.
pool_q_only (`bool`, *optional*, defaults to `True`):
|
200_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelconfig
|
.md
|
sequence length that is not a multiple of 2.
pool_q_only (`bool`, *optional*, defaults to `True`):
Whether or not to apply the pooling only to the query or to query, key and values for the attention layers.
|
200_5_9
|
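A minimal sketch of the usual configuration workflow, following the pattern used elsewhere in the library docs:

```python
from transformers import FunnelConfig, FunnelModel

# Default configuration, similar to funnel-transformer/small
configuration = FunnelConfig()

# Randomly initialized model built from that configuration
model = FunnelModel(configuration)

# The configuration can be read back from the model
configuration = model.config
```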
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizer
|
.md
|
Construct a Funnel Transformer tokenizer. Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
|
200_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizer
|
.md
|
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
Whether or not to do basic tokenization before WordPiece.
never_split (`Iterable`, *optional*):
Collection of tokens which will never be split during tokenization. Only has an effect when
`do_basic_tokenize=True`
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
200_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"<sep>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
200_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizer
|
.md
|
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"<cls>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
|
200_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizer
|
.md
|
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sentence token.
|
200_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizer
|
.md
|
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sentence token.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
[issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
|
200_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizer
|
.md
|
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original BERT).
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
200_6_6
|
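As a short, hedged illustration of the tokenizer arguments above (the exact tokens depend on the checkpoint's vocabulary; `funnel-transformer/small` is used only as an example):

```python
from transformers import FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")

# Funnel uses <cls>/<sep> special tokens instead of BERT's [CLS]/[SEP]
encoded = tokenizer("Hello, my dog is cute")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# expected to look roughly like ['<cls>', 'hello', ',', 'my', 'dog', 'is', 'cute', '<sep>']
```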
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizerfast
|
.md
|
Construct a "fast" Funnel Transformer tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
|
200_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizerfast
|
.md
|
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"<sep>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
200_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"<cls>"`):
|
200_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizerfast
|
.md
|
cls_token (`str`, *optional*, defaults to `"<cls>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
200_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizerfast
|
.md
|
modeling. This is the token which the model will try to predict.
clean_text (`bool`, *optional*, defaults to `True`):
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespace characters with the standard space.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this
issue](https://github.com/huggingface/transformers/issues/328)).
|
200_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funneltokenizerfast
|
.md
|
issue](https://github.com/huggingface/transformers/issues/328)).
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sentence token.
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original BERT).
wordpieces_prefix (`str`, *optional*, defaults to `"##"`):
The prefix for subwords.
|
200_7_5
|
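A small sketch showing a feature specific to the fast (Rust-backed) tokenizer, offset mappings; the checkpoint name is only an example:

```python
from transformers import FunnelTokenizerFast

tokenizer = FunnelTokenizerFast.from_pretrained("funnel-transformer/small")

# return_offsets_mapping is only supported by the fast tokenizers
encoded = tokenizer("Hello, my dog is cute", return_offsets_mapping=True)
print(encoded["input_ids"])
print(encoded["offset_mapping"])  # character span of each token in the original string
```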
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnel-specific-outputs
|
.md
|
models.funnel.modeling_funnel.FunnelForPreTrainingOutput
Output type of [`FunnelForPreTraining`].
Args:
loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`):
Total loss of the ELECTRA-style objective.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Prediction scores of the head (scores for each token before SoftMax).
|
200_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnel-specific-outputs
|
.md
|
Prediction scores of the head (scores for each token before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
|
200_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnel-specific-outputs
|
.md
|
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
200_8_2
|
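Since this output comes from the ELECTRA-style replaced-token-detection head, here is a minimal sketch of obtaining the per-token logits, assuming the public `funnel-transformer/small` checkpoint:

```python
import torch
from transformers import FunnelForPreTraining, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForPreTraining.from_pretrained("funnel-transformer/small")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length)

# Positive logits suggest the token was replaced (ELECTRA-style objective)
predicted_replaced = (logits > 0).long()
```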
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnel-specific-outputs
|
.md
|
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
[[autodoc]] models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput
|
200_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnel-specific-outputs
|
.md
|
<frameworkcontent>
<pt>
|
200_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelbasemodel
|
.md
|
The base Funnel Transformer Model transformer outputting raw hidden-states without upsampling head (also called
decoder) or any task-specific head on top.
The Funnel Transformer model was proposed in [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
200_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelbasemodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
200_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelbasemodel
|
.md
|
and behavior.
Parameters:
config ([`FunnelConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
200_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelmodel
|
.md
|
The bare Funnel Transformer Model transformer outputting raw hidden-states without any specific head on top.
The Funnel Transformer model was proposed in [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
200_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelmodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
200_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelmodel
|
.md
|
and behavior.
Parameters:
config ([`FunnelConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
200_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelmodelforpretraining
|
.md
|
No docstring available for FunnelForPreTraining
Methods: forward
|
200_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelformaskedlm
|
.md
|
Funnel Transformer Model with a `language modeling` head on top.
The Funnel Transformer model was proposed in [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
200_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelformaskedlm
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
200_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelformaskedlm
|
.md
|
and behavior.
Parameters:
config ([`FunnelConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
200_12_2
|
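A minimal sketch of the masked-LM API described above; the full (decoder-equipped) checkpoint is assumed, and prediction quality depends on the checkpoint's LM head:

```python
import torch
from transformers import FunnelForMaskedLM, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForMaskedLM.from_pretrained("funnel-transformer/small")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, sequence_length, vocab_size)

# Most likely token at the masked position(s)
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```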
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelforsequenceclassification
|
.md
|
Funnel Transformer Model with a sequence classification/regression head on top (two linear layers on top of the
first timestep of the last hidden state), e.g. for GLUE tasks.
The Funnel Transformer model was proposed in [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
200_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
200_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`FunnelConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
200_13_2
|
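A minimal sketch pairing this head with the base (three-block) checkpoint, as recommended in the usage tips; `num_labels` and the checkpoint name are example choices, and the classification head starts out randomly initialized:

```python
import torch
from transformers import FunnelForSequenceClassification, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, num_labels)
predicted_class = logits.argmax(dim=-1).item()
```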
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelformultiplechoice
|
.md
|
Funnel Transformer Model with a multiple choice classification head on top (two linear layers on top of the first
timestep of the last hidden state, and a softmax), e.g. for RocStories/SWAG tasks.
The Funnel Transformer model was proposed in [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
|
200_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelformultiplechoice
|
.md
|
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
200_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FunnelConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
200_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelformultiplechoice
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
200_14_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelfortokenclassification
|
.md
|
Funnel Transformer Model with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
The Funnel Transformer model was proposed in [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
200_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelfortokenclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
200_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelfortokenclassification
|
.md
|
and behavior.
Parameters:
config ([`FunnelConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
200_15_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelforquestionanswering
|
.md
|
Funnel Transformer Model with a span classification head on top for extractive question-answering tasks like SQuAD
(a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
The Funnel Transformer model was proposed in [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
|
200_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelforquestionanswering
|
.md
|
Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
200_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelforquestionanswering
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FunnelConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
200_16_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#funnelforquestionanswering
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf>
|
200_16_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelbasemodel
|
.md
|
No docstring available for TFFunnelBaseModel
Methods: call
|
200_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelmodel
|
.md
|
No docstring available for TFFunnelModel
Methods: call
|
200_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelmodelforpretraining
|
.md
|
No docstring available for TFFunnelForPreTraining
Methods: call
|
200_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelformaskedlm
|
.md
|
No docstring available for TFFunnelForMaskedLM
Methods: call
|
200_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelforsequenceclassification
|
.md
|
No docstring available for TFFunnelForSequenceClassification
Methods: call
|
200_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelformultiplechoice
|
.md
|
No docstring available for TFFunnelForMultipleChoice
Methods: call
|
200_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelfortokenclassification
|
.md
|
No docstring available for TFFunnelForTokenClassification
Methods: call
|
200_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/funnel.md
|
https://huggingface.co/docs/transformers/en/model_doc/funnel/#tffunnelforquestionanswering
|
.md
|
No docstring available for TFFunnelForQuestionAnswering
Methods: call
</tf>
</frameworkcontent>
|
200_24_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
201_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
201_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
The Llama2 model was proposed in [Llama 2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj
|
201_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta,
|
201_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. It is a collection of foundation language models ranging from 7B to
|
201_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. It is a collection of foundation language models ranging from 7B to 70B parameters, with checkpoints fine-tuned for chat applications.
|
201_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
The abstract from the paper is the following:
|
201_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
*In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to
|
201_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.*
|
201_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#overview
|
.md
|
Check out all Llama2 model checkpoints [here](https://huggingface.co/models?search=llama2).
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ) with contributions from [Lysandre Debut](https://huggingface.co/lysandre). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama).
|
201_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
<Tip warning={true}>
The `Llama2` models were trained using `bfloat16`, but the original inference uses `float16`. The checkpoints uploaded on the Hub use `torch_dtype = 'float16'`, which will be
used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`.
|
201_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
The `dtype` of the online weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model with `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`. The reason is that the model will first be downloaded (using the `dtype` of the checkpoint online), then cast to the default `dtype` of `torch` (`torch.float32`), and finally, if a `torch_dtype` is provided in the config, that dtype will be used.
|
201_2_1
|
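A hedged sketch of the dtype behaviour described above (the gated `meta-llama/Llama-2-7b-hf` checkpoint is used only as an example name):

```python
import torch
from transformers import AutoModelForCausalLM

checkpoint = "meta-llama/Llama-2-7b-hf"  # example; requires accepted access on the Hub

# Default: weights are cast to torch's default dtype (float32), whatever they were stored in
model_fp32 = AutoModelForCausalLM.from_pretrained(checkpoint)

# torch_dtype="auto" keeps the dtype recorded in the checkpoint config (float16 for these checkpoints)
model_fp16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto")

# An explicitly requested dtype always takes precedence
model_bf16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)
```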
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
Training the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be trained in `bfloat16`.
</Tip>
Tips:
- Weights for the Llama2 models can be obtained by filling out [this form](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
- The architecture is very similar to the first Llama, with the addition of Grouped Query Attention (GQA) following this [paper](https://arxiv.org/pdf/2305.13245.pdf)
|
201_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
- Setting `config.pretraining_tp` to a value different than 1 will activate the more accurate but slower computation of the linear layers, which should better match the original logits.
|
201_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
- The original model uses `pad_id = -1`, which means that there is no padding token. We can't use the same logic; make sure to add a padding token using `tokenizer.add_special_tokens({"pad_token":"<pad>"})` and resize the token embeddings accordingly. You should also set `model.config.pad_token_id`. The `embed_tokens` layer of the model is initialized with `self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx)`, which makes sure that encoding the padding token
|
201_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx)`, which makes sure that encoding the padding token will output zeros, so passing it when initializing is recommended.
|
201_2_5
|
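A minimal sketch of the padding-token setup from the tip above (the checkpoint name is only an example; any converted Llama 2 checkpoint works):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

checkpoint = "meta-llama/Llama-2-7b-hf"  # example checkpoint
tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
model = LlamaForCausalLM.from_pretrained(checkpoint)

# The original checkpoints ship without a padding token, so add one explicitly
tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id
```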
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
- After filling out the form and gaining access to the model checkpoints, you should be able to use the already converted checkpoints. Otherwise, if you are converting your own model, feel free to use the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). The script can be called with the following (example) command:
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
|
201_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```
- After conversion, the model and tokenizer can be loaded via:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
|
201_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
```
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions
come in several checkpoints, each of them contains a part of every weight of the model, so they all need to be loaded in RAM). For the 70B model, that is roughly 140GB of RAM.
|
201_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
- The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
|
201_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#usage-tips
|
.md
|
- When using Flash Attention 2 via `attn_implementation="flash_attention_2"`, don't pass `torch_dtype` to the `from_pretrained` class method and use Automatic Mixed-Precision training instead. When using `Trainer`, this simply means setting either `fp16` or `bf16` to `True`. Otherwise, make sure you are using `torch.autocast`. This is required because Flash Attention only supports the `fp16` and `bf16` data types.
|
201_2_10
|
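A rough sketch of the Flash Attention 2 setup described in the tip, assuming a CUDA GPU and the `flash-attn` package; the checkpoint name is only an example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "meta-llama/Llama-2-7b-hf"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Note: no torch_dtype here; dtype handling is left to autocast (or the Trainer's fp16/bf16 flags)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, attn_implementation="flash_attention_2", device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```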
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- [Llama 2 is here - get it on Hugging Face](https://huggingface.co/blog/llama2), a blog post about Llama 2 and how to use it with 🤗 Transformers and 🤗 PEFT.
|
201_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#resources
|
.md
|
- [LLaMA 2 - Every Resource you need](https://www.philschmid.de/llama-2), a compilation of relevant resources to learn about LLaMA 2 and how to get started quickly.
<PipelineTag pipeline="text-generation"/>
- A [notebook](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing) on how to fine-tune Llama 2 in Google Colab using QLoRA and 4-bit precision. 🌎
|
201_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#resources
|
.md
|
- A [notebook](https://colab.research.google.com/drive/134o_cXcMe_lsvl15ZE_4Y75Kstepsntu?usp=sharing) on how to fine-tune the "Llama-v2-7b-guanaco" model with 4-bit QLoRA and generate Q&A datasets from PDFs. 🌎
<PipelineTag pipeline="text-classification"/>
- A [notebook](https://colab.research.google.com/drive/1ggaa2oRFphdBmqIjSEbnb_HGkcIRC2ZB?usp=sharing) on how to fine-tune the Llama 2 model with QLoRa, TRL, and Korean text classification dataset. 🌎🇰🇷
⚗️ Optimization
|
201_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#resources
|
.md
|
⚗️ Optimization
- [Fine-tune Llama 2 with DPO](https://huggingface.co/blog/dpo-trl), a guide to using the TRL library's DPO method to fine tune Llama 2 on a specific dataset.
- [Extended Guide: Instruction-tune Llama 2](https://www.philschmid.de/instruction-tune-llama-2), a guide to training Llama 2 to generate instructions from inputs, transforming the model from instruction-following to instruction-giving.
|
201_3_3
|