Columns: source (string, 470 values) · url (string, length 49–167) · file_type (string, 1 value) · chunk (string, length 1–512) · chunk_id (string, length 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Sequence of hidden-states at the output of the last layer of the generator encoder of the model. generator_enc_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
114_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs. generator_enc_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.
114_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the generator encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. generator_dec_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
114_6_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs. generator_dec_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
114_6_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the generator decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. generator_cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
114_6_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Cross-attention weights of the generator decoder, after the attention softmax, used to compute the weighted average in the cross-attention heads. models.rag.modeling_rag.RetrievAugLMOutput Args: logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
114_6_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Args: logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head. The score is possibly marginalized over all documents for each vocabulary token. doc_scores (`torch.FloatTensor` of shape `(batch_size, config.n_docs)`): Score between each retrieved document embedding (see `retrieved_doc_embeds`) and `question_encoder_last_hidden_state`.
114_6_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Score between each retrieved document embedding (see `retrieved_doc_embeds`) and `question_encoder_last_hidden_state`. past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`.
114_6_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
num_heads, sequence_length, embed_size_per_head)`. Contains precomputed hidden-states (keys and values in the attention blocks) of the decoder that can be used (see `past_key_values` input) to speed up sequential decoding. retrieved_doc_embeds (`torch.FloatTensor` of shape `(batch_size, config.n_docs, hidden_size)`, *optional*, returned when *output_retrieved=True*): Embedded documents retrieved by the retriever. Used with `question_encoder_last_hidden_state` to compute the `doc_scores`.
114_6_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Embedded documents retrieved by the retriever. Used with `question_encoder_last_hidden_state` to compute the `doc_scores`. retrieved_doc_ids (`torch.LongTensor` of shape `(batch_size, config.n_docs)`, *optional*, returned when *output_retrieved=True*): The indices of the embedded documents retrieved by the retriever. context_input_ids (`torch.LongTensor` of shape `(batch_size * config.n_docs, config.max_combined_length)`, *optional*, returned when *output_retrieved=True*):
114_6_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever. context_attention_mask (`torch.LongTensor` of shape `(batch_size * config.n_docs, config.max_combined_length)`, *optional*, returned when *output_retrieved=True*): Attention mask post-processed from the retrieved documents and the question encoder `input_ids` by the retriever. question_encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
114_6_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
question_encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden states at the output of the last layer of the question encoder of the model. question_enc_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of
114_6_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the question encoder at the output of each layer plus the initial embedding outputs. question_enc_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
114_6_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the question encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. generator_enc_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
114_6_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Sequence of hidden-states at the output of the last layer of the generator encoder of the model. generator_enc_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
114_6_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs. generator_enc_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.
114_6_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the generator encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. generator_dec_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
114_6_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs. generator_dec_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
114_6_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the generator decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. generator_cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
114_6_26
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Cross-attention weights of the generator decoder, after the attention softmax, used to compute the weighted average in the cross-attention heads.
114_6_27
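As a side note, the `doc_scores` field documented above is just the inner product between the question encoder output and the retrieved document embeddings. A minimal sketch of that computation follows; the tensor names and sizes here are illustrative, not taken from the library:

```python
>>> import torch

>>> # Hypothetical tensors with the documented shapes
>>> batch_size, n_docs, hidden_size = 2, 5, 768
>>> question_hidden_states = torch.randn(batch_size, hidden_size)
>>> retrieved_doc_embeds = torch.randn(batch_size, n_docs, hidden_size)

>>> # Dot product between the question representation and each retrieved document embedding
>>> doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), retrieved_doc_embeds.transpose(1, 2)).squeeze(1)
>>> doc_scores.shape
torch.Size([2, 5])
```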
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragretriever
.md
Retriever used to get documents from vector queries. It retrieves the document embeddings as well as the document contents, and it formats them to be used with a RagModel. Args: config ([`RagConfig`]): The configuration of the RAG model this Retriever is used with. Contains parameters indicating which `Index` to build. You can load your own custom dataset with `config.index_name="custom"` or use a canonical one (default) from the datasets library with `config.index_name="wiki_dpr"` for example.
114_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragretriever
.md
one (default) from the datasets library with `config.index_name="wiki_dpr"` for example. question_encoder_tokenizer ([`PreTrainedTokenizer`]): The tokenizer that was used to tokenize the question. It is used to decode the question and then use the generator_tokenizer. generator_tokenizer ([`PreTrainedTokenizer`]): The tokenizer used for the generator part of the RagModel. index ([`~models.rag.retrieval_rag.Index`], optional, defaults to the one defined by the configuration):
114_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragretriever
.md
index ([`~models.rag.retrieval_rag.Index`], optional, defaults to the one defined by the configuration): If specified, use this index instead of the one built using the configuration. Examples: ```python >>> # To load the default "wiki_dpr" dataset with 21M passages from Wikipedia (index name is 'compressed' or 'exact') >>> from transformers import RagRetriever
114_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragretriever
.md
>>> retriever = RagRetriever.from_pretrained( ... "facebook/dpr-ctx_encoder-single-nq-base", dataset="wiki_dpr", index_name="compressed" ... ) >>> # To load your own indexed dataset built with the datasets library. More info on how to build the indexed dataset in examples/rag/use_own_knowledge_dataset.py >>> from transformers import RagRetriever
114_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragretriever
.md
>>> dataset = ( ... ... ... ) # dataset must be a datasets.Dataset object with columns "title", "text" and "embeddings", and it must have a faiss index >>> retriever = RagRetriever.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", indexed_dataset=dataset) >>> # To load your own indexed dataset built with the datasets library that was saved on disk. More info in examples/rag/use_own_knowledge_dataset.py >>> from transformers import RagRetriever
114_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragretriever
.md
>>> dataset_path = "path/to/my/dataset" # dataset saved via *dataset.save_to_disk(...)* >>> index_path = "path/to/my/index.faiss" # faiss index saved via *dataset.get_index("embeddings").save(...)* >>> retriever = RagRetriever.from_pretrained( ... "facebook/dpr-ctx_encoder-single-nq-base", ... index_name="custom", ... passages_path=dataset_path, ... index_path=index_path, ... ) >>> # To load the legacy index built originally for Rag's paper >>> from transformers import RagRetriever
114_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragretriever
.md
>>> # To load the legacy index built originally for Rag's paper >>> from transformers import RagRetriever >>> retriever = RagRetriever.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", index_name="legacy") ``` <frameworkcontent> <pt>
114_7_6
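Once instantiated, a retriever is typically handed to a RAG model so that retrieval happens inside the forward pass. A minimal sketch, assuming the `facebook/rag-token-nq` checkpoint and the dummy `wiki_dpr` index (`use_dummy_dataset=True`), neither of which appears in the excerpt above:

```python
>>> from transformers import RagRetriever, RagTokenForGeneration

>>> retriever = RagRetriever.from_pretrained(
...     "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
... )
>>> # The retriever is attached at load time and queried automatically during forward/generate
>>> model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
```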
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragmodel
.md
The [`RagModel`] forward method, overrides the `__call__` special method. <Tip> Although the recipe for forward pass needs to be defined within this function, one should call the [`Module`] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. </Tip> RAG is a seq2seq model which encapsulates two core components: a question encoder and a generator. During a forward
114_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragmodel
.md
</Tip> RAG is a seq2seq model which encapsulates two core components: a question encoder and a generator. During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator. The question encoder can be any *autoencoding* model, preferably [`DPRQuestionEncoder`], and the generator can be
114_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragmodel
.md
The question encoder can be any *autoencoding* model, preferably [`DPRQuestionEncoder`], and the generator can be any *seq2seq* model, preferably [`BartForConditionalGeneration`]. The model can be initialized with a [`RagRetriever`] for end-to-end generation or used in combination with the outputs of a retriever in multiple steps---see examples for more details. The model is compatible with any *autoencoding* model as the `question_encoder` and any *seq2seq* model with a language model head as the `generator`.
114_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragmodel
.md
*autoencoding* model as the `question_encoder` and any *seq2seq* model with a language model head as the `generator`. It has been tested with [`DPRQuestionEncoder`] as the `question_encoder` and [`BartForConditionalGeneration`] or [`T5ForConditionalGeneration`] as the `generator`. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
114_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragmodel
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Args: config ([`RagConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
114_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragmodel
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. question_encoder ([`PreTrainedModel`]): An encoder model compatible with the faiss index encapsulated by the `retriever`. generator ([`PreTrainedModel`]): A seq2seq model used as the generator in the RAG architecture. retriever ([`RagRetriever`]):
114_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragmodel
.md
generator ([`PreTrainedModel`]): A seq2seq model used as the generator in the RAG architecture. retriever ([`RagRetriever`]): A retriever class encapsulating a faiss index queried to obtain context documents for current inputs. Methods: forward
114_8_6
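To make the data flow described above concrete, here is a hedged sketch of a single [`RagModel`] forward pass; the `facebook/rag-token-base` checkpoint, the dummy index, and the example question are assumptions, not part of the excerpt:

```python
>>> from transformers import AutoTokenizer, RagRetriever, RagModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base")
>>> retriever = RagRetriever.from_pretrained(
...     "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
... )
>>> model = RagModel.from_pretrained("facebook/rag-token-base", retriever=retriever)

>>> inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
>>> # Question encoding, retrieval and the generator forward pass happen inside this single call
>>> outputs = model(input_ids=inputs["input_ids"])
>>> outputs.doc_scores.shape  # (batch_size, config.n_docs), as documented above
```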
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragsequenceforgeneration
.md
The [`RagSequenceForGeneration`] forward method, overrides the `__call__` special method. <Tip> Although the recipe for forward pass needs to be defined within this function, one should call the [`Module`] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. </Tip> A RAG-sequence model implementation. It performs RAG-sequence specific marginalization in the forward pass.
114_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragsequenceforgeneration
.md
</Tip> A RAG-sequence model implementation. It performs RAG-sequence specific marginalization in the forward pass. RAG is a seq2seq model which encapsulates two core components: a question encoder and a generator. During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator.
114_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragsequenceforgeneration
.md
documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator. The question encoder can be any *autoencoding* model, preferably [`DPRQuestionEncoder`], and the generator can be any *seq2seq* model, preferably [`BartForConditionalGeneration`]. The model can be initialized with a [`RagRetriever`] for end-to-end generation or used in combination with the outputs of a retriever in multiple steps---see examples for more details. The model is compatible with any
114_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragsequenceforgeneration
.md
outputs of a retriever in multiple steps---see examples for more details. The model is compatible with any *autoencoding* model as the `question_encoder` and any *seq2seq* model with a language model head as the `generator`. It has been tested with [`DPRQuestionEncoder`] as the `question_encoder` and [`BartForConditionalGeneration`] or [`T5ForConditionalGeneration`] as the `generator`. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
114_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragsequenceforgeneration
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Args: config ([`RagConfig`]):
114_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragsequenceforgeneration
.md
and behavior. Args: config ([`RagConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. question_encoder ([`PreTrainedModel`]): An encoder model compatible with the faiss index encapsulated by the `retriever`. generator ([`PreTrainedModel`]):
114_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragsequenceforgeneration
.md
An encoder model compatible with the faiss index encapsulated by the `retriever`. generator ([`PreTrainedModel`]): A seq2seq model used as the generator in the RAG architecture. retriever ([`RagRetriever`]): A retriever class encapsulating a faiss index queried to obtain context documents for current inputs. Methods: forward - generate
114_9_6
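For reference, a minimal generation sketch with [`RagSequenceForGeneration`]; the `facebook/rag-sequence-nq` checkpoint, the dummy index and the sample question are assumptions used for illustration:

```python
>>> from transformers import AutoTokenizer, RagRetriever, RagSequenceForGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/rag-sequence-nq")
>>> retriever = RagRetriever.from_pretrained(
...     "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
... )
>>> model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

>>> inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
>>> generated = model.generate(input_ids=inputs["input_ids"])
>>> tokenizer.batch_decode(generated, skip_special_tokens=True)
```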
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenforgeneration
.md
The [`RagTokenForGeneration`] forward method, overrides the `__call__` special method. <Tip> Although the recipe for forward pass needs to be defined within this function, one should call the [`Module`] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. </Tip> A RAG-token model implementation. It performs RAG-token specific marginalization in the forward pass.
114_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenforgeneration
.md
</Tip> A RAG-token model implementation. It performs RAG-token specific marginalization in the forward pass. RAG is a seq2seq model which encapsulates two core components: a question encoder and a generator. During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator.
114_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenforgeneration
.md
documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator. The question encoder can be any *autoencoding* model, preferably [`DPRQuestionEncoder`], and the generator can be any *seq2seq* model, preferably [`BartForConditionalGeneration`]. The model can be initialized with a [`RagRetriever`] for end-to-end generation or used in combination with the outputs of a retriever in multiple steps---see examples for more details. The model is compatible with any
114_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenforgeneration
.md
outputs of a retriever in multiple steps---see examples for more details. The model is compatible with any *autoencoding* model as the `question_encoder` and any *seq2seq* model with a language model head as the `generator`. It has been tested with [`DPRQuestionEncoder`] as the `question_encoder` and [`BartForConditionalGeneration`] or [`T5ForConditionalGeneration`] as the `generator`. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
114_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenforgeneration
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Args: config ([`RagConfig`]):
114_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenforgeneration
.md
and behavior. Args: config ([`RagConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. question_encoder ([`PreTrainedModel`]): An encoder model compatible with the faiss index encapsulated by the `retriever`. generator ([`PreTrainedModel`]):
114_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenforgeneration
.md
An encoder model compatible with the faiss index encapsulated by the `retriever`. generator ([`PreTrainedModel`]): A seq2seq model used as the generator in the RAG architecture. retriever ([`RagRetriever`]): A retriever class encapsulating a faiss index queried to obtain context documents for current inputs. Methods: forward - generate </pt> <tf>
114_10_6
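Similarly, a hedged sketch of loss computation and generation with [`RagTokenForGeneration`]; the checkpoint name, the dummy index and the target sentence are illustrative assumptions:

```python
>>> from transformers import AutoTokenizer, RagRetriever, RagTokenForGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
>>> retriever = RagRetriever.from_pretrained(
...     "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
... )
>>> model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

>>> inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
>>> targets = tokenizer(text_target="In Paris, there are 10 million people.", return_tensors="pt")
>>> # RAG-token marginalization over the retrieved documents is applied when computing the loss
>>> outputs = model(input_ids=inputs["input_ids"], labels=targets["input_ids"])
>>> loss = outputs.loss

>>> generated = model.generate(input_ids=inputs["input_ids"])
>>> tokenizer.batch_decode(generated, skip_special_tokens=True)
```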
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#tfragmodel
.md
No docstring available for TFRagModel Methods: call
114_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#tfragsequenceforgeneration
.md
No docstring available for TFRagSequenceForGeneration Methods: call - generate
114_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md
https://huggingface.co/docs/transformers/en/model_doc/rag/#tfragtokenforgeneration
.md
No docstring available for TFRagTokenForGeneration Methods: call - generate </tf> </frameworkcontent>
114_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
115_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
115_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#overview
.md
The MobileBERT model was proposed in [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. It's a bidirectional transformer based on the BERT model, which is compressed and accelerated using several approaches. The abstract from the paper is the following:
115_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#overview
.md
approaches. The abstract from the paper is the following: *Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating
115_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#overview
.md
be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
115_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#overview
.md
equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the
115_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#overview
.md
4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE).*
115_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#overview
.md
90.0/79.2 (1.5/2.1 higher than BERT_BASE).* This model was contributed by [vshampor](https://huggingface.co/vshampor). The original code can be found [here](https://github.com/google-research/google-research/tree/master/mobilebert).
115_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#usage-tips
.md
- MobileBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard.
115_2_0
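Since MobileBERT is trained with the MLM objective, mask filling is the most direct way to try it out. A short sketch, assuming the `google/mobilebert-uncased` checkpoint referenced elsewhere in these docs:

```python
>>> from transformers import pipeline

>>> fill_mask = pipeline("fill-mask", model="google/mobilebert-uncased")
>>> fill_mask("The capital of France is [MASK].")
```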
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
115_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
This is the configuration class to store the configuration of a [`MobileBertModel`] or a [`TFMobileBertModel`]. It is used to instantiate a MobileBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MobileBERT [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) architecture.
115_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
[google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the MobileBERT model. Defines the number of different tokens that can be represented by
115_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
Vocabulary size of the MobileBERT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`MobileBertModel`] or [`TFMobileBertModel`]. hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 4):
115_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 4): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 512): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"relu"`):
115_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
hidden_act (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities.
115_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`MobileBertModel`] or [`TFMobileBertModel`].
115_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
The vocabulary size of the `token_type_ids` passed when calling [`MobileBertModel`] or [`TFMobileBertModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0): The ID of the token in the word embedding to use as padding.
115_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
pad_token_id (`int`, *optional*, defaults to 0): The ID of the token in the word embedding to use as padding. embedding_size (`int`, *optional*, defaults to 128): The dimension of the word embedding vectors. trigram_input (`bool`, *optional*, defaults to `True`): Use a convolution of trigrams as input. use_bottleneck (`bool`, *optional*, defaults to `True`): Whether to use a bottleneck in BERT. intra_bottleneck_size (`int`, *optional*, defaults to 128): Size of bottleneck layer output.
115_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
Whether to use a bottleneck in BERT. intra_bottleneck_size (`int`, *optional*, defaults to 128): Size of bottleneck layer output. use_bottleneck_attention (`bool`, *optional*, defaults to `False`): Whether to use attention inputs from the bottleneck transformation. key_query_shared_bottleneck (`bool`, *optional*, defaults to `True`): Whether to use the same linear transformation for query and key in the bottleneck. num_feedforward_networks (`int`, *optional*, defaults to 4): Number of FFNs in a block.
115_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
num_feedforward_networks (`int`, *optional*, defaults to 4): Number of FFNs in a block. normalization_type (`str`, *optional*, defaults to `"no_norm"`): The normalization type in MobileBERT. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. Examples: ```python >>> from transformers import MobileBertConfig, MobileBertModel
115_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertconfig
.md
>>> # Initializing a MobileBERT configuration >>> configuration = MobileBertConfig() >>> # Initializing a model (with random weights) from the configuration above >>> model = MobileBertModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
115_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizer
.md
Construct a MobileBERT tokenizer. Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`):
115_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizer
.md
Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`): Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True`. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
115_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizer
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`):
115_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizer
.md
token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
115_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizer
.md
instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
115_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizer
.md
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original MobileBERT). clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
115_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizer
.md
value for `lowercase` (as in the original MobileBERT). clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces.
115_5_6
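A brief usage sketch of the tokenizer described above (the `google/mobilebert-uncased` checkpoint is assumed here):

```python
>>> from transformers import MobileBertTokenizer

>>> tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
>>> encoding = tokenizer("Hello, MobileBERT!")
>>> encoding["input_ids"]  # WordPiece ids, wrapped in [CLS] ... [SEP]
>>> tokenizer.convert_ids_to_tokens(encoding["input_ids"])
```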
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizerfast
.md
Construct a "fast" MobileBERT tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing.
115_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizerfast
.md
do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
115_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizerfast
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`):
115_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizerfast
.md
cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
115_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizerfast
.md
modeling. This is the token which the model will try to predict. clean_text (`bool`, *optional*, defaults to `True`): Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this issue](https://github.com/huggingface/transformers/issues/328)).
115_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobileberttokenizerfast
.md
issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original MobileBERT). wordpieces_prefix (`str`, *optional*, defaults to `"##"`): The prefix for subwords.
115_6_5
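The fast tokenizer exposes the same interface plus fast-only features such as offset mappings. A minimal sketch, again assuming the `google/mobilebert-uncased` checkpoint:

```python
>>> from transformers import MobileBertTokenizerFast

>>> tokenizer = MobileBertTokenizerFast.from_pretrained("google/mobilebert-uncased")
>>> encoding = tokenizer("Hello, MobileBERT!", return_offsets_mapping=True)
>>> encoding["offset_mapping"]  # character spans per token, only available with the fast tokenizer
```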
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebert-specific-outputs
.md
models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput Output type of [`MobileBertForPreTraining`]. Args: loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
115_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebert-specific-outputs
.md
(classification) loss. prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (`torch.FloatTensor` of shape `(batch_size, 2)`): Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
115_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebert-specific-outputs
.md
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
115_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebert-specific-outputs
.md
shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.
115_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebert-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. [[autodoc]] models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput
115_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebert-specific-outputs
.md
<frameworkcontent> <pt>
115_7_5
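To see where the fields of `MobileBertForPreTrainingOutput` come from, here is a hedged sketch of a [`MobileBertForPreTraining`] forward pass; the checkpoint and input sentence are assumptions:

```python
>>> from transformers import AutoTokenizer, MobileBertForPreTraining

>>> tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
>>> model = MobileBertForPreTraining.from_pretrained("google/mobilebert-uncased")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> outputs.prediction_logits.shape  # (batch_size, sequence_length, vocab_size)
>>> outputs.seq_relationship_logits.shape  # (batch_size, 2)
```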
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertmodel
.md
The bare MobileBert Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
115_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
115_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. https://arxiv.org/pdf/2004.02984.pdf Methods: forward
115_8_2
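A minimal forward-pass sketch for the bare [`MobileBertModel`] (checkpoint and example text are assumptions):

```python
>>> import torch
>>> from transformers import AutoTokenizer, MobileBertModel

>>> tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
>>> model = MobileBertModel.from_pretrained("google/mobilebert-uncased")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
```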
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforpretraining
.md
MobileBert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next sentence prediction (classification)` head. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
115_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforpretraining
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
115_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertforpretraining
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
115_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertformaskedlm
.md
MobileBert Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
115_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertformaskedlm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
115_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilebert.md
https://huggingface.co/docs/transformers/en/model_doc/mobilebert/#mobilebertformaskedlm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
115_10_2
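A hedged mask-filling sketch with [`MobileBertForMaskedLM`] (checkpoint and example sentence are assumptions):

```python
>>> import torch
>>> from transformers import AutoTokenizer, MobileBertForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
>>> model = MobileBertForMaskedLM.from_pretrained("google/mobilebert-uncased")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # locate the [MASK] position and take the highest-scoring token
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_id)
```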