source (stringclasses, 470 values)
|
url (stringlengths, 49-167)
|
file_type (stringclasses, 1 value)
|
chunk (stringlengths, 1-512)
|
chunk_id (stringlengths, 5-9)
|
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
|
352_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
|
352_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
sliding_window (`int`, *optional*):
Sliding window attention window size. If not specified, will default to `None` (no sliding window).
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
residual_dropout (`float`, *optional*, defaults to 0.0):
Residual connection dropout value.
|
352_4_13
|
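The `short_factor`/`long_factor` lists above must have length `hidden_size // num_attention_heads // 2`. A minimal sketch of sizing them for the 'longrope' strategy, assuming a transformers version whose [`Starcoder2Config`] accepts a `rope_scaling` dict with these keys; the sizes and factor values are illustrative, not taken from any released checkpoint:
```python
# Hedged sketch: sizing the 'longrope' factor lists for a hypothetical Starcoder2Config.
# Assumes `rope_scaling` is accepted by this config version; values are illustrative only.
from transformers import Starcoder2Config

hidden_size = 3072
num_attention_heads = 24
factor_len = hidden_size // num_attention_heads // 2  # required length of each factor list

config = Starcoder2Config(
    hidden_size=hidden_size,
    num_attention_heads=num_attention_heads,
    rope_scaling={
        "rope_type": "longrope",
        "short_factor": [1.0] * factor_len,  # scales contexts up to original_max_position_embeddings
        "long_factor": [4.0] * factor_len,   # scales contexts beyond it
        "original_max_position_embeddings": 4096,
    },
)
print(factor_len)  # 64
```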
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
residual_dropout (`float`, *optional*, defaults to 0.0):
Residual connection dropout value.
embedding_dropout (`float`, *optional*, defaults to 0.0):
Embedding dropout.
use_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias term on linear layers of the model.
```python
>>> from transformers import Starcoder2Model, Starcoder2Config
|
352_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
>>> # Initializing a Starcoder2 7B style configuration
>>> configuration = Starcoder2Config()
>>> # Initializing a model from the Starcoder2 7B style configuration
>>> model = Starcoder2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
352_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2model
|
.md
|
The bare Starcoder2 Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
352_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2model
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Starcoder2Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
352_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2model
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Starcoder2DecoderLayer`]
Args:
config: Starcoder2Config
Methods: forward
|
352_5_2
|
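As a usage sketch for the bare model described above (the checkpoint name `bigcode/starcoder2-7b` is an assumption; substitute whichever Starcoder2 checkpoint you use):
```python
# Hedged sketch: a forward pass through the bare Starcoder2Model.
import torch
from transformers import AutoTokenizer, Starcoder2Model

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")  # assumed checkpoint name
model = Starcoder2Model.from_pretrained("bigcode/starcoder2-7b")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```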
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2forcausallm
|
.md
|
No docstring available for Starcoder2ForCausalLM
Methods: forward
|
352_6_0
|
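Although no docstring is recorded for [`Starcoder2ForCausalLM`] above, a minimal generation sketch looks as follows; the checkpoint name is an assumption:
```python
# Hedged sketch: greedy generation with Starcoder2ForCausalLM.
from transformers import AutoTokenizer, Starcoder2ForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")  # assumed checkpoint name
model = Starcoder2ForCausalLM.from_pretrained("bigcode/starcoder2-3b")

inputs = tokenizer("def print_hello_world():", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```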
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2forsequenceclassification
|
.md
|
The Starcoder2 Model transformer with a sequence classification head on top (linear layer).
[`Starcoder2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
352_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2forsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
352_7_1
|
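The last-token selection rule described above can be illustrated with a small sketch; this is an illustration of the documented behaviour, not the library's exact implementation:
```python
# Hedged sketch of the last-token selection rule for sequence classification.
import torch

def last_token_indices(input_ids, pad_token_id=None):
    """Return, per row, the index of the token the classification head reads."""
    if pad_token_id is None:
        # No pad token configured: simply take the last position of every row.
        return torch.full((input_ids.shape[0],), input_ids.shape[1] - 1, dtype=torch.long)
    # Otherwise: index of the last token that is not a padding token in each row.
    non_pad = (input_ids != pad_token_id).int()
    return non_pad.cumsum(dim=-1).argmax(dim=-1)

input_ids = torch.tensor([[5, 6, 7, 0, 0], [5, 6, 7, 8, 9]])
print(last_token_indices(input_ids, pad_token_id=0))  # tensor([2, 4])
```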
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2forsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
352_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2forsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`Starcoder2Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
352_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2fortokenclassification
|
.md
|
The Starcoder2 Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
352_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2fortokenclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Starcoder2Config`]):
|
352_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2fortokenclassification
|
.md
|
and behavior.
Parameters:
config ([`Starcoder2Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
352_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
353_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
353_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electra
|
.md
|
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=electra">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-electra-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/electra_large_discriminator_squad2_512">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
353_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#overview
|
.md
|
The ELECTRA model was proposed in the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than
Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ELECTRA is a new pretraining approach which trains two
transformer models: the generator and the discriminator. The generator's role is to replace tokens in a sequence, and
is therefore trained as a masked language model. The discriminator, which is the model we're interested in, tries to
|
353_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#overview
|
.md
|
is therefore trained as a masked language model. The discriminator, which is the model we're interested in, tries to
identify which tokens were replaced by the generator in the sequence.
The abstract from the paper is the following:
*Masked language modeling (MLM) pretraining methods such as BERT corrupt the input by replacing some tokens with [MASK]
and then train a model to reconstruct the original tokens. While they produce good results when transferred to
|
353_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#overview
|
.md
|
and then train a model to reconstruct the original tokens. While they produce good results when transferred to
downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a
more sample-efficient pretraining task called replaced token detection. Instead of masking the input, our approach
corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead
|
353_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#overview
|
.md
|
corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead
of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that
predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments
demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens
|
353_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#overview
|
.md
|
demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens
rather than just the small subset that was masked out. As a result, the contextual representations learned by our
approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are
particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained
|
353_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#overview
|
.md
|
particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained
using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale,
where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when
using the same amount of compute.*
|
353_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#overview
|
.md
|
using the same amount of compute.*
This model was contributed by [lysandre](https://huggingface.co/lysandre). The original code can be found [here](https://github.com/google-research/electra).
|
353_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#usage-tips
|
.md
|
- ELECTRA is the pretraining approach, so there are nearly no changes to the underlying model: BERT. The
only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller,
while the hidden size is larger. An additional (linear) projection layer is used to project the embeddings from the
embedding size to the hidden size. When the embedding size is the same as the hidden size, no projection
layer is used (a short sketch follows this entry).
|
353_3_0
|
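A minimal sketch of the embedding/hidden split described above; the `embeddings_project` attribute name is assumed from the PyTorch implementation:
```python
# Hedged illustration: with embedding_size != hidden_size, ELECTRA inserts a linear projection
# between the embedding layer and the encoder.
from transformers import ElectraConfig, ElectraModel

config = ElectraConfig(embedding_size=128, hidden_size=256)
model = ElectraModel(config)

# The projection only exists when the two sizes differ (attribute name assumed from the PyTorch code).
print(getattr(model, "embeddings_project", None))
```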
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#usage-tips
|
.md
|
- ELECTRA is a transformer model pretrained with the help of another (small) masked language model. The inputs are corrupted by that language model: it takes an input text that has been randomly masked and outputs a text with some tokens replaced, and ELECTRA has to predict which tokens are original and which have been replaced. As in GAN training, the small language model is trained for a few steps (but with the original texts as objective, not to fool the ELECTRA model as in a traditional GAN setting), then the ELECTRA
|
353_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#usage-tips
|
.md
|
(but with the original texts as objective, not to fool the ELECTRA model as in a traditional GAN setting), then the ELECTRA model is trained for a few steps.
|
353_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#usage-tips
|
.md
|
- The ELECTRA checkpoints saved using [Google Research's implementation](https://github.com/google-research/electra)
contain both the generator and discriminator. The conversion script requires the user to name which model to export
into the correct architecture. Once converted to the HuggingFace format, however, these checkpoints may be loaded
into any of the available ELECTRA models. This means that the discriminator may be loaded in the
[`ElectraForMaskedLM`] model, and the generator may be loaded in the
|
353_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#usage-tips
|
.md
|
[`ElectraForMaskedLM`] model, and the generator may be loaded in the
[`ElectraForPreTraining`] model (the classification head will be randomly initialized as it
doesn't exist in the generator).
|
353_3_4
|
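A short sketch of the loading behaviour described above, using the public `google/electra-small-*` checkpoints:
```python
# The discriminator checkpoint pairs naturally with ElectraForPreTraining and the generator
# checkpoint with ElectraForMaskedLM; cross-loading also works, but any missing head is
# randomly initialized.
from transformers import ElectraForMaskedLM, ElectraForPreTraining

discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
```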
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
353_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
This is the configuration class to store the configuration of an [`ElectraModel`] or a [`TFElectraModel`]. It is
used to instantiate an ELECTRA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the ELECTRA
[google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) architecture.
|
353_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
[google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the ELECTRA model. Defines the number of different tokens that can be represented by the
|
353_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
Vocabulary size of the ELECTRA model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`ElectraModel`] or [`TFElectraModel`].
embedding_size (`int`, *optional*, defaults to 128):
Dimensionality of the embedding layer.
hidden_size (`int`, *optional*, defaults to 256):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
|
353_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 4):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 1024):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
|
353_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
|
353_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`ElectraModel`] or [`TFElectraModel`].
|
353_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
The vocabulary size of the `token_type_ids` passed when calling [`ElectraModel`] or [`TFElectraModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
summary_type (`str`, *optional*, defaults to `"first"`):
|
353_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
The epsilon used by the layer normalization layers.
summary_type (`str`, *optional*, defaults to `"first"`):
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Has to be one of the following options:
- `"last"`: Take the last token hidden state (like XLNet).
- `"first"`: Take the first token hidden state (like BERT).
- `"mean"`: Take the mean of all tokens hidden states.
|
353_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
- `"first"`: Take the first token hidden state (like BERT).
- `"mean"`: Take the mean of all tokens hidden states.
- `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2).
- `"attn"`: Not implemented now, use multi-head attention.
summary_use_proj (`bool`, *optional*, defaults to `True`):
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Whether or not to add a projection after the vector extraction.
|
353_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
Whether or not to add a projection after the vector extraction.
summary_activation (`str`, *optional*):
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Pass `"gelu"` for a gelu activation to the output, any other value will result in no activation.
summary_last_dropout (`float`, *optional*, defaults to 0.0):
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
|
353_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
The dropout ratio to be used after the projection and activation.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
|
353_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
use_cache (`bool`, *optional*, defaults to `True`):
|
353_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
Examples:
```python
>>> from transformers import ElectraConfig, ElectraModel
|
353_5_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraconfig
|
.md
|
>>> # Initializing an ELECTRA electra-base-uncased style configuration
>>> configuration = ElectraConfig()
>>> # Initializing a model (with random weights) from the electra-base-uncased style configuration
>>> model = ElectraModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
353_5_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizer
|
.md
|
Construct an ELECTRA tokenizer. Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
|
353_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizer
|
.md
|
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
Whether or not to do basic tokenization before WordPiece.
never_split (`Iterable`, *optional*):
Collection of tokens which will never be split during tokenization. Only has an effect when
`do_basic_tokenize=True`
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
353_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
|
353_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizer
|
.md
|
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
|
353_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizer
|
.md
|
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
|
353_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizer
|
.md
|
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
[issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original Electra).
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
|
353_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizer
|
.md
|
value for `lowercase` (as in the original Electra).
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
|
353_6_6
|
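A short usage sketch for the tokenizer arguments above, assuming the public `google/electra-small-discriminator` vocabulary:
```python
# Hedged sketch: loading the ELECTRA WordPiece tokenizer and encoding a sentence pair.
from transformers import ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
encoded = tokenizer("Is this a question?", "Yes, it is.", return_tensors="pt")

# Decoding shows the special tokens ([CLS], [SEP]) and the lowercasing applied by do_lower_case=True.
print(tokenizer.decode(encoded["input_ids"][0]))
```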
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizerfast
|
.md
|
Construct a "fast" ELECTRA tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
|
353_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizerfast
|
.md
|
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
353_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
|
353_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizerfast
|
.md
|
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
353_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizerfast
|
.md
|
modeling. This is the token which the model will try to predict.
clean_text (`bool`, *optional*, defaults to `True`):
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespace characters with a standard space.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this
issue](https://github.com/huggingface/transformers/issues/328)).
|
353_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electratokenizerfast
|
.md
|
issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original ELECTRA).
wordpieces_prefix (`str`, *optional*, defaults to `"##"`):
The prefix for subwords.
|
353_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electra-specific-outputs
|
.md
|
models.electra.modeling_electra.ElectraForPreTrainingOutput
Output type of [`ElectraForPreTraining`].
Args:
loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`):
Total loss of the ELECTRA objective.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Prediction scores of the head (scores for each token before SoftMax).
|
353_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electra-specific-outputs
|
.md
|
Prediction scores of the head (scores for each token before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
|
353_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electra-specific-outputs
|
.md
|
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
353_8_2
|
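A hedged sketch that ties the `logits` field above to replaced-token detection; the checkpoint, sentence, and zero threshold are illustrative choices:
```python
# The discriminator emits one logit per token; a positive logit suggests the token was replaced.
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

inputs = tokenizer("The quick brown fox ate over the lazy dog", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
flagged = [tok for tok, score in zip(tokens, logits[0]) if score > 0]
print(flagged)  # tokens the discriminator scores as likely replacements
```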
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electra-specific-outputs
|
.md
|
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
[[autodoc]] models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput
|
353_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electra-specific-outputs
|
.md
|
<frameworkcontent>
<pt>
|
353_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electramodel
|
.md
|
The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
353_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electramodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
353_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electramodel
|
.md
|
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
353_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforpretraining
|
.md
|
Electra model with a binary classification head on top as used during pretraining for identifying generated tokens.
It is recommended to load the discriminator checkpoint into that model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
353_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforpretraining
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
|
353_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforpretraining
|
.md
|
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
353_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforcausallm
|
.md
|
ELECTRA Model with a `language modeling` head on top for CLM fine-tuning.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
353_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
353_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
353_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraformaskedlm
|
.md
|
Electra model with a language modeling head on top.
Even though both the discriminator and generator may be loaded into this model, the generator is the only model of
the two to have been trained for the masked language modeling task.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
353_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraformaskedlm
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
|
353_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraformaskedlm
|
.md
|
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
353_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforsequenceclassification
|
.md
|
ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
353_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
353_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforsequenceclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
353_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraformultiplechoice
|
.md
|
ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
353_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
353_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraformultiplechoice
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
353_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electrafortokenclassification
|
.md
|
Electra model with a token classification head on top.
Both the discriminator and generator may be loaded into this model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
353_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electrafortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
353_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electrafortokenclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
353_15_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforquestionanswering
|
.md
|
ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
353_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
|
353_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#electraforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`ElectraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf>
|
353_16_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectramodel
|
.md
|
No docstring available for TFElectraModel
Methods: call
|
353_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraforpretraining
|
.md
|
No docstring available for TFElectraForPreTraining
Methods: call
|
353_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraformaskedlm
|
.md
|
No docstring available for TFElectraForMaskedLM
Methods: call
|
353_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraforsequenceclassification
|
.md
|
No docstring available for TFElectraForSequenceClassification
Methods: call
|
353_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraformultiplechoice
|
.md
|
No docstring available for TFElectraForMultipleChoice
Methods: call
|
353_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectrafortokenclassification
|
.md
|
No docstring available for TFElectraForTokenClassification
Methods: call
|
353_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraforquestionanswering
|
.md
|
No docstring available for TFElectraForQuestionAnswering
Methods: call
</tf>
<jax>
|
353_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectramodel
|
.md
|
No docstring available for FlaxElectraModel
Methods: __call__
|
353_24_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforpretraining
|
.md
|
No docstring available for FlaxElectraForPreTraining
Methods: __call__
|
353_25_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforcausallm
|
.md
|
No docstring available for FlaxElectraForCausalLM
Methods: __call__
|
353_26_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraformaskedlm
|
.md
|
No docstring available for FlaxElectraForMaskedLM
Methods: __call__
|
353_27_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
|
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforsequenceclassification
|
.md
|
No docstring available for FlaxElectraForSequenceClassification
Methods: __call__
|
353_28_0
|