source
stringclasses 470 values
|
url
stringlengths 49–167
|
file_type
stringclasses 1 value
|
chunk
stringlengths 1–512
|
chunk_id
stringlengths 5–9
|
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizer
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
250_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizer
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
|
250_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizer
|
.md
|
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
|
250_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizer
|
.md
|
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
|
250_3_7
|
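The `sp_model_kwargs` options listed above map directly onto SentencePiece's sampling interface. As a minimal, hypothetical sketch (assuming the public `moussaKam/barthez` checkpoint), subword regularization could be enabled like this:
```python
>>> from transformers import BarthezTokenizer

>>> # Hypothetical illustration: with sampling enabled, repeated calls may return
>>> # different segmentations of the same text (useful as data augmentation).
>>> tokenizer = BarthezTokenizer.from_pretrained(
...     "moussaKam/barthez",
...     sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
... )
>>> tokenizer.tokenize("Bonjour le monde")  # segmentation can vary from call to call
```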
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizer
|
.md
|
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).
|
250_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizerfast
|
.md
|
Adapted from [`CamembertTokenizer`] and [`BartTokenizer`]. Construct a "fast" BARThez tokenizer. Based on
[SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
|
250_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizerfast
|
.md
|
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
|
250_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizerfast
|
.md
|
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
250_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
250_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizerfast
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
250_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizerfast
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`):
|
250_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/barthez.md
|
https://huggingface.co/docs/transformers/en/model_doc/barthez/#bartheztokenizerfast
|
.md
|
additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`):
Additional special tokens used by the tokenizer.
|
250_4_6
|
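As a hedged illustration of the special-token layout described above (again assuming the `moussaKam/barthez` checkpoint), encoding a single sequence and a pair shows that `cls_token` and `sep_token`, not `bos_token`/`eos_token`, frame the inputs:
```python
>>> from transformers import BarthezTokenizerFast

>>> tokenizer = BarthezTokenizerFast.from_pretrained("moussaKam/barthez")
>>> single = tokenizer("Paris")["input_ids"]
>>> tokenizer.convert_ids_to_tokens(single)  # expected: ['<s>', ..., '</s>']
>>> pair = tokenizer("Paris", "Marseille")["input_ids"]
>>> tokenizer.convert_ids_to_tokens(pair)  # expected: ['<s>', ..., '</s>', '</s>', ..., '</s>']
```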
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
251_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
251_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#overview
|
.md
|
The LUKE model was proposed in [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda and Yuji Matsumoto.
It is based on RoBERTa and adds entity embeddings as well as an entity-aware self-attention mechanism, which helps
improve performance on various downstream tasks involving reasoning about entities such as named entity recognition,
|
251_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#overview
|
.md
|
improve performance on various downstream tasks involving reasoning about entities such as named entity recognition,
extractive and cloze-style question answering, entity typing, and relation classification.
The abstract from the paper is the following:
*Entity representations are useful in natural language tasks involving entities. In this paper, we propose new
pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed
|
251_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#overview
|
.md
|
pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed
model treats words and entities in a given text as independent tokens, and outputs contextualized representations of
them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves
predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also
|
251_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#overview
|
.md
|
predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also
propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the
transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model
achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains
|
251_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#overview
|
.md
|
achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains
state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification),
CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question
answering).*
|
251_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#overview
|
.md
|
answering).*
This model was contributed by [ikuyamada](https://huggingface.co/ikuyamada) and [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/studio-ousia/luke).
|
251_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
- This implementation is the same as [`RobertaModel`] with the addition of entity embeddings as well
as an entity-aware self-attention mechanism, which improves performance on tasks involving reasoning about entities.
- LUKE treats entities as input tokens; therefore, it takes `entity_ids`, `entity_attention_mask`,
`entity_token_type_ids` and `entity_position_ids` as extra input. You can obtain those using
[`LukeTokenizer`].
|
251_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
`entity_token_type_ids` and `entity_position_ids` as extra input. You can obtain those using
[`LukeTokenizer`].
- [`LukeTokenizer`] takes `entities` and `entity_spans` (character-based start and end
positions of the entities in the input text) as extra input. `entities` typically consist of [MASK] entities or
Wikipedia entities. A brief description of these entity inputs is as follows:
|
251_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
Wikipedia entities. A brief description of these entity inputs is as follows:
- *Inputting [MASK] entities to compute entity representations*: The [MASK] entity is used to mask entities to be
predicted during pretraining. When LUKE receives the [MASK] entity, it tries to predict the original entity by
gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address
|
251_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address
downstream tasks requiring information about entities in the text, such as entity typing, relation classification, and
named entity recognition.
- *Inputting Wikipedia entities to compute knowledge-enhanced token representations*: LUKE learns rich information
(or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By
|
251_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
(or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By
using Wikipedia entities as input tokens, LUKE outputs token representations enriched by the information stored in
the embeddings of these entities. This is particularly effective for tasks requiring real-world knowledge, such as
question answering.
- There are three head models for the former use case:
|
251_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
question answering.
- There are three head models for the former use case:
- [`LukeForEntityClassification`], for tasks to classify a single entity in an input text such as
entity typing, e.g. the [Open Entity dataset](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html).
This model places a linear head on top of the output entity representation.
- [`LukeForEntityPairClassification`], for tasks to classify the relationship between two entities
|
251_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
- [`LukeForEntityPairClassification`], for tasks to classify the relationship between two entities
such as relation classification, e.g. the [TACRED dataset](https://nlp.stanford.edu/projects/tacred/). This
model places a linear head on top of the concatenated output representation of the pair of given entities.
- [`LukeForEntitySpanClassification`], for tasks to classify the sequence of entity spans, such as
|
251_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
- [`LukeForEntitySpanClassification`], for tasks to classify the sequence of entity spans, such as
named entity recognition (NER). This model places a linear head on top of the output entity representations. You
can address NER using this model by inputting all possible entity spans in the text to the model.
[`LukeTokenizer`] has a `task` argument, which enables you to easily create an input to these
head models by specifying `task="entity_classification"`, `task="entity_pair_classification"`, or
|
251_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
head models by specifying `task="entity_classification"`, `task="entity_pair_classification"`, or
`task="entity_span_classification"`. Please refer to the example code of each head models.
Usage example:
```python
>>> from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification
|
251_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
>>> model = LukeModel.from_pretrained("studio-ousia/luke-base")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
# Example 1: Computing the contextualized entity representation corresponding to the entity mention "Beyoncé"
|
251_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
>>> text = "Beyoncé lives in Los Angeles."
>>> entity_spans = [(0, 7)] # character-based entity span corresponding to "Beyoncé"
>>> inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> word_last_hidden_state = outputs.last_hidden_state
>>> entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations
|
251_2_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
>>> entities = [
... "Beyoncé",
... "Los Angeles",
... ] # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
>>> entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
>>> inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> word_last_hidden_state = outputs.last_hidden_state
|
251_2_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
>>> outputs = model(**inputs)
>>> word_last_hidden_state = outputs.last_hidden_state
>>> entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 3: Classifying the relationship between two entities using LukeForEntityPairClassification head model
|
251_2_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
>>> model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
>>> entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
>>> inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> predicted_class_idx = int(logits[0].argmax())
|
251_2_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#usage-tips
|
.md
|
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> predicted_class_idx = int(logits[0].argmax())
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
```
|
251_2_14
|
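To complement Example 3 above, here is a hedged sketch of the single-entity head, `LukeForEntityClassification`, assuming the released Open Entity checkpoint `studio-ousia/luke-large-finetuned-open-entity`:
```python
>>> from transformers import LukeTokenizer, LukeForEntityClassification

>>> model = LukeForEntityClassification.from_pretrained("studio-ousia/luke-large-finetuned-open-entity")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-open-entity")

>>> text = "Beyoncé lives in Los Angeles."
>>> entity_spans = [(0, 7)]  # character-based span of the single entity to type
>>> inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
>>> outputs = model(**inputs)
>>> predicted_class_idx = int(outputs.logits[0].argmax())
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
```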
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#resources
|
.md
|
- [A demo notebook on how to fine-tune [`LukeForEntityPairClassification`] for relation classification](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LUKE)
- [Notebooks showcasing how to reproduce the results reported in the paper with the HuggingFace implementation of LUKE](https://github.com/studio-ousia/luke/tree/master/notebooks)
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
|
251_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#resources
|
.md
|
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
251_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
This is the configuration class to store the configuration of a [`LukeModel`]. It is used to instantiate a LUKE
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LUKE
[studio-ousia/luke-base](https://huggingface.co/studio-ousia/luke-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
251_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50267):
Vocabulary size of the LUKE model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`LukeModel`].
entity_vocab_size (`int`, *optional*, defaults to 500000):
|
251_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
`input_ids` passed when calling [`LukeModel`].
entity_vocab_size (`int`, *optional*, defaults to 500000):
Entity vocabulary size of the LUKE model. Defines the number of different entities that can be represented
by the `entity_ids` passed when calling [`LukeModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
entity_emb_size (`int`, *optional*, defaults to 256):
The number of dimensions of the entity embedding.
|
251_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
entity_emb_size (`int`, *optional*, defaults to 256):
The number of dimensions of the entity embedding.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
|
251_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
251_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
|
251_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`LukeModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
use_entity_aware_attention (`bool`, *optional*, defaults to `True`):
|
251_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
The epsilon used by the layer normalization layers.
use_entity_aware_attention (`bool`, *optional*, defaults to `True`):
Whether or not the model should use the entity-aware self-attention mechanism proposed in [LUKE: Deep
Contextualized Entity Representations with Entity-aware Self-attention (Yamada et
al.)](https://arxiv.org/abs/2010.01057).
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
|
251_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
The dropout ratio for the classification head.
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 0):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
Examples:
```python
>>> from transformers import LukeConfig, LukeModel
|
251_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeconfig
|
.md
|
>>> # Initializing a LUKE configuration
>>> configuration = LukeConfig()
>>> # Initializing a model from the configuration
>>> model = LukeModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
251_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
Constructs a LUKE tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
```python
>>> from transformers import LukeTokenizer
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
|
251_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
|
251_5_1
|
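A small sketch of the `add_prefix_space=True` workaround mentioned above; given the ids shown in the snippet, the leading word should now be encoded as if it were preceded by a space:
```python
>>> from transformers import LukeTokenizer

>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]  # expected to match the " Hello world" ids above, i.e. [0, 20920, 232, 2]
```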
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods. It also creates entity sequences, namely
`entity_ids`, `entity_attention_mask`, `entity_token_type_ids`, and `entity_position_ids` to be used by the LUKE
model.
Args:
vocab_file (`str`):
|
251_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
model.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
entity_vocab_file (`str`):
Path to the entity vocabulary file.
task (`str`, *optional*):
Task for which you want to prepare sequences. One of `"entity_classification"`,
`"entity_pair_classification"`, or `"entity_span_classification"`. If you specify this argument, the entity
sequence is automatically created based on the given entity span(s).
max_entity_length (`int`, *optional*, defaults to 32):
|
251_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
sequence is automatically created based on the given entity span(s).
max_entity_length (`int`, *optional*, defaults to 32):
The maximum length of `entity_ids`.
max_mention_length (`int`, *optional*, defaults to 30):
The maximum number of tokens inside an entity span.
entity_token_1 (`str`, *optional*, defaults to `<ent>`):
The special token used to represent an entity span in a word token sequence. This token is only used when
`task` is set to `"entity_classification"` or `"entity_pair_classification"`.
|
251_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
`task` is set to `"entity_classification"` or `"entity_pair_classification"`.
entity_token_2 (`str`, *optional*, defaults to `<ent2>`):
The special token used to represent an entity span in a word token sequence. This token is only used when
`task` is set to `"entity_pair_classification"`.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
|
251_5_5
|
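As a hedged illustration of the `task` argument and the `<ent>`/`<ent2>` markers described above (hypothetical usage with the base checkpoint), the tokenizer is expected to wrap the first span with `entity_token_1` and the second with `entity_token_2`:
```python
>>> from transformers import LukeTokenizer

>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base", task="entity_pair_classification")
>>> text = "Beyoncé lives in Los Angeles."
>>> inputs = tokenizer(text, entity_spans=[(0, 7), (17, 28)], return_tensors="pt")
>>> # the decoded word sequence should show <ent>...<ent> around "Beyoncé" and <ent2>...<ent2> around "Los Angeles"
>>> tokenizer.decode(inputs["input_ids"][0])
```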
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
|
251_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
251_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
251_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
251_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
|
251_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#luketokenizer
|
.md
|
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The LUKE tokenizer detects the beginning of words by the preceding space.)
Methods: __call__
- save_vocabulary
|
251_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukemodel
|
.md
|
The bare LUKE model transformer outputting raw hidden-states for both word tokens and entities without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
251_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukemodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
251_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukemodel
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeformaskedlm
|
.md
|
The LUKE model with a language modeling head and entity prediction head on top for masked language modeling and
masked entity prediction.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
251_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeformaskedlm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
251_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeformaskedlm
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentityclassification
|
.md
|
The LUKE model with a classification head on top (a linear layer on top of the hidden state of the first entity
token) for entity classification tasks, such as Open Entity.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
251_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentityclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
|
251_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentityclassification
|
.md
|
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentitypairclassification
|
.md
|
The LUKE model with a classification head on top (a linear layer on top of the hidden states of the two entity
tokens) for entity pair classification tasks, such as TACRED.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
251_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentitypairclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
|
251_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentitypairclassification
|
.md
|
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentityspanclassification
|
.md
|
The LUKE model with a span classification head on top (a linear layer on top of the hidden states output) for tasks
such as named entity recognition.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
251_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentityspanclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
251_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforentityspanclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_10_2
|
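A hedged sketch of the NER use case this head is meant for, assuming the released `studio-ousia/luke-large-finetuned-conll-2003` checkpoint: every candidate span over the words of the sentence is scored, and spans not predicted as the null class are printed.
```python
>>> from transformers import LukeTokenizer, LukeForEntitySpanClassification

>>> model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")

>>> text = "Beyoncé lives in Los Angeles"
>>> # enumerate all candidate spans from word boundaries (character offsets of each word)
>>> word_start_positions = [0, 8, 14, 17, 21]
>>> word_end_positions = [7, 13, 16, 20, 28]
>>> entity_spans = [
...     (start, end)
...     for i, start in enumerate(word_start_positions)
...     for end in word_end_positions[i:]
... ]

>>> inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predictions = logits.argmax(-1).squeeze().tolist()
>>> for span, predicted_class_idx in zip(entity_spans, predictions):
...     if predicted_class_idx != 0:  # assumption: index 0 is the "no entity" class
...         print(text[span[0] : span[1]], model.config.id2label[predicted_class_idx])
```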
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforsequenceclassification
|
.md
|
The LUKE Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
251_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
251_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforsequenceclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeformultiplechoice
|
.md
|
The LUKE Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
251_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
251_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeformultiplechoice
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukefortokenclassification
|
.md
|
The LUKE Model with a token classification head on top (a linear layer on top of the hidden-states output). To
solve the Named-Entity Recognition (NER) task with LUKE, `LukeForEntitySpanClassification` is more suitable than this
class.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
251_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukefortokenclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
|
251_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukefortokenclassification
|
.md
|
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforquestionanswering
|
.md
|
The LUKE Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
251_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
|
251_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/luke.md
|
https://huggingface.co/docs/transformers/en/model_doc/luke/#lukeforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`LukeConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
251_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
252_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
252_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
The FocalNet model was proposed in [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
FocalNets completely replace self-attention (used in models like [ViT](vit) and [Swin](swin)) by a focal modulation mechanism for modeling token interactions in vision.
The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation.
|
252_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
The abstract from the paper is the following:
|
252_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
*We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its
|
252_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains
|
252_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1\times outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic
|
252_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new
|
252_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3.*
|
252_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#overview
|
.md
|
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/microsoft/FocalNet).
|
252_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#focalnetconfig
|
.md
|
This is the configuration class to store the configuration of a [`FocalNetModel`]. It is used to instantiate a
FocalNet model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the FocalNet
[microsoft/focalnet-tiny](https://huggingface.co/microsoft/focalnet-tiny) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
252_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#focalnetconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 4):
The size (resolution) of each patch in the embeddings layer.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
embed_dim (`int`, *optional*, defaults to 96):
|
252_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#focalnetconfig
|
.md
|
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
embed_dim (`int`, *optional*, defaults to 96):
Dimensionality of patch embedding.
use_conv_embed (`bool`, *optional*, defaults to `False`):
Whether to use convolutional embedding. The authors noted that using convolutional embedding usually
improves performance, but it is not used by default.
hidden_sizes (`List[int]`, *optional*, defaults to `[192, 384, 768, 768]`):
Dimensionality (hidden size) at each stage.
|
252_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/focalnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/focalnet/#focalnetconfig
|
.md
|
hidden_sizes (`List[int]`, *optional*, defaults to `[192, 384, 768, 768]`):
Dimensionality (hidden size) at each stage.
depths (`list(int)`, *optional*, defaults to `[2, 2, 6, 2]`):
Depth (number of layers) of each stage in the encoder.
focal_levels (`list(int)`, *optional*, defaults to `[2, 2, 2, 2]`):
Number of focal levels in each layer of the respective stages in the encoder.
focal_windows (`list(int)`, *optional*, defaults to `[3, 3, 3, 3]`):
|
252_2_3
|
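Mirroring the `LukeConfig` example earlier in this dump, a minimal configuration sketch for FocalNet (random weights, no checkpoint download):
```python
>>> from transformers import FocalNetConfig, FocalNetModel

>>> # Initializing a FocalNet configuration with the default (focalnet-tiny style) values
>>> configuration = FocalNetConfig()

>>> # Initializing a model (with random weights) from that configuration
>>> model = FocalNetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```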