source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | `num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-05): | 391_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 1):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id. | 391_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie the input and output word embeddings.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type | 391_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and expect the model to work on longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*): | 391_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | 'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining. | 391_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to the value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear | 391_3_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | `beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (< | 391_3_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | `short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden | 391_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
attention_bias (`bool`, *optional*, defaults to `False`): | 391_3_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
lambda_std_dev (`float`, *optional*, defaults to 0.1):
The standard deviation used to initialize the lambda parameter in the attention layers.
head_dim (`int`, *optional*): | 391_3_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | The standard deviation used to initialize the lambda parameter in the attention layers.
head_dim (`int`, *optional*):
The attention head dimension. If `None`, it defaults to `hidden_size // num_attention_heads`.
```python
>>> from transformers import DiffLlamaModel, DiffLlamaConfig | 391_3_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaconfig | .md | >>> # Initializing a DiffLlama diffllama-7b style configuration
>>> configuration = DiffLlamaConfig()
>>> # Initializing a model from the diffllama-7b style configuration
>>> model = DiffLlamaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 391_3_16 |
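As a rough illustration of the `rope_scaling` dictionary described above, the sketch below builds a configuration with `'linear'` scaling. The argument values are placeholders chosen for illustration and do not correspond to any released checkpoint.
```python
>>> from transformers import DiffLlamaConfig

>>> # Sketch: extend the usable context window with 'linear' RoPE scaling.
>>> # A factor of 4.0 lets the model handle sequences roughly 4x longer than
>>> # the original pre-trained maximum length.
>>> configuration = DiffLlamaConfig(
...     max_position_embeddings=8192,
...     rope_scaling={"rope_type": "linear", "factor": 4.0},
... )
```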
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamamodel | .md | The bare DiffLlama Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 391_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamamodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DiffLlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 391_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamamodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`DiffLlamaDecoderLayer`]
Args:
config: DiffLlamaConfig
Methods: forward | 391_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforcausallm | .md | No docstring available for DiffLlamaForCausalLM
Methods: forward | 391_5_0 |
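Since no docstring is rendered for [`DiffLlamaForCausalLM`], here is a minimal, hedged sketch of a forward pass with a randomly initialized model built from a tiny configuration. The size arguments are assumed to follow the usual Llama-style names and the values are illustrative only; no pretrained DiffLlama checkpoint is referenced.
```python
>>> import torch
>>> from transformers import DiffLlamaConfig, DiffLlamaForCausalLM

>>> # Tiny, randomly initialized model (illustrative sizes, not a real checkpoint)
>>> config = DiffLlamaConfig(
...     vocab_size=1000,
...     hidden_size=64,
...     intermediate_size=128,
...     num_hidden_layers=2,
...     num_attention_heads=4,
... )
>>> model = DiffLlamaForCausalLM(config)

>>> # Next-token logits for a random batch of 8 token ids
>>> input_ids = torch.randint(0, config.vocab_size, (1, 8))
>>> with torch.no_grad():
...     logits = model(input_ids).logits
>>> logits.shape
torch.Size([1, 8, 1000])
```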
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforsequenceclassification | .md | The DiffLlama Model transformer with a sequence classification head on top (linear layer).
[`DiffLlamaForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If | 391_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforsequenceclassification | .md | `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the | 391_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforsequenceclassification | .md | This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 391_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforsequenceclassification | .md | and behavior.
Parameters:
config ([`DiffLlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 391_6_3 |
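To illustrate the last-token pooling behaviour described above, here is a hedged sketch with a tiny, randomly initialized model. The configuration values are illustrative assumptions rather than a released checkpoint; setting `pad_token_id` lets the model locate the last non-padding token in each row.
```python
>>> import torch
>>> from transformers import DiffLlamaConfig, DiffLlamaForSequenceClassification

>>> config = DiffLlamaConfig(
...     vocab_size=1000,
...     hidden_size=64,
...     intermediate_size=128,
...     num_hidden_layers=2,
...     num_attention_heads=4,
...     pad_token_id=0,
...     num_labels=2,
... )
>>> model = DiffLlamaForSequenceClassification(config)

>>> # Two sequences of different lengths, right-padded with the pad token id (0)
>>> input_ids = torch.tensor([[5, 6, 7, 8], [9, 10, 0, 0]])
>>> attention_mask = (input_ids != 0).long()
>>> with torch.no_grad():
...     logits = model(input_ids, attention_mask=attention_mask).logits
>>> logits.shape
torch.Size([2, 2])
```
The classification logits are computed from the hidden state of the last token that is not a padding token in each row.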
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforquestionanswering | .md | The DiffLlama Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 391_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforquestionanswering | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DiffLlamaConfig`]): | 391_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamaforquestionanswering | .md | and behavior.
Parameters:
config ([`DiffLlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 391_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamafortokenclassification | .md | The DiffLlama Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 391_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamafortokenclassification | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DiffLlamaConfig`]): | 391_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/diffllama.md | https://huggingface.co/docs/transformers/en/model_doc/diffllama/#diffllamafortokenclassification | .md | and behavior.
Parameters:
config ([`DiffLlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 391_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 392_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 392_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#overview | .md | The BERTweet model was proposed in [BERTweet: A pre-trained language model for English Tweets](https://www.aclweb.org/anthology/2020.emnlp-demos.2.pdf) by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
*We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having
the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et | 392_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#overview | .md | the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et
al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al.,
2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks:
Part-of-speech tagging, Named-entity recognition and text classification.* | 392_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#overview | .md | Part-of-speech tagging, Named-entity recognition and text classification.*
This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/BERTweet). | 392_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#usage-example | .md | ```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
>>> # For transformers v4.x+:
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
>>> # For transformers v3.x:
>>> # tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
>>> # INPUT TWEET IS ALREADY NORMALIZED!
>>> line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:" | 392_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#usage-example | .md | >>> input_ids = torch.tensor([tokenizer.encode(line)])
>>> with torch.no_grad():
... features = bertweet(input_ids) # Model outputs are now tuples
>>> # With TensorFlow 2.0+:
>>> # from transformers import TFAutoModel
>>> # bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
```
<Tip>
This implementation is the same as BERT, except for the tokenization method. Refer to the [BERT documentation](bert) for
API reference information.
</Tip> | 392_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#bertweettokenizer | .md | Constructs a BERTweet tokenizer, using Byte-Pair-Encoding.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
normalization (`bool`, *optional*, defaults to `False`):
Whether or not to apply a normalization preprocess.
bos_token (`str`, *optional*, defaults to `"<s>"`): | 392_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#bertweettokenizer | .md | Whether or not to apply a normalization preprocess.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip> | 392_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#bertweettokenizer | .md | </Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for | 392_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#bertweettokenizer | .md | The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence | 392_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#bertweettokenizer | .md | The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`): | 392_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bertweet.md | https://huggingface.co/docs/transformers/en/model_doc/bertweet/#bertweettokenizer | .md | token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict. | 392_3_5 |
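To apply the tweet normalization controlled by the `normalization` argument above (user handles become `@USER`, links become `HTTPURL`, and emoji are converted to text, as in the earlier usage example), a sketch along the following lines should work with the `vinai/bertweet-base` checkpoint shown earlier. It assumes the third-party `emoji` package is installed; the example tweet is made up for illustration.
```python
>>> from transformers import AutoTokenizer

>>> # Enable tweet normalization so raw (un-normalized) tweets can be tokenized directly
>>> tokenizer = AutoTokenizer.from_pretrained(
...     "vinai/bertweet-base", use_fast=False, normalization=True
... )

>>> raw_tweet = "SC has first two presumptive cases of coronavirus, DHEC confirms https://t.co/abc123 via @DHEC 😢"
>>> input_ids = tokenizer(raw_tweet, return_tensors="pt").input_ids
```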
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 393_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 393_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbert | .md | <div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=modernbert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-modernbert-blueviolet">
</a>
<a href="https://arxiv.org/abs/2412.13663">
<img alt="Paper page" src="https://img.shields.io/badge/Paper%20page-2412.13663-green">
</a>
</div> | 393_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | The ModernBERT model was proposed in [Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference](https://arxiv.org/abs/2412.13663) by Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard and Iacopo Poli. | 393_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | It is a refresh of the traditional encoder architecture, as used in previous models such as [BERT](https://huggingface.co/docs/transformers/en/model_doc/bert) and [RoBERTa](https://huggingface.co/docs/transformers/en/model_doc/roberta).
It builds on BERT and implements many modern architectural improvements which have been developed since its original release, such as:
- [Rotary Positional Embeddings](https://huggingface.co/blog/designing-positional-encoding) to support sequences of up to 8192 tokens. | 393_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | - [Unpadding](https://arxiv.org/abs/2208.08124) to ensure no compute is wasted on padding tokens, speeding up processing time for batches with mixed-length sequences.
- [GeGLU](https://arxiv.org/abs/2002.05202) activations replacing the original MLP layers, shown to improve performance.
- [Alternating Attention](https://arxiv.org/abs/2004.05150v2) where most attention layers employ a sliding window of 128 tokens, with Global Attention only used every 3 layers. | 393_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | - [Flash Attention](https://github.com/Dao-AILab/flash-attention) to speed up processing.
- A model designed following the recent [The Case for Co-Designing Model Architectures with Hardware](https://arxiv.org/abs/2401.14489) recommendations, ensuring maximum efficiency across inference GPUs.
- Modern training data scales (2 trillion tokens) and mixtures (including code and math data).
The abstract from the paper is the following: | 393_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | *Encoder-only transformer models such as BERT offer a great performance-size tradeoff for retrieval and classification tasks with respect to larger decoder-only models. Despite being the workhorse of numerous production pipelines, there have been limited Pareto improvements to BERT since its release. In this paper, we introduce ModernBERT, bringing modern model optimizations to encoder-only models and representing a major Pareto improvement over older encoders. Trained on 2 trillion tokens with a native | 393_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | encoder-only models and representing a major Pareto improvement over older encoders. Trained on 2 trillion tokens with a native 8192 sequence length, ModernBERT models exhibit state-of-the-art results on a large pool of evaluations encompassing diverse classification tasks and both single and multi-vector retrieval on different domains (including code). In addition to strong downstream performance, ModernBERT is also the most speed and memory efficient encoder and is designed for inference on common GPUs.* | 393_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | performance, ModernBERT is also the most speed and memory efficient encoder and is designed for inference on common GPUs.* | 393_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#overview | .md | The original code can be found [here](https://github.com/answerdotai/modernbert). | 393_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ModernBert.
<PipelineTag pipeline="text-classification"/> | 393_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#resources | .md | <PipelineTag pipeline="text-classification"/>
- A notebook on how to [finetune for General Language Understanding Evaluation (GLUE) with Transformers](https://github.com/AnswerDotAI/ModernBERT/blob/main/examples/finetune_modernbert_on_glue.ipynb), also available as a Google Colab [notebook](https://colab.research.google.com/github/AnswerDotAI/ModernBERT/blob/main/examples/finetune_modernbert_on_glue.ipynb). 🌎
<PipelineTag pipeline="sentence-similarity"/> | 393_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#resources | .md | <PipelineTag pipeline="sentence-similarity"/>
- A script on how to [finetune for text similarity or information retrieval with Sentence Transformers](https://github.com/AnswerDotAI/ModernBERT/blob/main/examples/train_st.py). 🌎
- A script on how to [finetune for information retrieval with PyLate](https://github.com/AnswerDotAI/ModernBERT/blob/main/examples/train_pylate.py). 🌎
<PipelineTag pipeline="fill-mask"/>
- [Masked language modeling task guide](../tasks/masked_language_modeling) | 393_3_2 |
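As a quick start for the fill-mask task listed above, a short sketch using the [`pipeline`] API with the `answerdotai/ModernBERT-base` checkpoint referenced in the next section might look as follows; the exact predictions depend on the checkpoint.
```python
>>> from transformers import pipeline

>>> fill_mask = pipeline("fill-mask", model="answerdotai/ModernBERT-base")
>>> predictions = fill_mask("The capital of France is [MASK].")
>>> print([prediction["token_str"] for prediction in predictions])
```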
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | This is the configuration class to store the configuration of a [`ModernBertModel`]. It is used to instantiate a ModernBert
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the ModernBERT-base.
e.g. [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) | 393_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | e.g. [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50368):
Vocabulary size of the ModernBert model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`ModernBertModel`] | 393_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | `inputs_ids` passed when calling [`ModernBertModel`]
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 1152):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 22):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder. | 393_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | Number of attention heads for each attention layer in the Transformer encoder.
hidden_activation (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder. Will default to `"gelu"`
if not specified.
max_position_embeddings (`int`, *optional*, defaults to 8192):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02): | 393_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_cutoff_factor (`float`, *optional*, defaults to 2.0):
The cutoff factor for the truncated_normal_initializer for initializing all weight matrices.
norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the normalization layers. | 393_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the normalization layers.
norm_bias (`bool`, *optional*, defaults to `False`):
Whether to use bias in the normalization layers.
pad_token_id (`int`, *optional*, defaults to 50283):
Padding token id.
eos_token_id (`int`, *optional*, defaults to 50282):
End of stream token id.
bos_token_id (`int`, *optional*, defaults to 50281):
Beginning of stream token id.
cls_token_id (`int`, *optional*, defaults to 50281):
Classification token id. | 393_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | Beginning of stream token id.
cls_token_id (`int`, *optional*, defaults to 50281):
Classification token id.
sep_token_id (`int`, *optional*, defaults to 50282):
Separation token id.
global_rope_theta (`float`, *optional*, defaults to 160000.0):
The base period of the global RoPE embeddings.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0): | 393_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
global_attn_every_n_layers (`int`, *optional*, defaults to 3):
The number of layers between global attention layers.
local_attention (`int`, *optional*, defaults to 128):
The window size for local attention.
local_rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the local RoPE embeddings.
embedding_dropout (`float`, *optional*, defaults to 0.0): | 393_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | The base period of the local RoPE embeddings.
embedding_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the embeddings.
mlp_bias (`bool`, *optional*, defaults to `False`):
Whether to use bias in the MLP layers.
mlp_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the MLP layers.
decoder_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the decoder layers.
classifier_pooling (`str`, *optional*, defaults to `"cls"`): | 393_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | Whether to use bias in the decoder layers.
classifier_pooling (`str`, *optional*, defaults to `"cls"`):
The pooling method for the classifier. Should be either `"cls"` or `"mean"`. In local attention layers, the
CLS token doesn't attend to all tokens on long sequences.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
classifier_bias (`bool`, *optional*, defaults to `False`):
Whether to use bias in the classifier. | 393_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | classifier_bias (`bool`, *optional*, defaults to `False`):
Whether to use bias in the classifier.
classifier_activation (`str`, *optional*, defaults to `"gelu"`):
The activation function for the classifier.
deterministic_flash_attn (`bool`, *optional*, defaults to `False`):
Whether to use deterministic flash attention. If `False`, inference will be faster but not deterministic.
sparse_prediction (`bool`, *optional*, defaults to `False`): | 393_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | sparse_prediction (`bool`, *optional*, defaults to `False`):
Whether to use sparse prediction for the masked language model instead of returning the full dense logits.
sparse_pred_ignore_index (`int`, *optional*, defaults to -100):
The index to ignore for the sparse prediction.
reference_compile (`bool`, *optional*):
Whether to compile the layers of the model which were compiled during pretraining. If `None`, then parts of | 393_4_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | Whether to compile the layers of the model which were compiled during pretraining. If `None`, then parts of
the model will be compiled if 1) `triton` is installed, 2) the model is not on MPS, 3) the model is not
shared between devices, and 4) the model is not resized after initialization. If `True`, then the model may
be faster in some scenarios.
repad_logits_with_grad (`bool`, *optional*, defaults to `False`): | 393_4_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | be faster in some scenarios.
repad_logits_with_grad (`bool`, *optional*, defaults to `False`):
When True, ModernBertForMaskedLM keeps track of the logits' gradient when repadding for output. This only
applies when using Flash Attention 2 with passed labels. Otherwise output logits always have a gradient.
Examples:
```python
>>> from transformers import ModernBertModel, ModernBertConfig | 393_4_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertconfig | .md | >>> # Initializing a ModernBert style configuration
>>> configuration = ModernBertConfig()
>>> # Initializing a model from the modernbert-base style configuration
>>> model = ModernBertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
<frameworkcontent>
<pt> | 393_4_14 |
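Beyond the defaults, the attention and pooling options documented above can be adjusted when instantiating the configuration. The sketch below only touches arguments listed in this section; the values are illustrative (the defaults already correspond to ModernBERT-base).
```python
>>> from transformers import ModernBertConfig

>>> configuration = ModernBertConfig(
...     global_attn_every_n_layers=3,   # a global attention layer every 3 layers
...     local_attention=128,            # sliding-window size for the local attention layers
...     classifier_pooling="mean",      # mean pooling can help on long sequences
... )
```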
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertmodel | .md | The bare ModernBert Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 393_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ModernBertConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 393_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertmodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 393_5_2 |
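A minimal sketch for extracting hidden states, assuming the `answerdotai/ModernBERT-base` checkpoint mentioned earlier:
```python
>>> import torch
>>> from transformers import AutoTokenizer, ModernBertModel

>>> tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
>>> model = ModernBertModel.from_pretrained("answerdotai/ModernBERT-base")

>>> inputs = tokenizer("ModernBERT supports sequences of up to 8192 tokens.", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> hidden_states = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```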
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertformaskedlm | .md | The ModernBert Model with a decoder head on top that is used for masked language modeling.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 393_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertformaskedlm | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ModernBertConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 393_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertformaskedlm | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 393_6_2 |
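A hedged sketch of masked-token prediction with this class, assuming the `answerdotai/ModernBERT-base` checkpoint; the predicted token depends entirely on the loaded weights.
```python
>>> import torch
>>> from transformers import AutoTokenizer, ModernBertForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
>>> model = ModernBertForMaskedLM.from_pretrained("answerdotai/ModernBERT-base")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Pick the highest-scoring token at the masked position
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> print(tokenizer.decode(predicted_id))
```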
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertforsequenceclassification | .md | The ModernBert Model with a sequence classification head on top that performs pooling.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 393_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertforsequenceclassification | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ModernBertConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 393_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertforsequenceclassification | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 393_7_2 |
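A short sketch of loading this class for a two-label task, assuming the `answerdotai/ModernBERT-base` checkpoint; the classification head is newly initialized here and would normally be fine-tuned before use.
```python
>>> import torch
>>> from transformers import AutoTokenizer, ModernBertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
>>> model = ModernBertForSequenceClassification.from_pretrained(
...     "answerdotai/ModernBERT-base", num_labels=2
... )

>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> logits.shape
torch.Size([1, 2])
```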
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertfortokenclassification | .md | The ModernBert Model with a token classification head on top, e.g. for Named Entity Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 393_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertfortokenclassification | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`ModernBertConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 393_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/modernbert.md | https://huggingface.co/docs/transformers/en/model_doc/modernbert/#modernbertfortokenclassification | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
</frameworkcontent> | 393_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/ | .md | <!--Copyright 2023 The Intel Labs Team Authors, The Microsoft Research Team Authors and HuggingFace Inc. team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on | 394_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/ | .md | Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 394_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | The BridgeTower model was proposed in [BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a | 394_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder, thus achieving remarkable performance on various downstream tasks with almost negligible additional parameters and computational costs.
This paper has been accepted to the [AAAI'23](https://aaai.org/Conferences/AAAI-23/) conference.
The abstract from the paper is the following: | 394_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | The abstract from the paper is the following:
*Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years.
Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. | 394_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder. | 394_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks. | 394_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs.
Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.* | 394_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/bridgetower_architecture%20.jpg"
alt="drawing" width="600"/>
<small> BridgeTower architecture. Taken from the <a href="https://arxiv.org/abs/2206.08657">original paper.</a> </small> | 394_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#overview | .md | <small> BridgeTower architecture. Taken from the <a href="https://arxiv.org/abs/2206.08657">original paper.</a> </small>
This model was contributed by [Anahita Bhiwandiwalla](https://huggingface.co/anahita-b), [Tiep Le](https://huggingface.co/Tile) and [Shaoyen Tseng](https://huggingface.co/shaoyent). The original code can be found [here](https://github.com/microsoft/BridgeTower). | 394_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge layers.
The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder.
In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture. | 394_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture.
The [`BridgeTowerProcessor`] wraps [`RobertaTokenizer`] and [`BridgeTowerImageProcessor`] into a single instance to both
encode the text and prepare the images respectively.
The following example shows how to run contrastive learning using [`BridgeTowerProcessor`] and [`BridgeTowerForContrastiveLearning`].
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning | 394_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | ```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
>>> import requests
>>> from PIL import Image | 394_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
>>> model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") | 394_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | >>> # forward pass
>>> scores = dict()
>>> for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs
```
The following example shows how to run image-text retrieval using [`BridgeTowerProcessor`] and [`BridgeTowerForImageAndTextRetrieval`].
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
>>> import requests
>>> from PIL import Image | 394_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") | 394_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | >>> # forward pass
>>> scores = dict()
>>> for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs.logits[0, 1].item()
```
The following example shows how to run masked language modeling using [`BridgeTowerProcessor`] and [`BridgeTowerForMaskedLM`].
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
>>> from PIL import Image
>>> import requests | 394_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | >>> url = "http://images.cocodataset.org/val2017/000000360943.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
>>> text = "a <mask> looking out of the window"
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> # prepare inputs
>>> encoding = processor(image, text, return_tensors="pt")
>>> # forward pass
>>> outputs = model(**encoding) | 394_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | >>> # forward pass
>>> outputs = model(**encoding)
>>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) | 394_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bridgetower.md | https://huggingface.co/docs/transformers/en/model_doc/bridgetower/#usage-tips-and-examples | .md | >>> print(results)
.a cat looking out of the window.
```
Tips:
- This implementation of BridgeTower uses [`RobertaTokenizer`] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings.
- Checkpoints for pre-trained [bridgeTower-base](https://huggingface.co/BridgeTower/bridgetower-base) and [bridgetower masked language modeling and image text matching](https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm) are released. | 394_2_9 |