| source (string, 470 classes) | url (string, length 49–167) | file_type (string, 1 class) | chunk (string, length 1–512) | chunk_id (string, length 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#usage-tips
|
.md
|
model = LiltModel.from_pretrained("path_to_your_files")
model.push_to_hub("name_of_repo_on_the_hub")
```
- When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer.
- As [lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) uses the same vocabulary as [LayoutLMv3](layoutlmv3), one can use [`LayoutLMv3TokenizerFast`] to prepare data for the model.
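A minimal sketch of that preparation step, assuming hypothetical words and bounding boxes normalized to a 0-1000 scale; the tokenizer is loaded from the `microsoft/layoutlmv3-base` checkpoint since the vocabularies match, and `outputs.last_hidden_state` is only printed for illustration:
```python
from transformers import LayoutLMv3TokenizerFast, LiltModel

# Hypothetical OCR output: words and their bounding boxes, normalized to a 0-1000 scale
words = ["Hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]

# The LayoutLMv3 tokenizer can be used because the vocabularies match (see the tip above)
tokenizer = LayoutLMv3TokenizerFast.from_pretrained("microsoft/layoutlmv3-base")
model = LiltModel.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")

# The tokenizer returns input_ids, attention_mask and bbox, which LiltModel consumes directly
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```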
|
195_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#usage-tips
|
.md
|
The same is true for [lilt-infoxlm-base](https://huggingface.co/SCUT-DLVCLab/lilt-infoxlm-base): one can use [`LayoutXLMTokenizerFast`] for that model.
|
195_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LiLT.
- Demo notebooks for LiLT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT).
**Documentation resources**
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
|
195_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#resources
|
.md
|
- [Question answering task guide](../tasks/question_answering)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
195_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
This is the configuration class to store the configuration of a [`LiltModel`]. It is used to instantiate a LiLT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LiLT
[SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
195_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the LiLT model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`LiltModel`].
hidden_size (`int`, *optional*, defaults to 768):
|
195_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
`inputs_ids` passed when calling [`LiltModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer. Should be a multiple of 24.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
|
195_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
|
195_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
195_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`LiltModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
|
195_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
|
195_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
channel_shrink_ratio (`int`, *optional*, defaults to 4):
|
195_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
The dropout ratio for the classification head.
channel_shrink_ratio (`int`, *optional*, defaults to 4):
The shrink ratio compared to the `hidden_size` for the channel dimension of the layout embeddings.
max_2d_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum value that the 2D position embedding might ever be used with. Typically set this to something
large just in case (e.g., 1024).
Examples:
```python
>>> from transformers import LiltConfig, LiltModel
|
195_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltconfig
|
.md
|
>>> # Initializing a LiLT SCUT-DLVCLab/lilt-roberta-en-base style configuration
>>> configuration = LiltConfig()
>>> # Randomly initializing a model from the SCUT-DLVCLab/lilt-roberta-en-base style configuration
>>> model = LiltModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
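The defaults can also be overridden for smaller experiments; the values below are illustrative only, keeping `hidden_size` a multiple of 24 (as noted above) and divisible by the number of attention heads:
```python
>>> from transformers import LiltConfig, LiltModel

>>> # Illustrative non-default values (hypothetical, not a released checkpoint)
>>> custom_configuration = LiltConfig(hidden_size=384, num_attention_heads=12, channel_shrink_ratio=4)
>>> custom_model = LiltModel(custom_configuration)
```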
|
195_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltmodel
|
.md
|
The bare LiLT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
195_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LiltConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
195_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltmodel
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
195_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltforsequenceclassification
|
.md
|
LiLT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
195_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LiltConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
195_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltforsequenceclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
195_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltfortokenclassification
|
.md
|
Lilt Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
195_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltfortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LiltConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
195_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltfortokenclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
195_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltforquestionanswering
|
.md
|
Lilt Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
195_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LiltConfig`]): Model configuration class with all the parameters of the
|
195_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/lilt.md
|
https://huggingface.co/docs/transformers/en/model_doc/lilt/#liltforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`LiltConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
195_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
196_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
196_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#overview
|
.md
|
The M2M100 model was proposed in [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky,
Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy
Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
The abstract from the paper is the following:
|
196_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#overview
|
.md
|
Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
The abstract from the paper is the following:
*Existing work in translation demonstrated the potential of massively multilingual machine translation by training a
single model able to translate between any pair of languages. However, much of this work is English-Centric by training
only on data which was translated from or to English. While this is supported by large sources of training data, it
|
196_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#overview
|
.md
|
only on data which was translated from or to English. While this is supported by large sources of training data, it
does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation
model that can translate directly between any pair of 100 languages. We build and open source a training dataset that
covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how
|
196_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#overview
|
.md
|
covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how
to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters
to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly
translating between non-English directions while performing competitively to the best single systems of WMT. We
|
196_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#overview
|
.md
|
translating between non-English directions while performing competitively to the best single systems of WMT. We
open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.*
This model was contributed by [valhalla](https://huggingface.co/valhalla).
|
196_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is
multilingual, it expects the sequences in a certain format: a special language id token is used as a prefix in both the
source and target text. The text format is `[lang_code] X [eos]`, where `lang_code` is the source language
id for source text and the target language id for target text, and `X` is the source or target text.
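As a quick illustration of this format (a sketch assuming the `facebook/m2m100_418M` checkpoint), the tokenizer prepends the language id token and appends the end-of-sequence token automatically:
```python
from transformers import M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
tokens = tokenizer.convert_ids_to_tokens(tokenizer("Life is like a box of chocolates.").input_ids)
print(tokens[0], tokens[-1])  # expected: the "__en__" language id token and "</s>"
```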
|
196_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
id for source text and target language id for target text, with `X` being the source or target text.
The [`M2M100Tokenizer`] depends on `sentencepiece` so be sure to install it before running the
examples. To install `sentencepiece` run `pip install sentencepiece`.
**Supervised Training**
```python
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer
|
196_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
|
196_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
loss = model(**model_inputs).loss # forward pass
```
**Generation**
M2M100 uses the `eos_token_id` as the `decoder_start_token_id` for generation with the target language id
being forced as the first generated token. To force the target language id as the first generated token, pass the
*forced_bos_token_id* parameter to the *generate* method. The following example shows how to translate from
Hindi to French and from Chinese to English using the *facebook/m2m100_418M* checkpoint.
```python
|
196_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
Hindi to French and Chinese to English using the *facebook/m2m100_418M* checkpoint.
```python
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
|
196_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
>>> hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
>>> chinese_text = "生活就像一盒巧克力。"
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
|
196_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
>>> # translate Hindi to French
>>> tokenizer.src_lang = "hi"
>>> encoded_hi = tokenizer(hi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"La vie est comme une boîte de chocolat."
|
196_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage-tips-and-examples
|
.md
|
>>> # translate Chinese to English
>>> tokenizer.src_lang = "zh"
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Life is like a box of chocolate."
```
|
196_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#resources
|
.md
|
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
196_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
This is the configuration class to store the configuration of a [`M2M100Model`]. It is used to instantiate an
M2M100 model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the M2M100
[facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
196_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the M2M100 model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`M2M100Model`].
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
|
196_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
|
196_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
|
196_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
196_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
max_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
196_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
|
196_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
|
196_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
>>> from transformers import M2M100Config, M2M100Model
|
196_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100config
|
.md
|
>>> # Initializing a M2M100 facebook/m2m100_418M style configuration
>>> configuration = M2M100Config()
>>> # Initializing a model (with random weights) from the facebook/m2m100_418M style configuration
>>> model = M2M100Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
196_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
Construct an M2M100 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
spm_file (`str`):
Path to [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that
contains the vocabulary.
|
196_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
contains the vocabulary.
src_lang (`str`, *optional*):
A string representing the source language.
tgt_lang (`str`, *optional*):
A string representing the target language.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
196_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
196_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
language_codes (`str`, *optional*, defaults to `"m2m100"`):
What language codes to use. Should be one of `"m2m100"` or `"wmt21"`.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
|
196_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
|
196_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Examples:
```python
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
|
196_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="ro")
>>> src_text = " UN Chief Says There Is No Military Solution in Syria"
>>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
>>> outputs = model(**model_inputs) # should work
```
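For example, subword regularization could be enabled by forwarding `sp_model_kwargs` when loading the tokenizer; the sampling values below are illustrative, not recommended settings:
```python
from transformers import M2M100Tokenizer

# Illustrative subword-regularization settings forwarded to SentencePieceProcessor.__init__()
tokenizer = M2M100Tokenizer.from_pretrained(
    "facebook/m2m100_418M",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)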
|
196_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100tokenizer
|
.md
|
>>> outputs = model(**model_inputs) # should work
```
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
196_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100model
|
.md
|
The bare M2M100 Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
196_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100model
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`M2M100Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
196_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100model
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
196_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100forconditionalgeneration
|
.md
|
The M2M100 Model with a language modeling head. Can be used for summarization.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
196_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100forconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`M2M100Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
196_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#m2m100forconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
196_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#using-flash-attention-2
|
.md
|
Flash Attention 2 is a faster, optimized version of the attention scores computation which relies on `cuda` kernels.
|
196_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#installation
|
.md
|
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features).
Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
|
196_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage
|
.md
|
To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). You can use either `torch.float16` or `torch.bfloat16` precision.
```python
>>> import torch
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
|
196_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage
|
.md
|
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to("cuda").eval()
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
|
196_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#usage
|
.md
|
>>> # translate Hindi to French
>>> hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
>>> tokenizer.src_lang = "hi"
>>> encoded_hi = tokenizer(hi_text, return_tensors="pt").to("cuda")
>>> generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"La vie est comme une boîte de chocolat."
```
|
196_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#expected-speedups
|
.md
|
Below is an expected speedup diagram that compares pure inference time between the native implementation and Flash Attention 2.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/visheratin/documentation-images/resolve/main/nllb-speedup.webp">
</div>
|
196_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
196_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import M2M100ForConditionalGeneration

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M", torch_dtype=torch.float16, attn_implementation="sdpa")
...
```
|
196_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/m2m_100.md
|
https://huggingface.co/docs/transformers/en/model_doc/m2m_100/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
|
196_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
197_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
197_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#overview
|
.md
|
Bamba-9B is a decoder-only language model based on the [Mamba-2](https://github.com/state-spaces/mamba) architecture and is designed to handle a wide range of text generation tasks. It is trained from scratch using a two-stage training approach. In the first stage, the model is trained on 2 trillion tokens from the Dolma v1.7 dataset. In the second stage, it undergoes additional training on 200 billion tokens, leveraging a carefully curated blend of high-quality data to further refine its performance and
|
197_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#overview
|
.md
|
training on 200 billion tokens, leveraging a carefully curated blend of high-quality data to further refine its performance and enhance output quality.
|
197_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#overview
|
.md
|
Check out all Bamba-9B model checkpoints [here](https://github.com/foundation-model-stack/bamba).
|
197_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
| Model | Params | # Layers | Hidden Dim. | Attention Heads | GQA | KV Heads | Context Length | Tied Embeddings |
|-------------------|--------------|----------|-------------|-----------------|-----|----------|----------------|------------------|
| Bamba | 9B (9.78B) | 32 | 4096 | 32 | Yes | 8 | 4096 | True |
This is the configuration class to store the configuration of a [`BambaModel`]. It is used to instantiate a
|
197_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
This is the configuration class to store the configuration of a [`BambaModel`]. It is used to instantiate a
BambaModel model according to the specified arguments, defining the model architecture. Instantiating a configuration
with defaults taken from [ibm-fms/Bamba-9.8b-2.2T-hf](https://huggingface.co/ibm-fms/Bamba-9.8b-2.2T-hf).
The BambaModel is a hybrid [mamba2](https://github.com/state-spaces/mamba) architecture with SwiGLU.
The checkpoints are jointly trained by IBM, Princeton, and UIUC.
|
197_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
The checkpoints are jointly trained by IBM, Princeton, and UIUC.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 128000):
Vocabulary size of the Bamba model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`BambaModel`]
|
197_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
`inputs_ids` passed when calling [`BambaModel`]
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the
model has an output word embedding layer.
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 14336):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
|
197_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (`int`, *optional*, defaults to 8):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
|
197_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
|
197_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
197_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
num_logits_to_keep (`int` or `None`, *optional*, defaults to 1):
|
197_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
relevant if `config.is_decoder=True`.
num_logits_to_keep (`int` or `None`, *optional*, defaults to 1):
Number of prompt logits to calculate during generation. If `None`, all logits will be calculated. If an
integer value, only last `num_logits_to_keep` logits will be calculated. Default is 1 because only the
logits of the last prompt token are needed for generation. For long sequences, the logits for the entire
sequence may use a lot of memory so, setting `num_logits_to_keep=1` will reduce memory footprint
|
197_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
sequence may use a lot of memory so, setting `num_logits_to_keep=1` will reduce memory footprint
significantly.
pad_token_id (`int`, *optional*, defaults to 0):
The id of the padding token.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the "end-of-sequence" token.
max_position_embeddings (`int`, *optional*, defaults to 262144):
Max cached sequence length for the model
|
197_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 262144):
Max cached sequence length for the model
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
attn_layer_indices (`list`, *optional*):
Specifies the layer indices that will have full attention. Values must be at most `num_hidden_layers`.
mamba_n_heads (`int`, *optional*, defaults to 128):
The number of mamba heads used in the v2 implementation.
|
197_2_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
mamba_n_heads (`int`, *optional*, defaults to 128):
The number of mamba heads used in the v2 implementation.
mamba_d_head (`int`, *optional*, defaults to `"auto"`):
Head embedding dimension size.
mamba_n_groups (`int`, *optional*, defaults to 1):
The number of the mamba groups used in the v2 implementation.
mamba_d_state (`int`, *optional*, defaults to 256):
The dimension of the mamba state space latents.
mamba_d_conv (`int`, *optional*, defaults to 4):
The size of the mamba convolution kernel
|
197_2_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
mamba_d_conv (`int`, *optional*, defaults to 4):
The size of the mamba convolution kernel
mamba_expand (`int`, *optional*, defaults to 2):
Expanding factor (relative to hidden_size) used to determine the mamba intermediate size
mamba_chunk_size (`int`, *optional*, defaults to 256):
The chunks in which to break the sequence when doing prefill/training
mamba_conv_bias (`bool`, *optional*, defaults to `True`):
Flag indicating whether or not to use bias in the convolution layer of the mamba mixer block.
|
197_2_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaconfig
|
.md
|
Flag indicating whether or not to use bias in the convolution layer of the mamba mixer block.
mamba_proj_bias (`bool`, *optional*, defaults to `False`):
Flag indicating whether or not to use bias in the input and output projections (["in_proj", "out_proj"]) of the mamba mixer block
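Although this excerpt does not include one, a configuration example in the same style as the [`LiltConfig`] and [`M2M100Config`] examples above would look roughly as follows; this is a sketch of the generic config pattern, not taken from the excerpt, and the defaults correspond to the 9B-parameter model, so random initialization is memory-heavy:
```python
>>> from transformers import BambaConfig, BambaModel

>>> # Initializing a Bamba style configuration with the documented defaults
>>> configuration = BambaConfig()

>>> # Initializing a model (with random weights) from that configuration; this allocates ~9B parameters
>>> model = BambaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```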
<!---
|
197_2_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#usage-tips
|
.md
|
Tips:
- The architecture is based on Mamba-2 models.
|
197_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambamodel
|
.md
|
The bare Bamba Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
197_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BambaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
197_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambamodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`BambaDecoderLayer`]
Args:
config: BambaConfig
Methods: forward
-->
|
197_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaforcausallm
|
.md
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("ibm-fms/Bamba-9B")
tokenizer = AutoTokenizer.from_pretrained("ibm-fms/Bamba-9B")
|
197_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bamba.md
|
https://huggingface.co/docs/transformers/en/model_doc/bamba/#bambaforcausallm
|
.md
|
message = ["Mamba is a snake with following properties "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
response = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```
No docstring available for BambaForCausalLM
Methods: forward
This HF implementation is contributed by [ani300](https://github.com/ani300) and [fabianlim](https://github.com/fabianlim).
|
197_5_1
|