source (string, 470 classes) | url (string, 49–167 chars) | file_type (string, 1 class) | chunk (string, 1–512 chars) | chunk_id (string, 5–9 chars)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderprocessor
|
.md
|
The image processor is a required input.
tokenizer ([`PreTrainedTokenizer`], *optional*):
The tokenizer is a required input.
<frameworkcontent>
<pt>
|
358_3_1
|
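As a hedged illustration of the processor fragment above, a `VisionTextDualEncoderProcessor` wraps an image processor and a tokenizer into a single object; the checkpoint names below are illustrative assumptions, not taken from the original page:

```python
>>> from transformers import (
...     BertTokenizer,
...     ViTImageProcessor,
...     VisionTextDualEncoderProcessor,
... )

>>> # Both arguments are required inputs, as noted above; the checkpoints are
>>> # illustrative assumptions.
>>> image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
```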
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencodermodel
|
.md
|
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model
as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded
via the [`~AutoModel.from_pretrained`] method. The projection layers are automatically added to the model and
should be fine-tuned on a downstream task, like contrastive image-text modeling.
|
358_4_0
|
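A minimal sketch of the initialization pattern described above, assuming `from_vision_text_pretrained` as the entry point and illustrative checkpoint names; the projection layers start randomly initialized, so the model should be fine-tuned (e.g., contrastively) before use:

```python
>>> from transformers import VisionTextDualEncoderModel

>>> # Pair any pretrained vision encoder with any pretrained text encoder;
>>> # the projection layers on top are added automatically and start untrained.
>>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )

>>> # After fine-tuning, the dual encoder can be saved and reloaded like any other model.
>>> model.save_pretrained("vit-bert-dual-encoder")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert-dual-encoder")
```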
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencodermodel
|
.md
|
should be fine-tuned on a downstream task, like contrastive image-text modeling.
In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how
leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement
on new zero-shot vision tasks such as image classification or retrieval.
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other
|
358_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencodermodel
|
.md
|
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other
model (see the examples for more information).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
358_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencodermodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VisionTextDualEncoderConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
358_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencodermodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<jax>
|
358_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#flaxvisiontextdualencodermodel
|
.md
|
No docstring available for FlaxVisionTextDualEncoderModel
Methods: __call__
</jax>
<tf>
|
358_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#tfvisiontextdualencodermodel
|
.md
|
No docstring available for TFVisionTextDualEncoderModel
Methods: call
</tf>
</frameworkcontent>
|
358_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
359_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
359_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
|
.md
|
The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. Costa-jussà, James Cross, Onur Çelebi,
Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula,
|
359_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
|
.md
|
Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews,
Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers,
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
|
359_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
|
.md
|
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
*Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today.
However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the
|
359_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
|
.md
|
200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by
first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed
|
359_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
|
.md
|
at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of
Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training
|
359_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
|
.md
|
improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using
a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety.
|
359_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
|
.md
|
Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.*
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/facebookresearch/fairseq).
|
359_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#usage-tips
|
.md
|
- M2M100ForConditionalGeneration is the base model for both NLLB and NLLB MoE
- The NLLB-MoE is very similar to the NLLB model, but its feed-forward layer is based on the implementation of SwitchTransformers.
- The tokenizer is the same as the NLLB models.
|
359_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#implementation-differences-with-switchtransformers
|
.md
|
The biggest difference is the way the tokens are routed. NLLB-MoE uses a `top-2-gate` which means that for each input, only the top two experts are selected based on the
highest predicted probabilities from the gating network, and the remaining experts are ignored. In `SwitchTransformers`, only the top-1 probabilities are computed,
which means that tokens have less probability of being forwarded. Moreover, if a token is not routed to any expert, `SwitchTransformers` still adds its unmodified hidden
|
359_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#implementation-differences-with-switchtransformers
|
.md
|
states (kind of like a residual connection) while they are masked in `NLLB`'s top-2 routing mechanism.
|
359_3_1
|
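The routing difference described above can be sketched in a few lines of PyTorch; this is a toy illustration of top-1 vs. top-2 gating, not the library's actual `NllbMoeTop2Router` implementation, and all tensor names are assumptions:

```python
import torch

# Toy gating example: router_logits has shape (num_tokens, num_experts).
torch.manual_seed(0)
router_logits = torch.randn(5, 4)
router_probs = torch.softmax(router_logits, dim=-1)

# SwitchTransformers-style top-1: each token goes to its single best expert.
top1_probs, top1_experts = router_probs.max(dim=-1)

# NLLB-MoE-style top-2: each token keeps its two best experts; all others are
# ignored, and tokens dropped for capacity are masked rather than passed
# through as a residual.
top2_probs, top2_experts = router_probs.topk(2, dim=-1)

print(top1_experts)  # expert index per token, shape (5,)
print(top2_experts)  # two expert indices per token, shape (5, 2)
```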
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-with-nllb-moe
|
.md
|
The available checkpoints require around 350GB of storage. Make sure to use `accelerate` if you do not have enough RAM on your machine.
While generating the target text, set the `forced_bos_token_id` to the target language id. The following
example shows how to translate English to French using the *facebook/nllb-moe-54b* model.
|
359_4_0
|
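One hedged way to follow the `accelerate` advice above when the checkpoint does not fit in memory is to let `from_pretrained` dispatch the weights automatically; `device_map="auto"` and `torch_dtype` are standard `from_pretrained` options, shown here as a sketch:

```python
>>> import torch
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b")
>>> # Requires `pip install accelerate`; the weights are spread across the
>>> # available GPUs and CPU instead of being loaded on a single device.
>>> model = AutoModelForSeq2SeqLM.from_pretrained(
...     "facebook/nllb-moe-54b", device_map="auto", torch_dtype=torch.float16
... )
```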
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-with-nllb-moe
|
.md
|
example shows how to translate English to French using the *facebook/nllb-moe-54b* model.
Note that we're using the BCP-47 code for French, `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200)
for the list of all BCP-47 codes in the Flores-200 dataset.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
|
359_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-with-nllb-moe
|
.md
|
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")
>>> article = "Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage."
>>> inputs = tokenizer(article, return_tensors="pt")
|
359_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-with-nllb-moe
|
.md
|
>>> translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=50
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage."
```
|
359_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-from-any-other-language-than-english
|
.md
|
English (`eng_Latn`) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language,
you should specify the BCP-47 code in the `src_lang` keyword argument of the tokenizer initialization.
See the example below for a translation from Romanian to German:
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
|
359_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-from-any-other-language-than-english
|
.md
|
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="ron_Latn")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")
>>> article = "Şeful ONU spune că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(article, return_tensors="pt")
>>> translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
```
|
359_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#resources
|
.md
|
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
359_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
This is the configuration class to store the configuration of a [`NllbMoeModel`]. It is used to instantiate an
NLLB-MoE model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the NLLB-MoE
[facebook/nllb-moe-54b](https://huggingface.co/facebook/nllb-moe-54b) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
359_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the NllbMoe model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`NllbMoeModel`] or
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
|
359_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
|
359_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
|
359_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
359_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the classifier.
max_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
359_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
|
359_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
second_expert_policy (`str`, *optional*, defaults to `"all"`):
The policy used to sample the probability of each token being routed to a second expert.
|
359_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
The policy used to sample the probability of each token being routed to a second expert.
normalize_router_prob_before_dropping (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the router probabilities before applying a mask based on the experts capacity
(capacity dropping).
batch_prioritized_routing (`bool`, *optional*, defaults to `True`):
Whether or not to order the tokens by their router probabilities before capacity dropping. This means that
|
359_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
Whether or not to order the tokens by their router probabilities before capacity dropping. This means that
the tokens that have the highest probabilities will be routed before other tokens that might be further in
the sequence.
moe_eval_capacity_token_fraction (`float`, *optional*, defaults to 1.0):
Fraction of tokens used as capacity during validation; if set to a negative value, the same fraction as during training is used. Should be
in range: (0.0, 1.0].
num_experts (`int`, *optional*, defaults to 128):
|
359_7_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
in range: (0.0, 1.0].
num_experts (`int`, *optional*, defaults to 128):
Number of experts for each NllbMoeSparseMlp layer.
expert_capacity (`int`, *optional*, defaults to 64):
Number of tokens that can be stored in each expert.
encoder_sparse_step (`int`, *optional*, defaults to 4):
Frequency of the sparse layers in the encoder. 4 means that one out of 4 layers will be sparse.
decoder_sparse_step (`int`, *optional*, defaults to 4):
|
359_7_10
|
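To see how the MoE arguments above fit together, here is a small configuration sketch; the values are illustrative and much smaller than those of facebook/nllb-moe-54b:

```python
>>> from transformers import NllbMoeConfig, NllbMoeModel

>>> # Illustrative values only, not the facebook/nllb-moe-54b defaults.
>>> config = NllbMoeConfig(
...     num_experts=8,          # experts per NllbMoeSparseMlp layer
...     expert_capacity=64,     # tokens each expert can store
...     encoder_sparse_step=4,  # every 4th encoder layer is sparse
...     decoder_sparse_step=4,  # every 4th decoder layer is sparse
... )
>>> model = NllbMoeModel(config)  # randomly initialized
```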
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
decoder_sparse_step (`int`, *optional*, defaults to 4):
Frequency of the sparse layers in the decoder. 4 means that one out of 4 layers will be sparse.
router_dtype (`str`, *optional*, defaults to `"float32"`):
The `dtype` used for the routers. It is preferable to keep the `dtype` to `"float32"` as specified in the
*selective precision* discussion in [the paper](https://arxiv.org/abs/2101.03961).
router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`):
|
359_7_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`):
Whether to ignore padding tokens when routing. If `False`, the padding tokens are not routed to any
experts.
router_bias (`bool`, *optional*, defaults to `False`):
Whether or not the classifier of the router should have a bias.
moe_token_dropout (`float`, *optional*, defaults to 0.2):
Masking rate for MoE expert output masking (EOM), which is implemented via a Dropout2d on the expert
outputs.
|
359_7_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
Masking rate for MoE expert output masking (EOM), which is implemented via a Dropout2d on the expert
outputs.
output_router_logits (`bool`, *optional*, defaults to `False`):
Whether or not to return the router logits. Only set to `True` to get the auxiliary loss when training.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
>>> from transformers import NllbMoeModel, NllbMoeConfig
|
359_7_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
|
.md
|
>>> # Initializing a NllbMoe facebook/nllb-moe-54b style configuration
>>> configuration = NllbMoeConfig()
>>> # Initializing a model from the facebook/nllb-moe-54b style configuration
>>> model = NllbMoeModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
359_7_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoetop2router
|
.md
|
Router in which tokens choose their top-2 expert assignment.
This router uses the same mechanism as in NLLB-MoE from the fairseq repository. Items are sorted by router_probs
and then routed to their choice of expert until the expert's expert_capacity is reached. **There is no guarantee
that each token is processed by an expert**, or that each expert receives at least one token.
The router combining weights are also returned to make sure that the states that are not updated will be masked.
|
359_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoetop2router
|
.md
|
The router combining weights are also returned to make sure that the states that are not updated will be masked.
Methods: route_tokens
- forward
|
359_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoesparsemlp
|
.md
|
Implementation of the NLLB-MoE sparse MLP module.
Methods: forward
|
359_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoemodel
|
.md
|
The bare NllbMoe Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
359_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoemodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NllbMoeConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
359_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoemodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
359_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeforconditionalgeneration
|
.md
|
The NllbMoe Model with a language modeling head. Can be used for summarization.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
359_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NllbMoeConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
359_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
|
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
359_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
360_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
360_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsan-japanese
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip>
|
360_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#overview
|
.md
|
The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama).
GPTSAN is a Japanese language model based on the Switch Transformer. It has the same structure as the model introduced as Prefix LM
in the T5 paper, and supports both Text Generation and Masked Language Modeling tasks. These basic tasks can similarly be
fine-tuned for translation or summarization.
|
360_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#usage-example
|
.md
|
The `generate()` method can be used to generate text using the GPTSAN-Japanese model.
```python
>>> from transformers import AutoModel, AutoTokenizer
>>> import torch
|
360_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#usage-example
|
.md
|
>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()
>>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
>>> torch.manual_seed(0)
>>> gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20)
>>> tokenizer.decode(gen_tok[0])
'織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉'
```
|
360_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsan-features
|
.md
|
GPTSAN has some unique features. It has a model structure of Prefix-LM. It works as a shifted Masked Language Model for Prefix Input tokens. Un-prefixed inputs behave like normal generative models.
The Spout vector is a GPTSAN-specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text.
|
360_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsan-features
|
.md
|
GPTSAN has a sparse Feed Forward based on Switch-Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details.
|
360_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#prefix-lm-model
|
.md
|
GPTSAN has the structure of the model named Prefix-LM in the `T5` paper (the original GPTSAN repository calls it `hybrid`).
In GPTSAN, the `Prefix` part of Prefix-LM, that is, the input positions that can be referred to by tokens both before and after them, can be specified with any length.
Arbitrary lengths can also be specified differently for each batch.
This length applies to the text entered in `prefix_text` for the tokenizer.
The tokenizer returns the mask of the `Prefix` part of Prefix-LM as `token_type_ids`.
|
360_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#prefix-lm-model
|
.md
|
The tokenizer returns the mask of the `Prefix` part of Prefix-LM as `token_type_ids`.
The model treats the part where `token_type_ids` is 1 as the `Prefix` part, that is, input positions that can be referred to by tokens both before and after them.
|
360_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#usage-tips
|
.md
|
Specifying the Prefix part is done with a mask passed to self-attention.
When `token_type_ids` is `None` or all zeros, it is equivalent to a regular causal mask,
for example:
>>> x_token = tokenizer("アイウエ")
input_ids: | SOT | SEG | ア | イ | ウ | エ |
token_type_ids: | 1 | 0 | 0 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 0 0 0 0 0 |
SEG | 1 1 0 0 0 0 |
ア | 1 1 1 0 0 0 |
イ | 1 1 1 1 0 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 1 |
>>> x_token = tokenizer("", prefix_text="アイウエ")
|
360_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#usage-tips
|
.md
|
イ | 1 1 1 1 0 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 1 |
>>> x_token = tokenizer("", prefix_text="アイウエ")
input_ids: | SOT | ア | イ | ウ | エ | SEG |
token_type_ids: | 1 | 1 | 1 | 1 | 1 | 0 |
prefix_lm_mask:
SOT | 1 1 1 1 1 0 |
ア | 1 1 1 1 1 0 |
イ | 1 1 1 1 1 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 0 |
SEG | 1 1 1 1 1 1 |
>>> x_token = tokenizer("ウエ", prefix_text="アイ")
input_ids: | SOT | ア | イ | SEG | ウ | エ |
token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 |
prefix_lm_mask:
|
360_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#usage-tips
|
.md
|
input_ids: | SOT | ア | イ | SEG | ウ | エ |
token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 1 1 0 0 0 |
ア | 1 1 1 0 0 0 |
イ | 1 1 1 0 0 0 |
SEG | 1 1 1 1 0 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 1 |
|
360_6_2
|
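The tables above follow a simple rule: position `i` may attend to position `j` whenever `j <= i` (causal) or `token_type_ids[j] == 1` (prefix). The sketch below reproduces the third table from that rule; it is an illustration, not the model's internal masking code:

```python
import torch

def prefix_lm_mask(token_type_ids):
    # mask[i][j] = 1 if position i may attend to position j:
    # causal part (j <= i) OR prefix part (token_type_ids[j] == 1).
    n = len(token_type_ids)
    is_prefix = torch.tensor(token_type_ids, dtype=torch.bool)
    causal = torch.tril(torch.ones(n, n)).bool()
    return (causal | is_prefix.unsqueeze(0)).int()

# token_type_ids for tokenizer("ウエ", prefix_text="アイ"): | SOT | ア | イ | SEG | ウ | エ |
print(prefix_lm_mask([1, 1, 1, 0, 0, 0]))
# tensor([[1, 1, 1, 0, 0, 0],
#         [1, 1, 1, 0, 0, 0],
#         [1, 1, 1, 0, 0, 0],
#         [1, 1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1, 0],
#         [1, 1, 1, 1, 1, 1]])
```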
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#spout-vector
|
.md
|
A Spout Vector is a special vector for controlling text generation.
This vector is treated as the first embedding in self-attention to bring external attention to the generated tokens.
In the pre-trained model published from `Tanrei/GPTSAN-japanese`, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention.
|
360_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#spout-vector
|
.md
|
The Spout Vector projected by the fully connected layer is split to be passed to all self-attentions.
|
360_7_1
|
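A hedged sketch of feeding a Spout vector at generation time, mirroring the usage example earlier on this page; the `spout` keyword argument and its acceptance by `generate()` are assumptions here and should be checked against the model's forward signature:

```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()
>>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")

>>> # An arbitrary 128-dimensional Spout vector (d_spout); during fine-tuning
>>> # this could instead encode a class of text to steer generation.
>>> spout = torch.rand(1, 128).cuda()
>>> gen_tok = model.generate(
...     x_tok.input_ids.cuda(),
...     token_type_ids=x_tok.token_type_ids.cuda(),
...     spout=spout,  # assumed keyword, see note above
...     max_new_tokens=20,
... )
>>> tokenizer.decode(gen_tok[0])
```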
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
This is the configuration class to store the configuration of a [`GPTSanJapaneseModel`]. It is used to instantiate
a GPTSANJapanese model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GPTSANJapanese
[Tanrei/GPTSAN-japanese](https://huggingface.co/Tanrei/GPTSAN-japanese) architecture.
|
360_8_0
|
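A minimal sketch mirroring the configuration pattern used elsewhere on this page (e.g., for NllbMoeConfig); the default configuration is similar to the Tanrei/GPTSAN-japanese architecture and the model below is randomly initialized:

```python
>>> from transformers import GPTSanJapaneseConfig, GPTSanJapaneseForConditionalGeneration

>>> # Default configuration, similar to the Tanrei/GPTSAN-japanese architecture.
>>> configuration = GPTSanJapaneseConfig()

>>> # Randomly initialized model from that configuration (no pretrained weights).
>>> model = GPTSanJapaneseForConditionalGeneration(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```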
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
[Tanrei/GPTSAN-japanese](https://huggingface.co/Tanrei/GPTSAN-japanese) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Arguments:
vocab_size (`int`, *optional*, defaults to 36000):
Vocabulary size of the GPTSANJapanese model. Defines the number of different tokens that can be represented
by the `input_ids` passed when calling [`GPTSanJapaneseModel`].
|
360_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
by the `input_ids` passed when calling [`GPTSanJapaneseModel`].
max_position_embeddings (`int`, *optional*, defaults to 1280):
The maximum sequence length that this model might ever be used with. Defaults to 1280.
d_model (`int`, *optional*, defaults to 1024):
Size of the encoder layers and the pooler layer.
d_ff (`int`, *optional*, defaults to 8192):
Size of the intermediate feed forward layer in each `SwitchTransformersBlock`.
d_ext (`int`, *optional*, defaults to 4096):
|
360_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
Size of the intermediate feed forward layer in each `SwitchTransformersBlock`.
d_ext (`int`, *optional*, defaults to 4096):
Size of the intermediate feed forward layer in each of the Extra-layers.
d_spout (`int`, *optional*, defaults to 128):
Size of the `spout` vector.
num_switch_layers (`int`, *optional*, defaults to 10):
Number of layers in the Switch Transformer layer.
num_ext_layers (`int`, *optional*, defaults to 0):
Number of layers in the Extra-layers.
num_heads (`int`, *optional*, defaults to 16):
|
360_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
Number of layers in the Extra-layers.
num_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
num_experts (`int`, *optional*, defaults to 16):
Number of experts for each SwitchTransformer layer.
expert_capacity (`int`, *optional*, defaults to 128):
Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular
Transformer.
dropout_rate (`float`, *optional*, defaults to 0.0):
|
360_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
Transformer.
dropout_rate (`float`, *optional*, defaults to 0.0):
The ratio for all dropout layers.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
router_bias (`bool`, *optional*, defaults to `False`):
Whether to add a bias to the router.
router_jitter_noise (`float`, *optional*, defaults to 0.0):
Amount of noise to add to the router. Set it to 0.0 during prediction or to a small value (usually 1e-2)
during training.
|
360_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
Amount of noise to add to the router. Set it to 0.0 during prediction or to a small value (usually 1e-2)
during training.
router_dtype (`str`, *optional*, defaults to `"float32"`):
The `dtype` used for the routers. It is preferable to keep the `dtype` to `"float32"` as specified in the
*selective precision* discussion in [the paper](https://arxiv.org/abs/2101.03961).
router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`):
Whether to ignore padding tokens when routing.
|
360_8_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`):
Whether to ignore padding tokens when routing.
output_hidden_states (`bool`, *optional*, defaults to `False`):
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
more detail.
output_attentions (`bool`, *optional*, defaults to `False`):
Whether or not to return the attentions tensors of all attention layers.
initializer_factor (`float`, *optional*, defaults to 0.002):
|
360_8_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseconfig
|
.md
|
initializer_factor (`float`, *optional*, defaults to 0.002):
A factor for initializing all weight matrices.
output_router_logits (`bool`, *optional*, defaults to `False`):
Whether or not to return the router logits of all experts.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models)
|
360_8_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
This tokenizer is based on GPTNeoXJapaneseTokenizer and has the following modifications:
- Decodes byte0~byte255 tokens correctly
- Adds bagofword token handling
- Returns token_type_ids for the Prefix-LM model
The bagofword token represents a repetition of the previous token and is converted to 3 consecutive tokens when
decoding. In addition, the original Japanese special Sub-Word-Encoding has been released in this repository
|
360_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
decoding. In addition, the original Japanese special Sub-Word-Encoding has been released in this repository
(https://github.com/tanreinama/Japanese-BPEEncoder_V2). The `token_type_ids` is a mask indicating the prefix input
positions of the Prefix-LM model. To specify a prefix position, either pass the prefix as `prefix_text`, or pass the
prefix sentence and the part after it as a text pair in a batch input.
Example:
```python
>>> from transformers import GPTSanJapaneseTokenizer
|
360_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
>>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> # You can confirm both 慶応 and 慶應 are encoded to 17750
>>> tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]
[35993, 35998, 34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281]
|
360_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
>>> # Both 慶応 and 慶應 are decoded to 慶応
>>> tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"])
'吾輩は猫である🐯。実は慶応(慶応)大学出身'
```
Example for Prefix-LM:
```python
>>> from transformers import GPTSanJapaneseTokenizer
>>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["input_ids"]
[35993, 34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 35998, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281]
|
360_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
>>> # Mask for Prefix-LM inputs
>>> tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["token_type_ids"]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
Example for batch encode:
```python
>>> from transformers import GPTSanJapaneseTokenizer
|
360_9_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
>>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["input_ids"]
[[35993, 35998, 8640, 25948, 35993, 35998, 30647, 35675, 35999, 35999], [35993, 35998, 10382, 9868, 35993, 35998, 30646, 9459, 30646, 35675]]
>>> # Mask for Prefix-LM inputs
>>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["token_type_ids"]
[[1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
|
360_9_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
>>> # Mask for padding
>>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```
Args:
vocab_file (`str`):
File containing the vocabulary.
emoji_file (`str`):
File containing the emoji.
unk_token (`str`, *optional*, defaults to `"<|nottoken|>"`):
The token used for an unknown character.
pad_token (`str`, *optional*, defaults to `"<|separator|>"`):
The token used for padding
|
360_9_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
The token used for an unknown character.
pad_token (`str`, *optional*, defaults to `"<|separator|>"`):
The token used for padding
bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The end of sequence token.
sep_token (`str`, *optional*, defaults to `"<|segmenter|>"`):
A special token that separates the prefix part from the general input part.
do_clean_text (`bool`, *optional*, defaults to `False`):
|
360_9_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesetokenizer
|
.md
|
do_clean_text (`bool`, *optional*, defaults to `False`):
Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE.
|
360_9_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesemodel
|
.md
|
The bare GPTSAN-japanese Model transformer outputting raw hidden-states without any specific head on top.
The [GPTSAN-japanese](https://github.com/tanreinama/GPTSAN) model was proposed in General-purpose Switch transformer
based Japanese language model
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
360_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapanesemodel
|
.md
|
and behavior.
Parameters:
config ([`GPTSanJapaneseConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
360_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseforconditionalgeneration
|
.md
|
The bare GPTSAN-japanese Model with a language modeling head.
The [GPTSAN-japanese](https://github.com/tanreinama/GPTSAN) model was proposed in General-purpose Switch transformer
based Japanese language model
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
360_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsanjapaneseforconditionalgeneration
|
.md
|
and behavior.
Parameters:
config ([`GPTSanJapaneseConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
360_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
361_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
361_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#neighborhood-attention-transformer
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip>
|
361_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#overview
|
.md
|
NAT was proposed in [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143)
by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self attention pattern.
The abstract from the paper is the following:
*We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision.
|
361_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#overview
|
.md
|
*We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision.
NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a
linear time and space complexity compared to the quadratic complexity of SA. The sliding-window pattern allows NA's
receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike
|
361_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#overview
|
.md
|
receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike
Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package
with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less
memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA
|
361_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#overview
|
.md
|
memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA
that boosts image classification and downstream vision performance. Experimental results on NAT are competitive;
NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9%
ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size. *
<img
|
361_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#overview
|
.md
|
ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size. *
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/neighborhood-attention-pattern.jpg"
alt="drawing" width="600"/>
<small> Neighborhood Attention compared to other attention patterns.
Taken from the <a href="https://arxiv.org/abs/2204.07143">original paper</a>.</small>
This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr).
|
361_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#overview
|
.md
|
This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr).
The original code can be found [here](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
|
361_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#usage-tips
|
.md
|
- One can use the [`AutoImageProcessor`] API to prepare images for the model.
- NAT can be used as a *backbone*. When `output_hidden_states = True`,
it will output both `hidden_states` and `reshaped_hidden_states`.
The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than
`(batch_size, height, width, num_channels)`.
Notes:
- NAT depends on [NATTEN](https://github.com/SHI-Labs/NATTEN/)'s implementation of Neighborhood Attention.
|
361_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#usage-tips
|
.md
|
Notes:
- NAT depends on [NATTEN](https://github.com/SHI-Labs/NATTEN/)'s implementation of Neighborhood Attention.
You can install it with pre-built wheels for Linux by referring to [shi-labs.com/natten](https://shi-labs.com/natten),
or build on your system by running `pip install natten`.
Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet.
- Only a patch size of 4 is supported at the moment.
|
361_3_1
|
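A hedged end-to-end sketch of the tips above, assuming NATTEN is installed and using an illustrative checkpoint name (`shi-labs/nat-mini-in1k-224` is assumed here, not named on this page):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, NatForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Prepare images with the AutoImageProcessor API, as recommended above.
>>> processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
>>> model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)  # also returns reshaped_hidden_states
>>> print(model.config.id2label[outputs.logits.argmax(-1).item()])
```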
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
|
https://huggingface.co/docs/transformers/en/model_doc/nat/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with NAT.
<PipelineTag pipeline="image-classification"/>
- [`NatForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
|
361_4_0
|