source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
hidden_size (`int`, *optional*, defaults to 3072):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 8192):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
|
302_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise, GQA is used. When
|
302_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise, GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
resid_pdrop (`float`, *optional*, defaults to 0.0):
Dropout probability for the MLP outputs.
|
302_5_4
|
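To make the mean-pooling conversion described in the `num_key_value_heads` chunk above concrete, here is a small, hypothetical sketch (not the library's or any official conversion script); the head counts and dimensions are illustrative placeholders:

```python
import torch

# Illustrative placeholder shapes, not the actual Phi-3 conversion script.
num_attention_heads = 32
num_key_value_heads = 8                          # target number of KV heads for GQA
head_dim = 96                                    # hidden_size // num_attention_heads
hidden_size = num_attention_heads * head_dim

# A multi-head key projection weight of shape (num_heads * head_dim, hidden_size).
k_proj = torch.randn(num_attention_heads * head_dim, hidden_size)

# Group the original heads and mean-pool each group to build the GQA key heads.
group_size = num_attention_heads // num_key_value_heads
k_proj_gqa = (
    k_proj.view(num_key_value_heads, group_size, head_dim, hidden_size)
    .mean(dim=1)
    .reshape(num_key_value_heads * head_dim, hidden_size)
)
print(k_proj_gqa.shape)  # torch.Size([768, 3072])
```

The same pooling would apply to the value projection; the query projection keeps all of the original heads.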
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
`num_attention_heads`.
resid_pdrop (`float`, *optional*, defaults to 0.0):
Dropout probability for the MLP outputs.
embd_pdrop (`float`, *optional*, defaults to 0.0):
The dropout ratio for the embeddings.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio after computing the attention scores.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
|
302_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model might ever be used with.
original_max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model was trained with. This is used to determine the size of the
original RoPE embeddings when using long scaling.
initializer_range (`float`, *optional*, defaults to 0.02):
|
302_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
original RoPE embeddings when using long scaling.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon value used for the RMSNorm.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
|
302_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie the input and output word embeddings or not.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`dict`, *optional*):
The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
|
302_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
contain the following keys: `type`, `short_factor` and `long_factor`. The `type` must be `longrope` and
the `short_factor` and `long_factor` must be lists of numbers with the same length as the hidden size
divided by the number of attention heads divided by 2.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the "beginning-of-sequence" token.
|
302_5_9
|
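As a hedged illustration of the `longrope` dictionary described above, the sketch below builds a `Phi3Config` whose `short_factor` and `long_factor` lists each have `hidden_size // num_attention_heads // 2` entries; the factor values and the long context length are placeholders, not the factors shipped with any released checkpoint:

```python
from transformers import Phi3Config

hidden_size = 3072
num_attention_heads = 32
rope_dim = hidden_size // num_attention_heads // 2  # 48 entries per factor list

config = Phi3Config(
    hidden_size=hidden_size,
    num_attention_heads=num_attention_heads,
    max_position_embeddings=131072,            # placeholder long context length
    original_max_position_embeddings=4096,
    rope_scaling={
        "type": "longrope",
        "short_factor": [1.0] * rope_dim,      # placeholder scaling factors
        "long_factor": [2.0] * rope_dim,       # placeholder scaling factors
    },
)
```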
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
bos_token_id (`int`, *optional*, defaults to 1):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 32000):
The id of the "end-of-sequence" token.
pad_token_id (`int`, *optional*, defaults to 32000):
The id of the padding token.
sliding_window (`int`, *optional*):
Sliding window attention window size. If `None`, no sliding window is applied.
Example:
```python
>>> from transformers import Phi3Model, Phi3Config
|
302_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3config
|
.md
|
>>> # Initializing a Phi-3 style configuration
>>> configuration = Phi3Config.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
>>> # Initializing a model from the configuration
>>> model = Phi3Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
<frameworkcontent>
<pt>
|
302_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3model
|
.md
|
The bare Phi3 Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
302_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3model
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Phi3Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
302_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3model
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Phi3DecoderLayer`]
Args:
config: Phi3Config
Methods: forward
|
302_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3forcausallm
|
.md
|
No docstring available for Phi3ForCausalLM
Methods: forward
- generate
|
302_7_0
|
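Since no docstring is shown for `Phi3ForCausalLM` above, here is a brief, hedged generation sketch using the `microsoft/Phi-3-mini-4k-instruct` checkpoint referenced earlier on this page; the prompt and generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The instruct checkpoints ship a chat template; use it to build the prompt.
messages = [{"role": "user", "content": "Explain grouped query attention in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```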
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3forsequenceclassification
|
.md
|
The Phi3 Model transformer with a sequence classification head on top (linear layer).
[`Phi3ForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
302_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3forsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
302_8_1
|
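The last-token selection described in the `Phi3ForSequenceClassification` chunks above can be illustrated with a standalone sketch (a simplification assuming right padding, not the actual modeling code):

```python
import torch

pad_token_id = 32000
input_ids = torch.tensor([
    [5, 17, 42, pad_token_id, pad_token_id],  # right-padded sequence of length 3
    [9, 11, 23, 8, 4],                        # full-length sequence
])
hidden_states = torch.randn(2, 5, 16)  # (batch_size, seq_len, hidden_size)

# Index of the last non-padding token in each row (assumes right padding).
sequence_lengths = (input_ids != pad_token_id).sum(dim=-1) - 1
pooled = hidden_states[torch.arange(input_ids.shape[0]), sequence_lengths]
print(sequence_lengths)  # tensor([2, 4])
print(pooled.shape)      # torch.Size([2, 16])
```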
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3forsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Phi3Config`]):
|
302_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3forsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`Phi3Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
302_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3fortokenclassification
|
.md
|
The Phi3 Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
302_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3fortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Phi3Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
302_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phi3.md
|
https://huggingface.co/docs/transformers/en/model_doc/phi3/#phi3fortokenclassification
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
</frameworkcontent>
|
302_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
303_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
303_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#overview
|
.md
|
The UPerNet model was proposed in [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221)
by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. UPerNet is a general framework to effectively segment
a wide range of concepts from images, leveraging any vision backbone like [ConvNeXt](convnext) or [Swin](swin).
The abstract from the paper is the following:
|
303_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#overview
|
.md
|
*Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from
|
303_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#overview
|
.md
|
as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.*
|
303_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/upernet_architecture.jpg"
alt="drawing" width="600"/>
<small> UPerNet framework. Taken from the <a href="https://arxiv.org/abs/1807.10221">original paper</a>. </small>
|
303_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#overview
|
.md
|
<small> UPerNet framework. Taken from the <a href="https://arxiv.org/abs/1807.10221">original paper</a>. </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code is based on OpenMMLab's mmsegmentation [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/uper_head.py).
|
303_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#usage-examples
|
.md
|
UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so:
```py
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
|
303_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#usage-examples
|
.md
|
backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
To use another vision backbone, like [ConvNeXt](convnext), simply instantiate the model with the appropriate backbone:
```py
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
|
303_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#usage-examples
|
.md
|
backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
Note that this will randomly initialize all the weights of the model.
|
303_2_2
|
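Since the snippet above initializes the model with random weights, a pretrained checkpoint can instead be loaded with `from_pretrained`; a minimal, hedged sketch using the `openmmlab/upernet-convnext-tiny` checkpoint mentioned in the configuration section:

```python
from PIL import Image
import requests
import torch
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

checkpoint = "openmmlab/upernet-convnext-tiny"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = UperNetForSemanticSegmentation.from_pretrained(checkpoint)  # loads pretrained weights

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, num_labels, height, width)
print(logits.shape)
```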
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet.
- Demo notebooks for UPerNet can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/UPerNet).
|
303_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#resources
|
.md
|
- Demo notebooks for UPerNet can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/UPerNet).
- [`UperNetForSemanticSegmentation`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb).
- See also: [Semantic segmentation task guide](../tasks/semantic_segmentation)
|
303_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#resources
|
.md
|
- See also: [Semantic segmentation task guide](../tasks/semantic_segmentation)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
303_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
This is the configuration class to store the configuration of an [`UperNetForSemanticSegmentation`]. It is used to
instantiate an UperNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the UperNet
[openmmlab/upernet-convnext-tiny](https://huggingface.co/openmmlab/upernet-convnext-tiny) architecture.
|
303_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
[openmmlab/upernet-convnext-tiny](https://huggingface.co/openmmlab/upernet-convnext-tiny) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `ResNetConfig()`):
The configuration of the backbone model.
backbone (`str`, *optional*):
|
303_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
The configuration of the backbone model.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
|
303_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
|
303_4_3
|
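A hedged, untested sketch of the `backbone` / `backbone_kwargs` options described above; the checkpoint name and `out_indices` are assumptions chosen for illustration:

```python
from transformers import UperNetConfig, UperNetForSemanticSegmentation

# Hypothetical sketch: refer to the backbone by Hub name instead of passing a backbone_config.
# With use_pretrained_backbone=False only the backbone's config is fetched and its weights are
# randomly initialized, as described above. Checkpoint name and out_indices are assumptions.
config = UperNetConfig(
    backbone="facebook/convnext-tiny-224",
    use_pretrained_backbone=False,
    backbone_kwargs={"out_indices": (1, 2, 3, 4)},
)
model = UperNetForSemanticSegmentation(config)
```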
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
hidden_size (`int`, *optional*, defaults to 512):
The number of hidden units in the convolutional layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
pool_scales (`Tuple[int]`, *optional*, defaults to `[1, 2, 3, 6]`):
Pooling scales used in Pooling Pyramid Module applied on the last feature map.
|
303_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
Pooling scales used in Pooling Pyramid Module applied on the last feature map.
use_auxiliary_head (`bool`, *optional*, defaults to `True`):
Whether to use an auxiliary head during training.
auxiliary_loss_weight (`float`, *optional*, defaults to 0.4):
Weight of the cross-entropy loss of the auxiliary head.
auxiliary_channels (`int`, *optional*, defaults to 256):
Number of channels to use in the auxiliary head.
auxiliary_num_convs (`int`, *optional*, defaults to 1):
|
303_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
Number of channels to use in the auxiliary head.
auxiliary_num_convs (`int`, *optional*, defaults to 1):
Number of convolutional layers to use in the auxiliary head.
auxiliary_concat_input (`bool`, *optional*, defaults to `False`):
Whether to concatenate the output of the auxiliary head with the input before the classification layer.
loss_ignore_index (`int`, *optional*, defaults to 255):
The index that is ignored by the loss function.
Examples:
```python
|
303_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
loss_ignore_index (`int`, *optional*, defaults to 255):
The index that is ignored by the loss function.
Examples:
```python
>>> from transformers import UperNetConfig, UperNetForSemanticSegmentation
|
303_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetconfig
|
.md
|
>>> # Initializing a configuration
>>> configuration = UperNetConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = UperNetForSemanticSegmentation(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
303_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetforsemanticsegmentation
|
.md
|
UperNet framework leveraging any vision backbone e.g. for ADE20k, CityScapes.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`UperNetConfig`]): Model configuration class with all the parameters of the model.
|
303_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/upernet.md
|
https://huggingface.co/docs/transformers/en/model_doc/upernet/#upernetforsemanticsegmentation
|
.md
|
behavior.
config ([`UperNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
303_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
304_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
304_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#overview
|
.md
|
The Blender chatbot model was proposed in [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
*Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
|
304_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#overview
|
.md
|
*Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
|
304_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#overview
|
.md
|
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
|
304_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#overview
|
.md
|
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.*
|
304_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#overview
|
.md
|
failure cases of our models.*
This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The authors' code can be found [here](https://github.com/facebookresearch/ParlAI) .
|
304_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#usage-tips-and-example
|
.md
|
Blenderbot is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
rather than the left.
An example:
```python
>>> from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
|
304_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#usage-tips-and-example
|
.md
|
>>> mname = "facebook/blenderbot-400M-distill"
>>> model = BlenderbotForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = BlenderbotTokenizer.from_pretrained(mname)
>>> UTTERANCE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([UTTERANCE], return_tensors="pt")
>>> reply_ids = model.generate(**inputs)
>>> print(tokenizer.batch_decode(reply_ids))
["<s> That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?</s>"]
```
|
304_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#implementation-notes
|
.md
|
- Blenderbot uses a standard [seq2seq model transformer](https://arxiv.org/pdf/1706.03762.pdf) based architecture.
- Available checkpoints can be found in the [model hub](https://huggingface.co/models?search=blenderbot).
- This is the *default* Blenderbot model class. However, some smaller checkpoints, such as
`facebook/blenderbot_small_90M`, have a different architecture and consequently should be used with
[BlenderbotSmall](blenderbot-small).
|
304_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#resources
|
.md
|
- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
304_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
This is the configuration class to store the configuration of a [`BlenderbotModel`]. It is used to instantiate a
Blenderbot model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Blenderbot
[facebook/blenderbot-3B](https://huggingface.co/facebook/blenderbot-3B) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
304_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the Blenderbot model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`BlenderbotModel`] or [`TFBlenderbotModel`].
d_model (`int`, *optional*, defaults to 1024):
|
304_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
|
304_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
|
304_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
304_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (`int`, *optional*, defaults to 128):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
|
304_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
|
304_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (`bool`, *optional*, defaults to `False`):
Whether to scale the embeddings by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models)
forced_eos_token_id (`int`, *optional*, defaults to 2):
|
304_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
forced_eos_token_id (`int`, *optional*, defaults to 2):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`.
Example:
```python
>>> from transformers import BlenderbotConfig, BlenderbotModel
|
304_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotconfig
|
.md
|
>>> # Initializing a Blenderbot facebook/blenderbot-3B style configuration
>>> configuration = BlenderbotConfig()
>>> # Initializing a model (with random weights) from the facebook/blenderbot-3B style configuration
>>> model = BlenderbotModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
304_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
Constructs a Blenderbot tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import BlenderbotTokenizer
|
304_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
>>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")
>>> tokenizer.add_prefix_space = False
>>> tokenizer("Hello world")["input_ids"]
[47, 921, 86, 1085, 2]
|
304_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
>>> tokenizer(" Hello world")["input_ids"]
[6950, 1085, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
|
304_6_2
|
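A short sketch of the workaround mentioned above, passing `add_prefix_space=True` when instantiating the tokenizer (the resulting ids are not asserted here):

```python
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B", add_prefix_space=True)

# With add_prefix_space=True, the leading word is tokenized as if it were preceded by a space,
# so the two calls below should produce the same ids.
print(tokenizer("Hello world")["input_ids"])
print(tokenizer(" Hello world")["input_ids"])
```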
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
|
304_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
|
304_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
|
304_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
|
304_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
304_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
304_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizer
|
.md
|
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The Blenderbot tokenizer detects the beginning of words by the preceding space.)
Methods: build_inputs_with_special_tokens
|
304_6_9
|
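As a hedged illustration of `build_inputs_with_special_tokens` for this tokenizer: Blenderbot appends a single end-of-sequence token and does not prepend a BOS/CLS token, which the small sketch below checks:

```python
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")

token_ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
with_special = tokenizer.build_inputs_with_special_tokens(token_ids)

# A Blenderbot sequence is formatted as ` X </s>`: only the eos token is appended.
assert with_special == token_ids + [tokenizer.eos_token_id]
```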
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
Construct a "fast" Blenderbot tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2
tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import BlenderbotTokenizerFast
|
304_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
>>> tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-3B")
>>> tokenizer("Hello world")["input_ids"]
[6950, 1085, 2]
|
304_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
>>> tokenizer(" Hello world")["input_ids"]
[6950, 1085, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
|
304_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
|
304_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
|
304_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
|
304_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
|
304_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
304_7_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
304_7_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The Blenderbot tokenizer detects the beginning of words by the preceding space.)
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether the post processing step should trim offsets to avoid including whitespaces.
Methods: build_inputs_with_special_tokens
|
304_7_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbottokenizerfast
|
.md
|
Methods: build_inputs_with_special_tokens
<frameworkcontent>
<pt>
|
304_7_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotmodel
|
.md
|
See [`~transformers.BartModel`] for arguments to *forward* and *generate*
The bare Blenderbot Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
304_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BlenderbotConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
304_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
304_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotforconditionalgeneration
|
.md
|
See [`~transformers.BartForConditionalGeneration`] for arguments to *forward* and *generate*
The Blenderbot Model with a language modeling head. Can be used for summarization.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
304_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotforconditionalgeneration
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BlenderbotConfig`]):
|
304_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotforconditionalgeneration
|
.md
|
and behavior.
Parameters:
config ([`BlenderbotConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
304_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#blenderbotforcausallm
|
.md
|
No docstring available for BlenderbotForCausalLM
Methods: forward
</pt>
<tf>
|
304_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#tfblenderbotmodel
|
.md
|
No docstring available for TFBlenderbotModel
Methods: call
|
304_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#tfblenderbotforconditionalgeneration
|
.md
|
No docstring available for TFBlenderbotForConditionalGeneration
Methods: call
</tf>
<jax>
|
304_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#flaxblenderbotmodel
|
.md
|
No docstring available for FlaxBlenderbotModel
Methods: __call__
- encode
- decode
|
304_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot/#flaxblenderbotforconditionalgeneration
|
.md
|
No docstring available for FlaxBlenderbotForConditionalGeneration
Methods: __call__
- encode
- decode
</jax>
</frameworkcontent>
|
304_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
|
https://huggingface.co/docs/transformers/en/model_doc/splinter/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
305_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
|
https://huggingface.co/docs/transformers/en/model_doc/splinter/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
305_0_1
|