source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
|
203_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
for more details.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models)
forced_eos_token_id (`int`, *optional*, defaults to 2):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`.
Example:
```python
|
203_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
`eos_token_id`.
Example:
```python
>>> from transformers import BlenderbotSmallConfig, BlenderbotSmallModel
|
203_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
>>> # Initializing a BlenderbotSmall facebook/blenderbot_small-90M style configuration
>>> configuration = BlenderbotSmallConfig()
>>> # Initializing a model (with random weights) from the facebook/blenderbot_small-90M style configuration
>>> model = BlenderbotSmallModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
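As a supplementary sketch (not part of the original docstring), the arguments described above can also be overridden when building a configuration from scratch; the parameter names follow the argument list above:
```python
>>> from transformers import BlenderbotSmallConfig

>>> # Override a few of the documented defaults (illustrative values)
>>> custom_configuration = BlenderbotSmallConfig(
...     encoder_layerdrop=0.1,
...     decoder_layerdrop=0.1,
...     scale_embedding=True,
...     use_cache=False,
... )
>>> custom_configuration.encoder_layerdrop
0.1
```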
|
203_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmalltokenizer
|
.md
|
Constructs a Blenderbot-90M tokenizer based on BPE (Byte-Pair-Encoding).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
the superclass for more information regarding methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
merges_file (`str`):
Path to the merges file.
bos_token (`str`, *optional*, defaults to `"__start__"`):
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"__end__"`):
|
203_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmalltokenizer
|
.md
|
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"__end__"`):
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"__unk__"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"__null__"`):
The token used for padding, for example when batching sequences of different lengths.
kwargs (*optional*):
|
203_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmalltokenizer
|
.md
|
The token used for padding, for example when batching sequences of different lengths.
kwargs (*optional*):
Additional keyword arguments passed along to [`PreTrainedTokenizer`]
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
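A brief usage sketch added for illustration (the `facebook/blenderbot_small-90M` checkpoint is the one referenced elsewhere on this page):
```python
>>> from transformers import BlenderbotSmallTokenizer

>>> # Load the vocabulary and merges files from a pretrained BlenderbotSmall checkpoint
>>> tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")
>>> encoded = tokenizer("sample text")
>>> decoded = tokenizer.decode(encoded["input_ids"])
```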
|
203_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmalltokenizerfast
|
.md
|
Construct a "fast" BlenderbotSmall tokenizer (backed by HuggingFace's *tokenizers* library).
Args:
vocab_file (`str`):
Path to the vocabulary file.
<frameworkcontent>
<pt>
|
203_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallmodel
|
.md
|
The bare BlenderbotSmall Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
203_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BlenderbotSmallConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
203_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
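A hedged forward-pass sketch (not taken from the docstring); since this is an encoder-decoder model, decoder inputs are passed explicitly:
```python
>>> from transformers import AutoTokenizer, BlenderbotSmallModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
>>> model = BlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M")

>>> inputs = tokenizer("Studies have shown that owning a dog is good for you", return_tensors="pt")
>>> decoder_inputs = tokenizer("Studies show that", return_tensors="pt")
>>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids)

>>> last_hidden_states = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```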
|
203_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallforconditionalgeneration
|
.md
|
The BlenderbotSmall Model with a language modeling head. Can be used for summarization.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
203_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BlenderbotSmallConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
203_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallforconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
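A generation sketch assuming the `facebook/blenderbot_small-90M` checkpoint (illustrative, not from the original docstring):
```python
>>> from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
>>> model = BlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")

>>> inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="pt")
>>> reply_ids = model.generate(**inputs)
>>> print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```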
|
203_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallforcausallm
|
.md
|
No docstring available for BlenderbotSmallForCausalLM
Methods: forward
</pt>
<tf>
|
203_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#tfblenderbotsmallmodel
|
.md
|
No docstring available for TFBlenderbotSmallModel
Methods: call
|
203_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#tfblenderbotsmallforconditionalgeneration
|
.md
|
No docstring available for TFBlenderbotSmallForConditionalGeneration
Methods: call
</tf>
<jax>
|
203_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#flaxblenderbotsmallmodel
|
.md
|
No docstring available for FlaxBlenderbotSmallModel
Methods: __call__
- encode
- decode
|
203_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#flaxblenderbotforconditionalgeneration
|
.md
|
No docstring available for FlaxBlenderbotSmallForConditionalGeneration
Methods: __call__
- encode
- decode
</jax>
</frameworkcontent>
|
203_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
204_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
|
204_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#overview
|
.md
|
The ViTMatte model was proposed in [Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.
ViTMatte leverages plain [Vision Transformers](vit) for the task of image matting, which is the process of accurately estimating the foreground object in images and videos.
The abstract from the paper is the following:
|
204_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#overview
|
.md
|
*Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining. However, they have not yet conquered the problem of image matting. We hypothesize that image matting could also be boosted by ViTs and present a new efficient and robust ViT-based matting system, named ViTMatte. Our method utilizes (i) a hybrid attention mechanism combined with a convolution neck to help ViTs achieve an
|
204_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#overview
|
.md
|
named ViTMatte. Our method utilizes (i) a hybrid attention mechanism combined with a convolution neck to help ViTs achieve an excellent performance-computation trade-off in matting tasks. (ii) Additionally, we introduce the detail capture module, which just consists of simple lightweight convolutions to complement the detailed information required by matting. To the best of our knowledge, ViTMatte is the first work to unleash the potential of ViT on image matting with concise adaptation. It inherits many
|
204_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#overview
|
.md
|
ViTMatte is the first work to unleash the potential of ViT on image matting with concise adaptation. It inherits many superior properties from ViT to matting, including various pretraining strategies, concise architecture design, and flexible inference strategies. We evaluate ViTMatte on Composition-1k and Distinctions-646, the most commonly used benchmark for image matting, our method achieves state-of-the-art performance and outperforms prior matting works by a large margin.*
|
204_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#overview
|
.md
|
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/hustvl/ViTMatte).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitmatte_architecture.png"
alt="drawing" width="600"/>
<small> ViTMatte high-level overview. Taken from the <a href="https://arxiv.org/abs/2305.15272">original paper.</a> </small>
|
204_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMatte.
- A demo notebook regarding inference with [`VitMatteForImageMatting`], including background replacement, can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ViTMatte).
<Tip>
The model expects both the image and trimap (concatenated) as input. Use [`ViTMatteImageProcessor`] for this purpose.
</Tip>
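To make the tip above concrete, here is a hedged sketch of preparing an image/trimap pair with the processor; the local file names are placeholders:
```python
>>> from PIL import Image
>>> from transformers import VitMatteImageProcessor

>>> processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-small-composition-1k")

>>> # Placeholder paths; the trimap is a single-channel map of foreground/background/unknown regions
>>> image = Image.open("image.png").convert("RGB")
>>> trimap = Image.open("trimap.png").convert("L")

>>> inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
>>> inputs["pixel_values"].shape  # image and trimap are concatenated along the channel dimension (4 channels)
```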
|
204_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteconfig
|
.md
|
This is the configuration class to store the configuration of [`VitMatteForImageMatting`]. It is used to
instantiate a ViTMatte model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the ViTMatte
[hustvl/vitmatte-small-composition-1k](https://huggingface.co/hustvl/vitmatte-small-composition-1k) architecture.
|
204_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteconfig
|
.md
|
[hustvl/vitmatte-small-composition-1k](https://huggingface.co/hustvl/vitmatte-small-composition-1k) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `VitDetConfig()`):
The configuration of the backbone model.
backbone (`str`, *optional*):
|
204_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteconfig
|
.md
|
The configuration of the backbone model.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
|
204_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteconfig
|
.md
|
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
|
204_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteconfig
|
.md
|
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
hidden_size (`int`, *optional*, defaults to 384):
The number of input channels of the decoder.
batch_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the batch norm layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
204_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
convstream_hidden_sizes (`List[int]`, *optional*, defaults to `[48, 96, 192]`):
The output channels of the ConvStream module.
fusion_hidden_sizes (`List[int]`, *optional*, defaults to `[256, 128, 64, 32]`):
The output channels of the Fusion blocks.
Example:
```python
>>> from transformers import VitMatteConfig, VitMatteForImageMatting
|
204_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteconfig
|
.md
|
>>> # Initializing a ViTMatte hustvl/vitmatte-small-composition-1k style configuration
>>> configuration = VitMatteConfig()
>>> # Initializing a model (with random weights) from the hustvl/vitmatte-small-composition-1k style configuration
>>> model = VitMatteForImageMatting(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
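A short follow-up sketch (assuming `VitDetConfig` as the backbone configuration, matching the default noted above) showing how to pass a custom backbone configuration:
```python
>>> from transformers import VitDetConfig, VitMatteConfig, VitMatteForImageMatting

>>> # Build a backbone configuration explicitly instead of relying on the default
>>> backbone_config = VitDetConfig()
>>> configuration = VitMatteConfig(backbone_config=backbone_config, hidden_size=384)
>>> model = VitMatteForImageMatting(configuration)  # randomly initialized weights
```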
|
204_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteimageprocessor
|
.md
|
Constructs a ViTMatte image processor.
Args:
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
|
204_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteimageprocessor
|
.md
|
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
|
204_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteimageprocessor
|
.md
|
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
|
204_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteimageprocessor
|
.md
|
do_pad (`bool`, *optional*, defaults to `True`):
Whether to pad the image to make the width and height divisible by `size_divisibility`. Can be overridden
by the `do_pad` parameter in the `preprocess` method.
size_divisibility (`int`, *optional*, defaults to 32):
The width and height of the image will be padded to be divisible by this number.
Methods: preprocess
|
204_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteforimagematting
|
.md
|
ViTMatte framework leveraging any vision backbone for image matting.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`VitMatteConfig`]): Model configuration class with all the parameters of the model.
|
204_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitmatte.md
|
https://huggingface.co/docs/transformers/en/model_doc/vitmatte/#vitmatteforimagematting
|
.md
|
behavior.
config ([`VitMatteConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
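A hedged end-to-end sketch combining the processor and model described on this page; the file paths are placeholders:
```python
>>> import torch
>>> from PIL import Image
>>> from transformers import VitMatteImageProcessor, VitMatteForImageMatting

>>> processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-small-composition-1k")
>>> model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-small-composition-1k")

>>> image = Image.open("image.png").convert("RGB")  # placeholder paths
>>> trimap = Image.open("trimap.png").convert("L")
>>> inputs = processor(images=image, trimaps=trimap, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> alphas = outputs.alphas  # predicted alpha matte for the (padded) input
```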
|
204_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
205_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
205_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#overview
|
.md
|
The BLOOM model has been proposed with its various versions through the [BigScience Workshop](https://bigscience.huggingface.co/). BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact.
The architecture of BLOOM is essentially similar to GPT3 (auto-regressive model for next token prediction), but has been trained on 46 different languages and 13 programming languages.
|
205_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#overview
|
.md
|
Several smaller versions of the models have been trained on the same dataset. BLOOM is available in the following versions:
- [bloom-560m](https://huggingface.co/bigscience/bloom-560m)
- [bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
- [bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
- [bloom-3b](https://huggingface.co/bigscience/bloom-3b)
- [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1)
- [bloom](https://huggingface.co/bigscience/bloom) (176B parameters)
|
205_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-generation"/>
|
205_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#resources
|
.md
|
<PipelineTag pipeline="text-generation"/>
- [`BloomForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
See also:
- [Causal language modeling task guide](../tasks/language_modeling)
|
205_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#resources
|
.md
|
See also:
- [Causal language modeling task guide](../tasks/language_modeling)
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
⚡️ Inference
- A blog on [Optimization story: Bloom inference](https://huggingface.co/blog/bloom-inference-optimization).
|
205_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#resources
|
.md
|
⚡️ Inference
- A blog on [Optimization story: Bloom inference](https://huggingface.co/blog/bloom-inference-optimization).
- A blog on [Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts).
⚙️ Training
- A blog on [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed).
|
205_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
This is the configuration class to store the configuration of a [`BloomModel`]. It is used to instantiate a Bloom
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to the Bloom architecture
[bigscience/bloom](https://huggingface.co/bigscience/bloom).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
205_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 250880):
Vocabulary size of the Bloom model. Defines the maximum number of different tokens that can be represented
by the `input_ids` passed when calling [`BloomModel`]. Check [this
|
205_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
by the `input_ids` passed when calling [`BloomModel`]. Check [this
discussion](https://huggingface.co/bigscience/bloom/discussions/120#633d28389addb8530b406c2a) on how the
`vocab_size` has been defined.
hidden_size (`int`, *optional*, defaults to 64):
Dimensionality of the embeddings and hidden states.
n_layer (`int`, *optional*, defaults to 2):
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 8):
|
205_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
n_head (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
The epsilon to use in the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
205_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
apply_residual_connection_post_layernorm (`bool`, *optional*, defaults to `False`):
If enabled, use the layer norm of the hidden states as the residual in the transformer blocks.
hidden_dropout (`float`, *optional*, defaults to 0.1):
Dropout rate applied to the hidden states in the bias-dropout-add operation.
attention_dropout (`float`, *optional*, defaults to 0.1):
Dropout rate applied to the attention probabilities.
|
205_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.1):
Dropout rate applied to the attention probabilities.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
pretraining_tp (`int`, *optional*, defaults to `1`):
Experimental feature. Tensor parallelism rank used during pretraining with Megatron. Please refer to [this
|
205_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
Experimental feature. Tensor parallelism rank used during pretraining with Megatron. Please refer to [this
document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
issue](https://github.com/pytorch/pytorch/issues/76232). Note also that this is enabled only when
`slow_but_exact=True`.
slow_but_exact (`bool`, *optional*, defaults to `False`):
|
205_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
`slow_but_exact=True`.
slow_but_exact (`bool`, *optional*, defaults to `False`):
Experimental feature. Whether to use a slow but exact implementation of the attention mechanism. While
merging the TP rank tensors, due to slicing operations the results may be slightly different between the
model trained on Megatron and our model. Please refer to [this
issue](https://github.com/pytorch/pytorch/issues/76232). A solution to obtain more accurate results is to
|
205_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
issue](https://github.com/pytorch/pytorch/issues/76232). A solution to obtain more accurate results is to
enable this feature. Enabling it will slow down inference. This will probably be
resolved in the future once the main model has been fine-tuned with TP_rank=1.
Example:
```python
>>> from transformers import BloomConfig, BloomModel
|
205_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomconfig
|
.md
|
>>> # Initializing a Bloom configuration
>>> configuration = BloomConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = BloomModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all
|
205_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomtokenizerfast
|
.md
|
Construct a "fast" Bloom tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
```python
>>> from transformers import BloomTokenizerFast
|
205_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomtokenizerfast
|
.md
|
>>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")
>>> tokenizer("Hello world")["input_ids"]
[59414, 8876]
|
205_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomtokenizerfast
|
.md
|
>>> tokenizer(" Hello world")["input_ids"]
[86153, 8876]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
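A sketch of the workaround mentioned above (`add_prefix_space=True` is the documented argument; the checkpoint is the one used in the snippets above):
```python
>>> from transformers import BloomTokenizerFast

>>> # Instantiate the tokenizer so that a leading word is treated as if it were preceded by a space
>>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]  # now encoded as if the text started with a space
```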
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
|
205_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomtokenizerfast
|
.md
|
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
|
205_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomtokenizerfast
|
.md
|
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
The end of sequence token.
|
205_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomtokenizerfast
|
.md
|
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
The end of sequence token.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The Bloom tokenizer detects the beginning of words by the preceding space.)
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
|
205_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomtokenizerfast
|
.md
|
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
Methods: all
<frameworkcontent>
<pt>
|
205_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloommodel
|
.md
|
The bare Bloom Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
205_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloommodel
|
.md
|
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BloomConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
205_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloommodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
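A hedged forward-pass sketch; `bigscience/bloom-560m` is the smallest checkpoint listed in the overview above:
```python
>>> import torch
>>> from transformers import AutoTokenizer, BloomModel

>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
>>> model = BloomModel.from_pretrained("bigscience/bloom-560m")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```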
|
205_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforcausallm
|
.md
|
The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
205_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforcausallm
|
.md
|
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BloomConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
205_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
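A text-generation sketch (illustrative, assuming the `bigscience/bloom-560m` checkpoint from the overview):
```python
>>> from transformers import AutoTokenizer, BloomForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
>>> model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")

>>> inputs = tokenizer("Hello, I am a language model", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```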
|
205_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforsequenceclassification
|
.md
|
The Bloom Model transformer with a sequence classification head on top (linear layer).
[`BloomForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
205_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
205_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
205_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`BloomConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
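To illustrate the last-token classification described above, a hedged sketch; note that a classification head placed on top of a plain BLOOM checkpoint starts with randomly initialized weights:
```python
>>> import torch
>>> from transformers import AutoTokenizer, BloomForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
>>> # num_labels is an illustrative choice; the sequence classification head is newly initialized here
>>> model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m", num_labels=2)

>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # computed from the last non-padding token, as described above
>>> predicted_class_id = logits.argmax(dim=-1).item()
```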
|
205_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomfortokenclassification
|
.md
|
Bloom Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
205_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomfortokenclassification
|
.md
|
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BloomConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
205_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomfortokenclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
205_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforquestionanswering
|
.md
|
The BLOOM Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
|
205_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BloomConfig`]): Model configuration class with all the parameters of the model.
|
205_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#bloomforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`BloomConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<jax>
|
205_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#flaxbloommodel
|
.md
|
No docstring available for FlaxBloomModel
Methods: __call__
|
205_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bloom.md
|
https://huggingface.co/docs/transformers/en/model_doc/bloom/#flaxbloomforcausallm
|
.md
|
No docstring available for FlaxBloomForCausalLM
Methods: __call__
</jax>
</frameworkcontent>
|
205_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
206_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
206_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only; we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip>
|
206_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#overview
|
.md
|
The Speech2Text2 model is used together with [Wav2Vec2](wav2vec2) for Speech Translation models proposed in
[Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by
Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
Speech2Text2 is a *decoder-only* transformer model that can be used with any speech *encoder-only* model, such as
[Wav2Vec2](wav2vec2) or [HuBERT](hubert) for Speech-to-Text tasks. Please refer to the
|
206_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#overview
|
.md
|
[Wav2Vec2](wav2vec2) or [HuBERT](hubert) for Speech-to-Text tasks. Please refer to the
[SpeechEncoderDecoder](speech-encoder-decoder) class on how to combine Speech2Text2 with any speech *encoder-only*
model.
This model was contributed by [Patrick von Platen](https://huggingface.co/patrickvonplaten).
The original code can be found [here](https://github.com/pytorch/fairseq/blob/1f7ef9ed1e1061f8c7f88f8b94c7186834398690/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
|
206_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#usage-tips
|
.md
|
- Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see
the [official models](https://huggingface.co/models?other=speech2text2).
- Speech2Text2 is always used within the [SpeechEncoderDecoder](speech-encoder-decoder) framework.
- Speech2Text2's tokenizer is based on [fastBPE](https://github.com/glample/fastBPE).
|
206_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#inference
|
.md
|
Speech2Text2's [`SpeechEncoderDecoderModel`] accepts raw waveform input values from speech and
makes use of [`~generation.GenerationMixin.generate`] to translate the input speech
autoregressively to the target language.
The [`Wav2Vec2FeatureExtractor`] class is responsible for preprocessing the input speech and
[`Speech2Text2Tokenizer`] decodes the generated target tokens to the target string. The
[`Speech2Text2Processor`] wraps [`Wav2Vec2FeatureExtractor`] and
|
206_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#inference
|
.md
|
[`Speech2Text2Processor`] wraps [`Wav2Vec2FeatureExtractor`] and
[`Speech2Text2Tokenizer`] into a single instance to both extract the input features and decode the
predicted token ids.
- Step-by-step Speech Translation
```python
>>> import torch
>>> from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> import soundfile as sf
|
206_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#inference
|
.md
|
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
>>> processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
>>> def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)
|
206_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#inference
|
.md
|
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)
>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
>>> generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"])
|
206_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#inference
|
.md
|
>>> transcription = processor.batch_decode(generated_ids)
```
- Speech Translation via Pipelines
The automatic speech recognition pipeline can also be used to translate speech in just a couple of lines of code:
```python
>>> from datasets import load_dataset
>>> from transformers import pipeline
|
206_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#inference
|
.md
|
>>> librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> asr = pipeline(
... "automatic-speech-recognition",
... model="facebook/s2t-wav2vec2-large-en-de",
... feature_extractor="facebook/s2t-wav2vec2-large-en-de",
... )
>>> translation_de = asr(librispeech_en[0]["file"])
```
See [model hub](https://huggingface.co/models?filter=speech2text2) to look for Speech2Text2 checkpoints.
|
206_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#resources
|
.md
|
- [Causal language modeling task guide](../tasks/language_modeling)
|
206_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
This is the configuration class to store the configuration of a [`Speech2Text2ForCausalLM`]. It is used to
instantiate a Speech2Text2 model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Speech2Text2
[facebook/s2t-wav2vec2-large-en-de](https://huggingface.co/facebook/s2t-wav2vec2-large-en-de) architecture.
|
206_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
[facebook/s2t-wav2vec2-large-en-de](https://huggingface.co/facebook/s2t-wav2vec2-large-en-de) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the Speech2Text2 model. Defines the number of different tokens that can be represented by
the `input_ids` passed when calling [`Speech2Text2ForCausalLM`]
|
206_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
the `input_ids` passed when calling [`Speech2Text2ForCausalLM`]
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
|
206_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text_2.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text_2/#speech2text2config
|
.md
|
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the pooler. If string, `"gelu"`, `"relu"`,
`"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
|
206_6_3
|