source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#model-optimizations-flash-attention
|
.md
|
The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging [Flash Attention](../perf_train_gpu_one#flash-attention-2), which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
```bash
pip install -U flash-attn --no-build-isolation
```
|
258_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#model-optimizations-flash-attention
|
.md
|
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Attention 2, simply modify the code snippet above as follows:
```diff
|
258_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#model-optimizations-flash-attention
|
.md
|
To load and run a model using Flash Attention 2, simply modify the code snippet above as follows:
```diff
model = Idefics2ForConditionalGeneration.from_pretrained(
"HuggingFaceM4/idefics2-8b",
+ torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2",
).to(device)
```
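For reference, a complete loading sketch with Flash Attention 2 enabled might look like the following (this assumes a CUDA device and that `flash-attn` is installed as shown above; the checkpoint is the one used throughout this page):
```python
import torch
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

device = "cuda"  # Flash Attention 2 requires a supported GPU
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = Idefics2ForConditionalGeneration.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    torch_dtype=torch.float16,  # half precision, as recommended above
    attn_implementation="flash_attention_2",
).to(device)
```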
|
258_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#shrinking-down-idefics2-using-quantization
|
.md
|
As the Idefics2 model has 8 billion parameters, it would require about 16GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using [quantization](../quantization.md). If the model is quantized to 4 bits (or half a byte per parameter), that requires only about 3.5GB of RAM.
|
258_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#shrinking-down-idefics2-using-quantization
|
.md
|
Quantizing a model is as simple as passing a `quantization_config` to the model. One can modify the code snippet above with the changes below. We'll leverage bitsandbytes 4-bit quantization (but refer to [this page](../quantization.md) for other quantization methods):
```diff
+ from transformers import BitsAndBytesConfig
|
258_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#shrinking-down-idefics2-using-quantization
|
.md
|
+ quantization_config = BitsAndBytesConfig(
+ load_in_4bit=True,
+ bnb_4bit_quant_type="nf4",
+ bnb_4bit_use_double_quant=True,
+ bnb_4bit_compute_dtype=torch.float16
+ )
model = Idefics2ForConditionalGeneration.from_pretrained(
"HuggingFaceM4/idefics2-8b",
+ torch_dtype=torch.float16,
+ quantization_config=quantization_config,
).to(device)
```
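For reference, a complete loading sketch with the 4-bit configuration applied might look like the following (this assumes a CUDA device and that `bitsandbytes` is installed; depending on your versions, the quantized model is placed on the GPU at load time, so an explicit `.to(device)` call may be unnecessary):
```python
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Idefics2ForConditionalGeneration

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = Idefics2ForConditionalGeneration.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    torch_dtype=torch.float16,
    quantization_config=quantization_config,  # weights are quantized to 4 bits at load time
)
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
```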
|
258_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Idefics2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
258_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#resources
|
.md
|
- A notebook on how to fine-tune Idefics2 on a custom dataset using the [Trainer](../main_classes/trainer.md) can be found [here](https://colab.research.google.com/drive/1NtcTgRbSBKN7pYD3Vdx1j9m8pt3fhFDB?usp=sharing). It supports full fine-tuning as well as (quantized) LoRA.
- A script regarding how to fine-tune Idefics2 using the TRL library can be found [here](https://gist.github.com/edbeeching/228652fc6c2b29a1641be5a5778223cb).
|
258_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#resources
|
.md
|
- A demo notebook on fine-tuning Idefics2 for JSON extraction use cases can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Idefics2). 🌎
|
258_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2config
|
.md
|
This is the configuration class to store the configuration of an [`Idefics2Model`]. It is used to instantiate an
Idefics2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Idefics2
[HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) architecture.
|
258_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2config
|
.md
|
[HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should cache the key/value pairs of the attention mechanism.
image_token_id (`int`, *optional*, defaults to 32001):
The id of the "image" token.
|
258_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2config
|
.md
|
image_token_id (`int`, *optional*, defaults to 32001):
The id of the "image" token.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether or not to tie the word embeddings with the token embeddings.
vision_config (`Idefics2VisionConfig` or `dict`, *optional*):
Custom vision config or dict
perceiver_config (`Idefics2PerceiverConfig` or `dict`, *optional*):
Custom perceiver config or dict
text_config (`MistralConfig` or `dict`, *optional*):
Custom text config or dict for the text model
|
258_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2config
|
.md
|
text_config (`MistralConfig` or `dict`, *optional*):
Custom text config or dict for the text model
Example:
```python
>>> from transformers import Idefics2Model, Idefics2Config
>>> # Initializing configuration
>>> configuration = Idefics2Config()
>>> # Initializing a model from the configuration
>>> model = Idefics2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
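The documented arguments above can also be overridden directly; a minimal sketch (values are illustrative):
```python
>>> from transformers import Idefics2Config, Idefics2Model

>>> # A sketch overriding a few of the documented top-level arguments (illustrative values)
>>> configuration = Idefics2Config(
...     use_cache=True,
...     image_token_id=32001,
...     tie_word_embeddings=False,
... )
>>> model = Idefics2Model(configuration)
>>> model.config.image_token_id
32001
```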
|
258_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2model
|
.md
|
Idefics2 model consisting of a SigLIP vision encoder and a Mistral language decoder.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
258_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2model
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Idefics2Config`] or [`Idefics2VisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
258_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2model
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
258_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
The Idefics2 Model with a language modeling head. It is made up of a SigLIP vision encoder and a Mistral language decoder, with a language modeling head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
258_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`Idefics2Config`] or [`Idefics2VisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
258_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
Constructs an Idefics2 image processor.
Args:
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB. This is useful if the input image is of a different format e.g. RGBA.
Only has an effect if the input image is in the PIL format.
do_resize (`bool`, *optional*, defaults to `True`):
|
258_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
Only has an effect if the input image is in the PIL format.
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image. The longest edge of the image is resized to be <= `size["longest_edge"]`, with the
shortest edge resized to keep the input aspect ratio, with a minimum size of `size["shortest_edge"]`.
size (`Dict`, *optional*):
Controls the size of the output image. This is a dictionary containing the keys "shortest_edge" and "longest_edge".
|
258_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
Controls the size of the output image. This is a dictionary containing the keys "shortest_edge" and "longest_edge".
resample (`Resampling`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use when resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image. If set to `True`, the image is rescaled to have pixel values between 0 and 1.
rescale_factor (`float`, *optional*, defaults to `1/255`):
|
258_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
rescale_factor (`float`, *optional*, defaults to `1/255`):
Rescale factor to rescale the image by if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. If set to `True`, the image is normalized to have a mean of `image_mean` and
a standard deviation of `image_std`.
image_mean (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_MEAN`):
|
258_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
a standard deviation of `image_std`.
image_mean (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats with length equal to the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_STD`):
|
258_8_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
image_std (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats with length equal to the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
|
258_8_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
Can be overridden by the `image_std` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Whether or not to pad the images to the largest height and width in the batch and number of images per
sample in the batch, such that the returned tensor is of shape (batch_size, max_num_images, num_channels, max_height, max_width).
do_image_splitting (`bool`, *optional*, defaults to `False`):
|
258_8_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
do_image_splitting (`bool`, *optional*, defaults to `False`):
Whether to split the image into a sequence of 4 equal sub-images concatenated with the original image. That
strategy was first introduced in https://arxiv.org/abs/2311.06607.
Methods: preprocess
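As a rough illustration of the options above, here is a hedged sketch of running the image processor on a single image (the image URL is only an example; the output shape follows the `do_pad` and `do_image_splitting` descriptions above):
```python
import requests
from PIL import Image
from transformers import Idefics2ImageProcessor

# A sketch: with do_image_splitting=True, each image becomes 4 sub-images plus the original image
image_processor = Idefics2ImageProcessor(do_image_splitting=True)
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
# pixel_values has shape (batch_size, num_images, num_channels, height, width), as described for do_pad above
print(inputs["pixel_values"].shape)
```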
Constructs an IDEFICS2 processor which wraps a LLaMA tokenizer and an IDEFICS2 image processor into a single processor.
[`Idefics2Processor`] offers all the functionalities of [`Idefics2ImageProcessor`] and [`LlamaTokenizerFast`]. See
|
258_8_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
[`Idefics2Processor`] offers all the functionalities of [`Idefics2ImageProcessor`] and [`LlamaTokenizerFast`]. See
the docstring of [`~Idefics2Processor.__call__`] and [`~Idefics2Processor.decode`] for more information.
Args:
image_processor (`Idefics2ImageProcessor`):
An instance of [`Idefics2ImageProcessor`]. The image processor is a required input.
tokenizer (`PreTrainedTokenizerBase`, *optional*):
|
258_8_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
tokenizer (`PreTrainedTokenizerBase`, *optional*):
An instance of [`PreTrainedTokenizerBase`]. This should correspond with the model's text model. The tokenizer is a required input.
image_seq_len (`int`, *optional*, defaults to 64):
The length of the image sequence, i.e. the number of `<image>` tokens per image in the input.
This parameter is used to build the string from the input prompt and image tokens and should match the
config.perceiver_config.resampler_n_latents value for the model used.
|
258_8_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics2.md
|
https://huggingface.co/docs/transformers/en/model_doc/idefics2/#idefics2forconditionalgeneration
|
.md
|
config.perceiver_config.resampler_n_latents value for the model used.
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string.
Methods: __call__
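To make the `image_seq_len` behaviour concrete, here is a hedged sketch of calling the processor on one image and one prompt (the URL and prompt are illustrative; each `<image>` placeholder in the text is expanded into a sequence of image tokens):
```python
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# The "<image>" placeholder is expanded into image_seq_len image tokens by the processor
prompt = "User: <image>What is shown in this image?<end_of_utterance>\nAssistant:"
inputs = processor(text=prompt, images=[image], return_tensors="pt")

print(inputs["input_ids"].shape)     # text tokens, including the expanded image tokens
print(inputs["pixel_values"].shape)  # (batch_size, num_images, num_channels, height, width)
```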
|
258_8_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
259_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
259_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#overview
|
.md
|
The Informer model was proposed in [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
This method introduces a Probabilistic Attention mechanism to select the "active" queries rather than the "lazy" queries and provides a sparse Transformer thus mitigating the quadratic compute and memory requirements of vanilla attention.
|
259_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#overview
|
.md
|
The abstract from the paper is the following:
|
259_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#overview
|
.md
|
*Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being
|
259_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#overview
|
.md
|
to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage,
|
259_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#overview
|
.md
|
characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which
|
259_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#overview
|
.md
|
conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.*
|
259_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#overview
|
.md
|
This model was contributed by [elisim](https://huggingface.co/elisim) and [kashif](https://huggingface.co/kashif).
The original code can be found [here](https://github.com/zhouhaoyi/Informer2020).
|
259_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- Check out the Informer blog post on the HuggingFace blog: [Multivariate Probabilistic Time Series Forecasting with Informer](https://huggingface.co/blog/informer)
|
259_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
This is the configuration class to store the configuration of an [`InformerModel`]. It is used to instantiate an
Informer model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Informer
[huggingface/informer-tourism-monthly](https://huggingface.co/huggingface/informer-tourism-monthly) architecture.
|
259_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
[huggingface/informer-tourism-monthly](https://huggingface.co/huggingface/informer-tourism-monthly) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
prediction_length (`int`):
The prediction length for the decoder. In other words, the prediction horizon of the model. This value is
typically dictated by the dataset and we recommend setting it appropriately.
|
259_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
typically dictated by the dataset and we recommend setting it appropriately.
context_length (`int`, *optional*, defaults to `prediction_length`):
The context length for the encoder. If `None`, the context length will be the same as the
`prediction_length`.
distribution_output (`string`, *optional*, defaults to `"student_t"`):
The distribution emission head for the model. Could be either "student_t", "normal" or "negative_binomial".
loss (`string`, *optional*, defaults to `"nll"`):
|
259_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
loss (`string`, *optional*, defaults to `"nll"`):
The loss function for the model corresponding to the `distribution_output` head. For parametric
distributions it is the negative log likelihood (nll), which is currently the only supported one.
input_size (`int`, *optional*, defaults to 1):
The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of
multivariate targets.
scaling (`string` or `bool`, *optional*, defaults to `"mean"`):
|
259_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
multivariate targets.
scaling (`string` or `bool`, *optional*, defaults to `"mean"`):
Whether to scale the input targets via "mean" scaler, "std" scaler or no scaler if `None`. If `True`, the
scaler is set to "mean".
lags_sequence (`list[int]`, *optional*, defaults to `[1, 2, 3, 4, 5, 6, 7]`):
The lags of the input time series as covariates, often dictated by the frequency of the data. Default is
`[1, 2, 3, 4, 5, 6, 7]`, but we recommend changing it based on the dataset.
|
259_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
`[1, 2, 3, 4, 5, 6, 7]`, but we recommend changing it based on the dataset.
num_time_features (`int`, *optional*, defaults to 0):
The number of time features in the input time series.
num_dynamic_real_features (`int`, *optional*, defaults to 0):
The number of dynamic real valued features.
num_static_categorical_features (`int`, *optional*, defaults to 0):
The number of static categorical features.
num_static_real_features (`int`, *optional*, defaults to 0):
|
259_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
The number of static categorical features.
num_static_real_features (`int`, *optional*, defaults to 0):
The number of static real valued features.
cardinality (`list[int]`, *optional*):
The cardinality (number of different values) for each of the static categorical features. Should be a list
of integers, having the same length as `num_static_categorical_features`. Cannot be `None` if
`num_static_categorical_features` is > 0.
embedding_dimension (`list[int]`, *optional*):
|
259_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
`num_static_categorical_features` is > 0.
embedding_dimension (`list[int]`, *optional*):
The dimension of the embedding for each of the static categorical features. Should be a list of integers,
having the same length as `num_static_categorical_features`. Cannot be `None` if
`num_static_categorical_features` is > 0.
d_model (`int`, *optional*, defaults to 64):
Dimensionality of the transformer layers.
encoder_layers (`int`, *optional*, defaults to 2):
Number of encoder layers.
|
259_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
Dimensionality of the transformer layers.
encoder_layers (`int`, *optional*, defaults to 2):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 2):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 2):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 2):
Number of attention heads for each attention layer in the Transformer decoder.
|
259_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 32):
Dimension of the "intermediate" (often named feed-forward) layer in encoder.
decoder_ffn_dim (`int`, *optional*, defaults to 32):
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
|
259_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and decoder. If string, `"gelu"` and
`"relu"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the encoder and decoder.
encoder_layerdrop (`float`, *optional*, defaults to 0.1):
The dropout probability for the attention and fully connected layers for each encoder layer.
|
259_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
The dropout probability for the attention and fully connected layers for each encoder layer.
decoder_layerdrop (`float`, *optional*, defaults to 0.1):
The dropout probability for the attention and fully connected layers for each decoder layer.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability used between the two layers of the feed-forward networks.
|
259_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
The dropout probability used between the two layers of the feed-forward networks.
num_parallel_samples (`int`, *optional*, defaults to 100):
The number of samples to generate in parallel for each time step of inference.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated normal weight initialization distribution.
use_cache (`bool`, *optional*, defaults to `True`):
Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
|
259_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
attention_type (`str`, *optional*, defaults to "prob"):
Attention used in the encoder. This can be set to "prob" (Informer's ProbAttention) or "full" (vanilla
transformer's canonical self-attention).
sampling_factor (`int`, *optional*, defaults to 5):
ProbSparse sampling factor (only has an effect when `attention_type`="prob"). It is used to control the
reduced query matrix (Q_reduce) input length.
|
259_3_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
reduced query matrix (Q_reduce) input length.
distil (`bool`, *optional*, defaults to `True`):
Whether to use distilling in the encoder.
Example:
```python
>>> from transformers import InformerConfig, InformerModel
|
259_3_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerconfig
|
.md
|
>>> # Initializing an Informer configuration with 12 time steps for prediction
>>> configuration = InformerConfig(prediction_length=12)
>>> # Randomly initializing a model (with random weights) from the configuration
>>> model = InformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
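Beyond the defaults, the dataset-specific arguments described above are usually set explicitly; a minimal sketch (values are illustrative):
```python
>>> from transformers import InformerConfig, InformerModel

>>> # A sketch: predicting 24 steps from 48 steps of context, with illustrative lags and time features
>>> configuration = InformerConfig(
...     prediction_length=24,
...     context_length=48,
...     lags_sequence=[1, 2, 3, 4, 5, 6, 7],
...     num_time_features=2,
...     distribution_output="student_t",
...     attention_type="prob",
... )
>>> model = InformerModel(configuration)
```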
|
259_3_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informermodel
|
.md
|
The bare Informer Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
259_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informermodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`InformerConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
259_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informermodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
259_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerforprediction
|
.md
|
The Informer Model with a distribution head on top for time-series forecasting.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
259_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerforprediction
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`InformerConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
259_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/informer.md
|
https://huggingface.co/docs/transformers/en/model_doc/informer/#informerforprediction
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
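Here is a hedged sketch of running probabilistic forecasting with the pretrained tourism-monthly checkpoint; it assumes the small prepared batch published in the `hf-internal-testing/tourism-monthly-batch` dataset on the Hub:
```python
import torch
from huggingface_hub import hf_hub_download
from transformers import InformerForPrediction

# A prepared batch of past values, time features and static features for the tourism-monthly dataset
file = hf_hub_download(
    repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
)
batch = torch.load(file)

model = InformerForPrediction.from_pretrained("huggingface/informer-tourism-monthly")

# Autoregressively sample num_parallel_samples trajectories over the prediction horizon
outputs = model.generate(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    future_time_features=batch["future_time_features"],
)
mean_prediction = outputs.sequences.mean(dim=1)  # average the samples into a point forecast
```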
|
259_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
260_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
260_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#overview
|
.md
|
The HerBERT model was proposed in [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and
Ireneusz Gawlik. It is a BERT-based language model trained on Polish corpora using only the MLM objective with dynamic
masking of whole words.
The abstract from the paper is the following:
*In recent years, a series of Transformer-based models unlocked major improvements in general natural language
|
260_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#overview
|
.md
|
*In recent years, a series of Transformer-based models unlocked major improvements in general natural language
understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which
allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of
languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language
|
260_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#overview
|
.md
|
languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language
understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing
datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new
sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and
|
260_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#overview
|
.md
|
sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and
promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and
applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language,
which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an
|
260_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#overview
|
.md
|
which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an
extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based
models.*
This model was contributed by [rmroczkowski](https://huggingface.co/rmroczkowski). The original code can be found
[here](https://github.com/allegro/HerBERT).
|
260_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#usage-example
|
.md
|
```python
>>> from transformers import HerbertTokenizer, RobertaModel
>>> tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")
>>> encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt")
>>> outputs = model(encoded_input)
|
260_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#usage-example
|
.md
|
>>> # HerBERT can also be loaded using AutoTokenizer and AutoModel:
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1")
```
<Tip>
The HerBERT implementation is the same as `BERT` except for the tokenization method. Refer to the [BERT documentation](bert)
for API reference and examples.
</Tip>
|
260_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#herberttokenizer
|
.md
|
Construct a BPE tokenizer for HerBERT.
Peculiarities:
- uses BERT's pre-tokenizer: BaseTokenizer splits tokens on spaces, and also on punctuation. Each occurrence of a
punctuation character will be treated separately.
- Such pretokenized input is BPE subtokenized
This tokenizer inherits from [`XLMTokenizer`] which contains most of the methods. Users should refer to the
superclass for more information regarding methods.
|
260_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#herberttokenizerfast
|
.md
|
Construct a "Fast" BPE tokenizer for HerBERT (backed by HuggingFace's *tokenizers* library).
Peculiarities:
- uses BERT's pre-tokenizer: BertPreTokenizer splits tokens on spaces, and also on punctuation. Each occurrence of
a punctuation character will be treated separately.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the methods. Users should refer to the
superclass for more information regarding methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
|
260_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/herbert.md
|
https://huggingface.co/docs/transformers/en/model_doc/herbert/#herberttokenizerfast
|
.md
|
superclass for more information regarding methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
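A hedged sketch of loading the fast tokenizer and inspecting the pre-tokenization + BPE behaviour described above (this assumes the KLEJ tokenizer checkpoint used in the usage example provides the vocabulary and merges files):
```python
>>> from transformers import HerbertTokenizerFast

>>> tokenizer = HerbertTokenizerFast.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> # Punctuation is split off during pre-tokenization, then each word is BPE sub-tokenized
>>> tokens = tokenizer.tokenize("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.")
>>> encoded = tokenizer("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt")
```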
|
260_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
261_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
261_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#overview
|
.md
|
The EnCodec neural codec model was proposed in [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
The abstract from the paper is the following:
|
261_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#overview
|
.md
|
*We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall
|
261_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#overview
|
.md
|
introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training
|
261_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#overview
|
.md
|
faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz
|
261_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#overview
|
.md
|
speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio.*
|
261_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#overview
|
.md
|
This model was contributed by [Matthijs](https://huggingface.co/Matthijs), [Patrick Von Platen](https://huggingface.co/patrickvonplaten) and [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/facebookresearch/encodec).
|
261_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#usage-example
|
.md
|
Here is a quick example of how to encode and decode audio using this model:
```python
>>> from datasets import load_dataset, Audio
>>> from transformers import EncodecModel, AutoProcessor
>>> librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
|
261_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#usage-example
|
.md
|
>>> model = EncodecModel.from_pretrained("facebook/encodec_24khz")
>>> processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
>>> librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
>>> audio_sample = librispeech_dummy[-1]["audio"]["array"]
>>> inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
|
261_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#usage-example
|
.md
|
>>> encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
>>> audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
>>> # or the equivalent with a forward pass
>>> audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```
|
261_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
This is the configuration class to store the configuration of an [`EncodecModel`]. It is used to instantiate an
Encodec model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the
[facebook/encodec_24khz](https://huggingface.co/facebook/encodec_24khz) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
261_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
target_bandwidths (`List[float]`, *optional*, defaults to `[1.5, 3.0, 6.0, 12.0, 24.0]`):
The range of different bandwidths the model can encode audio with.
sampling_rate (`int`, *optional*, defaults to 24000):
The sampling rate at which the audio waveform should be digitalized expressed in hertz (Hz).
|
261_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
The sampling rate at which the audio waveform should be digitalized expressed in hertz (Hz).
audio_channels (`int`, *optional*, defaults to 1):
Number of channels in the audio data. Either 1 for mono or 2 for stereo.
normalize (`bool`, *optional*, defaults to `False`):
Whether the audio shall be normalized when passed.
chunk_length_s (`float`, *optional*):
If defined, the audio is pre-processed into chunks of length `chunk_length_s` and then encoded.
overlap (`float`, *optional*):
|
261_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
If defined, the audio is pre-processed into chunks of length `chunk_length_s` and then encoded.
overlap (`float`, *optional*):
Defines the overlap between each chunk. It is used to compute the `chunk_stride` using the following
formula: `int((1.0 - self.overlap) * self.chunk_length)`.
hidden_size (`int`, *optional*, defaults to 128):
Intermediate representation dimension.
num_filters (`int`, *optional*, defaults to 32):
Number of convolution kernels of first `EncodecConv1d` down sampling layer.
|
261_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
num_filters (`int`, *optional*, defaults to 32):
Number of convolution kernels of first `EncodecConv1d` down sampling layer.
num_residual_layers (`int`, *optional*, defaults to 1):
Number of residual layers.
upsampling_ratios (`Sequence[int]` , *optional*, defaults to `[8, 5, 4, 2]`):
Kernel size and stride ratios. The encoder uses downsampling ratios instead of upsampling ratios, hence it
will use the ratios in the reverse order to the ones specified here, which must match the decoder order.
|
261_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
will use the ratios in the reverse order to the ones specified here, which must match the decoder order.
norm_type (`str`, *optional*, defaults to `"weight_norm"`):
Normalization method. Should be in `["weight_norm", "time_group_norm"]`
kernel_size (`int`, *optional*, defaults to 7):
Kernel size for the initial convolution.
last_kernel_size (`int`, *optional*, defaults to 7):
Kernel size for the last convolution layer.
residual_kernel_size (`int`, *optional*, defaults to 3):
|
261_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
Kernel size for the last convolution layer.
residual_kernel_size (`int`, *optional*, defaults to 3):
Kernel size for the residual layers.
dilation_growth_rate (`int`, *optional*, defaults to 2):
How much to increase the dilation with each layer.
use_causal_conv (`bool`, *optional*, defaults to `True`):
Whether to use fully causal convolution.
pad_mode (`str`, *optional*, defaults to `"reflect"`):
Padding mode for the convolutions.
compress (`int`, *optional*, defaults to 2):
|
261_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
Padding mode for the convolutions.
compress (`int`, *optional*, defaults to 2):
Reduced dimensionality in residual branches (from Demucs v3).
num_lstm_layers (`int`, *optional*, defaults to 2):
Number of LSTM layers at the end of the encoder.
trim_right_ratio (`float`, *optional*, defaults to 1.0):
Ratio for trimming at the right of the transposed convolution under the `use_causal_conv = True` setup. If
equal to 1.0, it means that all the trimming is done at the right.
|
261_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
equal to 1.0, it means that all the trimming is done at the right.
codebook_size (`int`, *optional*, defaults to 1024):
Number of discrete codes that make up the VQ-VAE.
codebook_dim (`int`, *optional*):
Dimension of the codebook vectors. If not defined, uses `hidden_size`.
use_conv_shortcut (`bool`, *optional*, defaults to `True`):
Whether to use a convolutional layer as the 'skip' connection in the `EncodecResnetBlock` block. If False,
an identity function will be used, giving a generic residual connection.
|
261_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
an identity function will be used, giving a generic residual connection.
Example:
```python
>>> from transformers import EncodecModel, EncodecConfig
|
261_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecconfig
|
.md
|
>>> # Initializing a "facebook/encodec_24khz" style configuration
>>> configuration = EncodecConfig()
>>> # Initializing a model (with random weights) from the "facebook/encodec_24khz" style configuration
>>> model = EncodecModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
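The documented arguments can be combined for other setups as well; a minimal sketch of a stereo, 48 kHz-style configuration (illustrative values, not necessarily those of any released checkpoint):
```python
>>> # A sketch: stereo audio at 48 kHz, processed in 1-second chunks with 1% overlap (illustrative values)
>>> stereo_configuration = EncodecConfig(
...     sampling_rate=48_000,
...     audio_channels=2,
...     normalize=True,
...     chunk_length_s=1.0,
...     overlap=0.01,
... )
>>> stereo_model = EncodecModel(stereo_configuration)
```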
|
261_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecfeatureextractor
|
.md
|
Constructs an EnCodec feature extractor.
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
Instantiating a feature extractor with the defaults will yield a similar configuration to that of the
[facebook/encodec_24khz](https://huggingface.co/facebook/encodec_24khz) architecture.
Args:
|
261_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecfeatureextractor
|
.md
|
[facebook/encodec_24khz](https://huggingface.co/facebook/encodec_24khz) architecture.
Args:
feature_size (`int`, *optional*, defaults to 1):
The feature dimension of the extracted features. Use 1 for mono, 2 for stereo.
sampling_rate (`int`, *optional*, defaults to 24000):
The sampling rate at which the audio waveform should be digitalized expressed in hertz (Hz).
padding_value (`float`, *optional*, defaults to 0.0):
The value that is used to fill the padding values.
chunk_length_s (`float`, *optional*):
|
261_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecfeatureextractor
|
.md
|
The value that is used to fill the padding values.
chunk_length_s (`float`, *optional*):
If defined, the audio is pre-processed into chunks of length `chunk_length_s` and then encoded.
overlap (`float`, *optional*):
Defines the overlap between each chunk. It is used to compute the `chunk_stride` using the following
formula: `int((1.0 - self.overlap) * self.chunk_length)`.
Methods: __call__
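To make the chunking arithmetic concrete, here is a small sketch of the relationship spelled out in the `overlap` description above (values are illustrative):
```python
# Illustrative values; the stride formula follows the overlap description above
sampling_rate = 24_000
chunk_length_s = 1.0
overlap = 0.01

chunk_length = int(chunk_length_s * sampling_rate)  # samples per chunk -> 24000
chunk_stride = int((1.0 - overlap) * chunk_length)  # samples between chunk starts -> 23760
print(chunk_length, chunk_stride)
```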
|
261_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecmodel
|
.md
|
The EnCodec neural audio codec model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
|
261_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encodec.md
|
https://huggingface.co/docs/transformers/en/model_doc/encodec/#encodecmodel
|
.md
|
and behavior.
Parameters:
config ([`EncodecConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: decode
- encode
- forward
|
261_5_1
|