source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforconditionalgeneration | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 395_11_2 |
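To make the distinction above concrete, here is a minimal sketch contrasting config-based initialization (random weights) with [`~PreTrainedModel.from_pretrained`] (the `facebook/bart-large` checkpoint is just one example):
```python
>>> from transformers import BartConfig, BartForConditionalGeneration

>>> # initializing from a configuration creates the architecture with random weights
>>> configuration = BartConfig()
>>> model = BartForConditionalGeneration(configuration)

>>> # from_pretrained downloads both the configuration and the trained weights
>>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
```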
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforsequenceclassification | .md | Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE
tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 395_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforsequenceclassification | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BartConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 395_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforsequenceclassification | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 395_12_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforquestionanswering | .md | BART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 395_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforquestionanswering | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BartConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not | 395_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforquestionanswering | .md | Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 395_13_2 |
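The span logits described above turn into an answer by taking the argmax of the start and end logits; a hedged sketch (the `valhalla/bart-large-finetuned-squadv1` checkpoint name is an assumption — substitute any BART checkpoint fine-tuned for extractive QA):
```python
>>> import torch
>>> from transformers import AutoTokenizer, BartForQuestionAnswering

>>> # checkpoint name is an assumption; any BART QA checkpoint works
>>> name = "valhalla/bart-large-finetuned-squadv1"
>>> tokenizer = AutoTokenizer.from_pretrained(name)
>>> model = BartForQuestionAnswering.from_pretrained(name)

>>> question = "What does BART combine?"
>>> context = "BART combines a bidirectional encoder with an autoregressive decoder."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # the predicted span runs from the argmax of the start logits to the argmax of the end logits
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```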
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforcausallm | .md | BART decoder with a language modeling head on top (linear layer with weights tied to the input embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 395_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforcausallm | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BartConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 395_14_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#bartforcausallm | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf> | 395_14_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#tfbartmodel | .md | No docstring available for TFBartModel
Methods: call | 395_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#tfbartforconditionalgeneration | .md | No docstring available for TFBartForConditionalGeneration
Methods: call | 395_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#tfbartforsequenceclassification | .md | No docstring available for TFBartForSequenceClassification
Methods: call
</tf>
<jax> | 395_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#flaxbartmodel | .md | No docstring available for FlaxBartModel
Methods: __call__
- encode
- decode | 395_18_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#flaxbartforconditionalgeneration | .md | No docstring available for FlaxBartForConditionalGeneration
Methods: __call__
- encode
- decode | 395_19_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#flaxbartforsequenceclassification | .md | No docstring available for FlaxBartForSequenceClassification
Methods: __call__
- encode
- decode | 395_20_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#flaxbartforquestionanswering | .md | No docstring available for FlaxBartForQuestionAnswering
Methods: __call__
- encode
- decode | 395_21_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bart.md | https://huggingface.co/docs/transformers/en/model_doc/bart/#flaxbartforcausallm | .md | No docstring available for FlaxBartForCausalLM
Methods: __call__
</jax>
</frameworkcontent> | 395_22_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 396_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 396_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#overview | .md | The DAC model was proposed in [Descript Audio Codec: High-Fidelity Audio Compression with Improved RVQGAN](https://arxiv.org/abs/2306.06546) by Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar, Kundan Kumar. | 396_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#overview | .md | The Descript Audio Codec (DAC) model is a powerful tool for compressing audio data, making it highly efficient for storage and transmission. By compressing 44.1 kHz audio into tokens at just 8 kbps bandwidth, the DAC model enables high-quality audio processing while significantly reducing the data footprint. This is particularly useful in scenarios where bandwidth is limited or storage space is at a premium, such as in streaming applications, remote conferencing, and archiving large audio datasets. | 396_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#overview | .md | The abstract from the paper is the following: | 396_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#overview | .md | *Language models have been successfully used to model natural signals, such as images, speech, and music. A key component of these models is a high quality neural compression model that can compress high-dimensional natural signals into lower dimensional discrete tokens. To that end, we introduce a high-fidelity universal neural audio compression algorithm that achieves ~90x compression of 44.1 KHz audio into tokens at just 8kbps bandwidth. We achieve this by combining advances in high-fidelity audio | 396_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#overview | .md | compression of 44.1 KHz audio into tokens at just 8kbps bandwidth. We achieve this by combining advances in high-fidelity audio generation with better vector quantization techniques from the image domain, along with improved adversarial and reconstruction losses. We compress all domains (speech, environment, music, etc.) with a single universal model, making it widely applicable to generative modeling of all audio. We compare with competing audio compression algorithms, and find our method outperforms them | 396_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#overview | .md | generative modeling of all audio. We compare with competing audio compression algorithms, and find our method outperforms them significantly. We provide thorough ablations for every design choice, as well as open-source code and trained model weights. We hope our work can lay the foundation for the next generation of high-fidelity audio modeling.* | 396_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#overview | .md | This model was contributed by [Kamil Akesbi](https://huggingface.co/kamilakesbi).
The original code can be found [here](https://github.com/descriptinc/descript-audio-codec/tree/main?tab=readme-ov-file). | 396_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#model-structure | .md | The Descript Audio Codec (DAC) model is structured into three distinct stages:
1. Encoder Model: This stage compresses the input audio, reducing its size while retaining essential information.
2. Residual Vector Quantizer (RVQ) Model: Working in tandem with the encoder, this model quantizes the latent codes of the audio, refining the compression and ensuring high-quality reconstruction. | 396_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#model-structure | .md | 3. Decoder Model: This final stage reconstructs the audio from its compressed form, restoring it to a state that closely resembles the original input. | 396_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#usage-example | .md | Here is a quick example of how to encode and decode an audio sample using this model:
```python
>>> from datasets import load_dataset, Audio
>>> from transformers import DacModel, AutoProcessor
>>> librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") | 396_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#usage-example | .md | >>> model = DacModel.from_pretrained("descript/dac_16khz")
>>> processor = AutoProcessor.from_pretrained("descript/dac_16khz")
>>> librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
>>> audio_sample = librispeech_dummy[-1]["audio"]["array"]
>>> inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt") | 396_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#usage-example | .md | >>> encoder_outputs = model.encode(inputs["input_values"])
>>> # Get the intermediate audio codes
>>> audio_codes = encoder_outputs.audio_codes
>>> # Reconstruct the audio from its quantized representation
>>> audio_values = model.decode(encoder_outputs.quantized_representation)
>>> # or the equivalent with a forward pass
>>> audio_values = model(inputs["input_values"]).audio_values
``` | 396_3_2 |
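Continuing from the example above, the decoder can also reconstruct audio straight from the discrete codes; a minimal sketch, assuming (as in the current [`DacModel.decode`] signature) that the codes can be passed via the `audio_codes` keyword:
```python
>>> # decode from the discrete codes instead of the quantized representation
>>> audio_values = model.decode(audio_codes=encoder_outputs.audio_codes)

>>> # rough compression ratio: raw waveform samples vs. number of discrete codes
>>> ratio = inputs["input_values"].numel() / encoder_outputs.audio_codes.numel()
```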
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacconfig | .md | This is the configuration class to store the configuration of a [`DacModel`]. It is used to instantiate a
DAC model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the
[descript/dac_16khz](https://huggingface.co/descript/dac_16khz) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 396_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
encoder_hidden_size (`int`, *optional*, defaults to 64):
Intermediate representation dimension for the encoder.
downsampling_ratios (`List[int]`, *optional*, defaults to `[2, 4, 8, 8]`):
Ratios for downsampling in the encoder. These are used in reverse order for upsampling in the decoder. | 396_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacconfig | .md | Ratios for downsampling in the encoder. These are used in reverse order for upsampling in the decoder.
decoder_hidden_size (`int`, *optional*, defaults to 1536):
Intermediate representation dimension for the decoder.
n_codebooks (`int`, *optional*, defaults to 9):
Number of codebooks in the VQVAE.
codebook_size (`int`, *optional*, defaults to 1024):
Number of discrete codes in each codebook.
codebook_dim (`int`, *optional*, defaults to 8): | 396_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacconfig | .md | Number of discrete codes in each codebook.
codebook_dim (`int`, *optional*, defaults to 8):
Dimension of the codebook vectors. If not defined, uses `encoder_hidden_size`.
quantizer_dropout (`float`, *optional*, defaults to 0):
Amount of dropout to apply to the quantizer.
commitment_loss_weight (`float`, *optional*, defaults to 0.25):
Weight of the commitment loss term in the VQVAE loss function.
codebook_loss_weight (`float`, *optional*, defaults to 1.0):
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacconfig | .md | Weight of the commitment loss term in the VQVAE loss function.
codebook_loss_weight (`float`, *optional*, defaults to 1.0):
Weight of the codebook loss term in the VQVAE loss function.
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).
Example:
```python
>>> from transformers import DacModel, DacConfig | 396_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacconfig | .md | >>> # Initializing a "descript/dac_16khz" style configuration
>>> configuration = DacConfig()
>>> # Initializing a model (with random weights) from the "descript/dac_16khz" style configuration
>>> model = DacModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 396_4_5 |
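Any of the arguments above can be overridden to define a custom codec variant; a minimal sketch with illustrative values (these do not correspond to a released checkpoint):
```python
>>> from transformers import DacConfig, DacModel

>>> # illustrative values, not those of a released checkpoint
>>> configuration = DacConfig(n_codebooks=4, codebook_size=512, sampling_rate=16000)
>>> model = DacModel(configuration)  # randomly initialized with this architecture
```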
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacfeatureextractor | .md | Constructs a DAC feature extractor.
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
Args:
feature_size (`int`, *optional*, defaults to 1):
The feature dimension of the extracted features. Use 1 for mono, 2 for stereo.
sampling_rate (`int`, *optional*, defaults to 16000): | 396_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacfeatureextractor | .md | sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).
padding_value (`float`, *optional*, defaults to 0.0):
The value that is used for padding.
hop_length (`int`, *optional*, defaults to 512):
Hop length, i.e. the number of samples between successive windows.
Methods: __call__ | 396_5_1 |
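A minimal usage sketch, mirroring the DAC usage example above (`audio_sample` is a mono waveform array at the extractor's sampling rate):
```python
>>> from transformers import DacFeatureExtractor

>>> feature_extractor = DacFeatureExtractor.from_pretrained("descript/dac_16khz")
>>> # pads the raw waveform and returns model-ready tensors
>>> inputs = feature_extractor(
...     raw_audio=audio_sample, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt"
... )
>>> input_values = inputs["input_values"]  # padded waveform ready for DacModel
```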
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacmodel | .md | The DAC (Descript Audio Codec) model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior. | 396_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dac.md | https://huggingface.co/docs/transformers/en/model_doc/dac/#dacmodel | .md | and behavior.
Parameters:
config ([`DacConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: decode
- encode
- forward | 396_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 397_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 397_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapex | .md | <Tip warning={true}>
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.
</Tip> | 397_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#overview | .md | The TAPEX model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu,
Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after
which it can be fine-tuned to answer natural language questions related to tabular data, as well as to perform table fact checking.
TAPEX has been fine-tuned on several datasets: | 397_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#overview | .md | TAPEX has been fine-tuned on several datasets:
- [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) (Sequential Question Answering by Microsoft)
- [WTQ](https://github.com/ppasupat/WikiTableQuestions) (Wiki Table Questions by Stanford University)
- [WikiSQL](https://github.com/salesforce/WikiSQL) (by Salesforce)
- [TabFact](https://tabfact.github.io/) (by UCSB NLP Lab).
The abstract from the paper is the following: | 397_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#overview | .md | - [TabFact](https://tabfact.github.io/) (by UCSB NLP Lab).
The abstract from the paper is the following:
*Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is
still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we | 397_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#overview | .md | propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically
synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL
executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that | 397_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#overview | .md | TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements
on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy
to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs | 397_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#overview | .md | and to achieve new state-of-the-art results on various downstream tasks.* | 397_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-tips | .md | - TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model.
- TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact.
- Sentences + tables are presented to the model as `sentence + " " + linearized table`. The linearized table has the following format:
`col: col1 | col2 | col3 row 1 : val1 | val2 | val3 row 2 : ...` (a sketch of this linearization appears below). | 397_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-tips | .md | `col: col1 | col2 | col3 row 1 : val1 | val2 | val3 row 2 : ...`.
- TAPEX has its own tokenizer, which makes it easy to prepare all the data for the model. One can pass pandas DataFrames and strings to the tokenizer,
and it will automatically create the `input_ids` and `attention_mask` (as shown in the usage examples below). | 397_3_1 |
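The tokenizer performs this linearization internally, so you rarely need it yourself; still, as a sketch of the format described above (the `linearize_table` helper is hypothetical, not part of the library):
```python
>>> import pandas as pd

>>> def linearize_table(table: pd.DataFrame) -> str:
...     # hypothetical helper reproducing the "col: ... row 1 : ..." format above
...     header = "col: " + " | ".join(table.columns)
...     rows = [
...         f"row {i + 1} : " + " | ".join(str(v) for v in row)
...         for i, row in enumerate(table.values)
...     ]
...     return " ".join([header] + rows)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Number of movies": ["87", "53"]}
>>> linearize_table(pd.DataFrame.from_dict(data))
'col: Actors | Number of movies row 1 : Brad Pitt | 87 row 2 : Leonardo Di Caprio | 53'
```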
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | Below, we illustrate how to use TAPEX for table question answering. As one can see, one can directly plug in the weights of TAPEX into a BART model.
We use the [Auto API](auto), which will automatically instantiate the appropriate tokenizer ([`TapexTokenizer`]) and model ([`BartForConditionalGeneration`]) for us,
based on the configuration file of the checkpoint on the hub.
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> import pandas as pd | 397_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq")
>>> # prepare table + question
>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> table = pd.DataFrame.from_dict(data)
>>> question = "how many movies does Leonardo Di Caprio have?"
>>> encoding = tokenizer(table, question, return_tensors="pt") | 397_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | >>> encoding = tokenizer(table, question, return_tensors="pt")
>>> # let the model generate an answer autoregressively
>>> outputs = model.generate(**encoding) | 397_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | >>> # decode back to text
>>> predicted_answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
>>> print(predicted_answer)
53
```
Note that [`TapexTokenizer`] also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table
and multiple questions, or a batch of a single query and multiple tables. Let's illustrate this:
```python
>>> # prepare table + question | 397_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | ```python
>>> # prepare table + question
>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> table = pd.DataFrame.from_dict(data)
>>> questions = [
... "how many movies does Leonardo Di Caprio have?",
... "which actor has 69 movies?",
... "what's the first name of the actor who has 87 movies?",
... ]
>>> encoding = tokenizer(table, questions, padding=True, return_tensors="pt") | 397_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | >>> # let the model generate an answer autoregressively
>>> outputs = model.generate(**encoding) | 397_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | >>> # decode back to text
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
[' 53', ' george clooney', ' brad pitt']
```
In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents
of a table), one can instantiate a [`BartForSequenceClassification`] model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important | 397_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | benchmark for table fact checking (it achieves 84% accuracy). The code example below again leverages the [Auto API](auto).
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification | 397_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
>>> model = AutoModelForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
>>> # prepare table + sentence
>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> table = pd.DataFrame.from_dict(data)
>>> sentence = "George Clooney has 30 movies"
>>> encoding = tokenizer(table, sentence, return_tensors="pt") | 397_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#usage-inference | .md | >>> encoding = tokenizer(table, sentence, return_tensors="pt")
>>> # forward pass
>>> outputs = model(**encoding)
>>> # print prediction
>>> predicted_class_idx = outputs.logits[0].argmax(dim=0).item()
>>> print(model.config.id2label[predicted_class_idx])
Refused
```
<Tip>
The TAPEX architecture is the same as BART's, except for tokenization. Refer to the [BART documentation](bart) for information on
configuration classes and their parameters. The TAPEX-specific tokenizer is documented below.
</Tip> | 397_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | Construct a TAPEX tokenizer. Based on byte-level Byte-Pair-Encoding (BPE).
This tokenizer can be used to flatten one or more table(s) and concatenate them with one or more related sentences
to be used by TAPEX models. The format that the TAPEX tokenizer creates is the following:
sentence col: col1 | col2 | col3 row 1 : val1 | val2 | val3 row 2 : ...
The tokenizer supports a single table + single query, a single table and multiple queries (in which case the table | 397_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | The tokenizer supports a single table + single query, a single table and multiple queries (in which case the table
will be duplicated for every query), a single query and multiple tables (in which case the query will be duplicated
for every table), and multiple tables and queries. In other words, you can provide a batch of tables + questions to
the tokenizer, for instance, to prepare them for the model. | 397_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | the tokenizer, for instance, to prepare them for the model.
Tokenization itself is based on the BPE algorithm. It is identical to the one used by BART, RoBERTa and GPT-2.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
do_lower_case (`bool`, *optional*, defaults to `True`): | 397_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | merges_file (`str`):
Path to the merges file.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`): | 397_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip> | 397_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | </Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for | 397_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence | 397_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`): | 397_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`): | 397_5_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The BART tokenizer detects the beginning of words by the preceding space.)
max_cell_length (`int`, *optional*, defaults to 15):
Maximum number of characters per cell when linearizing a table. If this number is exceeded, truncation
takes place.
Methods: __call__ | 397_5_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapex.md | https://huggingface.co/docs/transformers/en/model_doc/tapex/#tapextokenizer | .md | takes place.
Methods: __call__
- save_vocabulary | 397_5_10 |
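A minimal usage sketch of the tokenizer, mirroring the inference examples above (the `microsoft/tapex-base` checkpoint hosts this tokenizer):
```python
>>> import pandas as pd
>>> from transformers import TapexTokenizer

>>> tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Number of movies": ["87", "53"]}
>>> table = pd.DataFrame.from_dict(data)

>>> # the tokenizer flattens the table and concatenates it with the query;
>>> # cells longer than max_cell_length characters are truncated
>>> encoding = tokenizer(table, "how many movies does Brad Pitt have?", return_tensors="pt")
>>> sorted(encoding.keys())
['attention_mask', 'input_ids']
```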
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 398_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 398_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformer | .md | <Tip warning={true}>
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip> | 398_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#overview | .md | The EfficientFormer model was proposed in [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191)
by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a
dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object
detection and semantic segmentation.
The abstract from the paper is the following: | 398_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#overview | .md | detection and semantic segmentation.
The abstract from the paper is the following:
*Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks.
However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally
times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly | 398_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#overview | .md | challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation
complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still
unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance?
To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. | 398_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#overview | .md | Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm.
Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer.
Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices.
Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on | 398_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#overview | .md | Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on
iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model,
EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can
reach extremely low latency on mobile devices while maintaining high performance.* | 398_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#overview | .md | reach extremely low latency on mobile devices while maintaining high performance.*
This model was contributed by [novice03](https://huggingface.co/novice03) and [Bearnardd](https://huggingface.co/Bearnardd).
The original code can be found [here](https://github.com/snap-research/EfficientFormer). The TensorFlow version of this model was added by [D-Roberts](https://huggingface.co/D-Roberts). | 398_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#documentation-resources | .md | - [Image classification task guide](../tasks/image_classification) | 398_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | This is the configuration class to store the configuration of an [`EfficientFormerModel`]. It is used to
instantiate an EfficientFormer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the EfficientFormer
[snap-research/efficientformer-l1](https://huggingface.co/snap-research/efficientformer-l1) architecture. | 398_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | [snap-research/efficientformer-l1](https://huggingface.co/snap-research/efficientformer-l1) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
depths (`List(int)`, *optional*, defaults to `[3, 2, 6, 4]`):
Depth of each stage.
hidden_sizes (`List(int)`, *optional*, defaults to `[48, 96, 224, 448]`):
Dimensionality of each stage. | 398_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | Depth of each stage.
hidden_sizes (`List(int)`, *optional*, defaults to `[48, 96, 224, 448]`):
Dimensionality of each stage.
downsamples (`List(bool)`, *optional*, defaults to `[True, True, True, True]`):
Whether or not to downsample inputs between two stages.
dim (`int`, *optional*, defaults to 448):
Number of channels in Meta3D layers.
key_dim (`int`, *optional*, defaults to 32):
The size of the key in meta3D block.
attention_ratio (`int`, *optional*, defaults to 4): | 398_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | The size of the key in meta3D block.
attention_ratio (`int`, *optional*, defaults to 4):
Ratio of the dimension of the query and value to the dimension of the key in the MHSA block.
resolution (`int`, *optional*, defaults to 7):
Size of each patch.
num_hidden_layers (`int`, *optional*, defaults to 5):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the 3D MetaBlock. | 398_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the 3D MetaBlock.
mlp_expansion_ratio (`int`, *optional*, defaults to 4):
Ratio of size of the hidden dimensionality of an MLP to the dimensionality of its input.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings and encoder.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch. | 398_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
pool_size (`int`, *optional*, defaults to 3):
Kernel size of pooling layers.
downsample_patch_size (`int`, *optional*, defaults to 3):
The size of patches in downsampling layers.
downsample_stride (`int`, *optional*, defaults to 2):
The stride of convolution kernels in downsampling layers.
downsample_pad (`int`, *optional*, defaults to 1): | 398_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | The stride of convolution kernels in downsampling layers.
downsample_pad (`int`, *optional*, defaults to 1):
Padding in downsampling layers.
drop_path_rate (`float`, *optional*, defaults to 0):
Rate at which to increase dropout probability in DropPath.
num_meta3d_blocks (`int`, *optional*, defaults to 1):
The number of 3D MetaBlocks in the last stage.
distillation (`bool`, *optional*, defaults to `True`):
Whether to add a distillation head.
use_layer_scale (`bool`, *optional*, defaults to `True`): | 398_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | Whether to add a distillation head.
use_layer_scale (`bool`, *optional*, defaults to `True`):
Whether to scale outputs from token mixers.
layer_scale_init_value (`float`, *optional*, defaults to 1e-5):
Factor by which outputs from token mixers are scaled.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported. | 398_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | `"relu"`, `"selu"` and `"gelu_new"` are supported.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
image_size (`int`, *optional*, defaults to `224`):
The size (resolution) of each image.
Example:
```python
>>> from transformers import EfficientFormerConfig, EfficientFormerModel | 398_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerconfig | .md | >>> # Initializing an EfficientFormer efficientformer-l1 style configuration
>>> configuration = EfficientFormerConfig()
>>> # Initializing an EfficientFormerModel (with random weights) from the efficientformer-l1 style configuration
>>> model = EfficientFormerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 398_4_9 |
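As with other configurations, the arguments above can be overridden to define a custom variant; a minimal sketch with illustrative values (not a released checkpoint):
```python
>>> from transformers import EfficientFormerConfig, EfficientFormerModel

>>> # illustrative values, not those of a released checkpoint
>>> configuration = EfficientFormerConfig(
...     depths=[2, 2, 4, 3],
...     hidden_sizes=[32, 64, 160, 320],
...     num_meta3d_blocks=1,
... )
>>> model = EfficientFormerModel(configuration)  # randomly initialized
```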
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerimageprocessor | .md | Constructs an EfficientFormer image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `(size["height"],
size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method.
size (`dict`, *optional*, defaults to `{"height": 224, "width": 224}`):
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess`
method. | 398_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerimageprocessor | .md | Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the
`preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
`preprocess` method. | 398_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerimageprocessor | .md | `preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to 224):
Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): | 398_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerimageprocessor | .md | parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): | 398_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerimageprocessor | .md | method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the | 398_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md | https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerimageprocessor | .md | Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
Methods: preprocess
<frameworkcontent>
<pt> | 398_5_5 |
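A minimal usage sketch of the image processor with its default settings (the dummy NumPy image stands in for any PIL image or array):
```python
>>> import numpy as np
>>> from transformers import EfficientFormerImageProcessor

>>> image_processor = EfficientFormerImageProcessor()
>>> # a dummy 256x256 RGB image standing in for real data
>>> image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> # with the defaults above, pixel_values is resized and center-cropped to (1, 3, 224, 224)
```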