source: stringclasses (470 values)
url: stringlengths (49 to 167)
file_type: stringclasses (1 value)
chunk: stringlengths (1 to 512)
chunk_id: stringlengths (5 to 9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#overview
.md
The Splinter model was proposed in [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus comprising Wikipedia and the Toronto Book Corpus. The abstract from the paper is the following:
305_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#overview
.md
comprising Wikipedia and the Toronto Book Corpus. The abstract from the paper is the following: In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between
305_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#overview
.md
training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans
305_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#overview
.md
recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while maintaining competitive performance in the high-resource setting.
305_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#overview
.md
with only 128 training examples), while maintaining competitive performance in the high-resource setting. This model was contributed by [yuvalkirstain](https://huggingface.co/yuvalkirstain) and [oriram](https://huggingface.co/oriram). The original code can be found [here](https://github.com/oriram/splinter).
305_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#usage-tips
.md
- Splinter was trained to predict answer spans conditioned on a special [QUESTION] token. These tokens contextualize to question representations which are used to predict the answers. This layer is called QASS, and is the default behavior in the [`SplinterForQuestionAnswering`] class. Therefore: - Use [`SplinterTokenizer`] (rather than [`BertTokenizer`]), as it already contains this special token. Also, its default behavior is to use this token when two sequences are given (for
305_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#usage-tips
.md
contains this special token. Also, its default behavior is to use this token when two sequences are given (for example, in the *run_qa.py* script). - If you plan on using Splinter outside *run_qa.py*, please keep in mind the question token - it might be important for the success of your model, especially in a few-shot setting. - Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that
305_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#usage-tips
.md
- Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that one also has the pretrained weights of the QASS layer (*tau/splinter-base-qass* and *tau/splinter-large-qass*) and one doesn't (*tau/splinter-base* and *tau/splinter-large*). This is done to support randomly initializing this layer at fine-tuning, as it is shown to yield better results for some cases in the paper.
305_2_2
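Building on the tips above, here is a minimal extractive question-answering sketch. It loads the *tau/splinter-base-qass* checkpoint mentioned earlier (so the QASS head comes pretrained) and greedily picks the most likely start/end positions; the question and context strings are placeholders.

```python
import torch
from transformers import SplinterForQuestionAnswering, SplinterTokenizer

# Assumed checkpoint: the QASS variant, so the span-selection head comes pretrained.
checkpoint = "tau/splinter-base-qass"
tokenizer = SplinterTokenizer.from_pretrained(checkpoint)
model = SplinterForQuestionAnswering.from_pretrained(checkpoint)

question = "Who wrote the book?"
context = "The book was written by Jane Austen and published in 1813."

# Passing two sequences makes the tokenizer insert the special [QUESTION] token.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Greedy decoding: pick the most likely start and end positions of the answer span.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```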
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#resources
.md
- [Question answering task guide](../tasks/question-answering)
305_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
This is the configuration class to store the configuration of a [`SplinterModel`]. It is used to instantiate a Splinter model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Splinter [tau/splinter-base](https://huggingface.co/tau/splinter-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
305_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the Splinter model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`SplinterModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer.
305_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
305_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
305_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2):
305_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`SplinterModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. use_cache (`bool`, *optional*, defaults to `True`):
305_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
The epsilon used by the layer normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. question_token_id (`int`, *optional*, defaults to 104): The id of the `[QUESTION]` token. Example: ```python >>> from transformers import SplinterModel, SplinterConfig
305_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterconfig
.md
>>> # Initializing a Splinter tau/splinter-base style configuration
>>> configuration = SplinterConfig()

>>> # Initializing a model from the tau/splinter-base style configuration
>>> model = SplinterModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
305_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizer
.md
Construct a Splinter tokenizer. Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`):
305_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizer
.md
Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`): Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True`. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
305_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizer
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`):
305_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizer
.md
token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
305_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizer
.md
instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. question_token (`str`, *optional*, defaults to `"[QUESTION]"`): The token used for constructing question representations.
305_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizer
.md
question_token (`str`, *optional*, defaults to `"[QUESTION]"`): The token used for constructing question representations. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the
305_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizer
.md
Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
305_5_6
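A quick sketch of the point above: encoding a (question, context) pair and checking that the special `[QUESTION]` token was inserted by default. The checkpoint is the *tau/splinter-base* one mentioned in the usage tips; the example strings are arbitrary.

```python
from transformers import SplinterTokenizer

tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base")

# Encoding a (question, context) pair; the tokenizer adds the [QUESTION] token
# by default when two sequences are given.
encoding = tokenizer("Who founded the project?", "The project was founded by a small research team.")

tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])
print(tokens)
print("[QUESTION]" in tokens)  # expected: True
```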
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizerfast
.md
Construct a "fast" Splinter tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece. This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing.
305_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizerfast
.md
do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
305_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizerfast
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`):
305_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizerfast
.md
cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
305_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizerfast
.md
modeling. This is the token which the model will try to predict. question_token (`str`, *optional*, defaults to `"[QUESTION]"`): The token used for constructing question representations. clean_text (`bool`, *optional*, defaults to `True`): Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
305_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizerfast
.md
whitespaces by the classic one. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT).
305_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintertokenizerfast
.md
value for `lowercase` (as in the original BERT). wordpieces_prefix (`str`, *optional*, defaults to `"##"`): The prefix for subwords.
305_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintermodel
.md
The bare Splinter Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SplinterConfig`]): Model configuration class with all the parameters of the model.
305_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintermodel
.md
behavior. Parameters: config ([`SplinterConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model is an encoder (with only self-attention) following the architecture described in [Attention is all you
305_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splintermodel
.md
The model is an encoder (with only self-attention) following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. Methods: forward
305_7_2
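A minimal sketch of running the bare encoder to obtain contextualized hidden states, assuming the *tau/splinter-base* checkpoint:

```python
import torch
from transformers import SplinterModel, SplinterTokenizer

tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base")
model = SplinterModel.from_pretrained("tau/splinter-base")

inputs = tokenizer("Splinter is an encoder-only transformer pretrained for span selection.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Raw hidden states with shape (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```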
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterforquestionanswering
.md
Splinter Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
305_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterforquestionanswering
.md
behavior. Parameters: config ([`SplinterConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
305_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterforpretraining
.md
Splinter Model for the recurring span selection task, as done during pretraining. The difference from the QA task is that there is no question; instead, multiple question tokens replace the occurrences of the recurring spans. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
305_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/splinter.md
https://huggingface.co/docs/transformers/en/model_doc/splinter/#splinterforpretraining
.md
behavior. Parameters: config ([`SplinterConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
305_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
306_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
306_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#overview
.md
SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: *This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
306_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#overview
.md
*This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
306_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#overview
.md
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.* This model was contributed by [anton-l](https://huggingface.co/anton-l).
306_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#usage-tips
.md
- SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - [`SEWDForCTC`] is fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`] (see the sketch below).
306_2_0
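The sketch below illustrates the decoding flow from the tips: feature extraction from the raw waveform, a forward pass through the CTC head, and greedy decoding with the tokenizer. The *asapp/sew-d-tiny-100k-ft-ls100h* checkpoint and the small LibriSpeech demo dataset are assumptions chosen for illustration.

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWDForCTC

# Assumed checkpoint fine-tuned for CTC on LibriSpeech.
checkpoint = "asapp/sew-d-tiny-100k-ft-ls100h"
processor = AutoProcessor.from_pretrained(checkpoint)
model = SEWDForCTC.from_pretrained(checkpoint)

# A single 16 kHz example from a small ASR demo set.
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: greedy argmax over the vocabulary, then collapse with the tokenizer.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```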
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#resources
.md
- [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr)
306_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
This is the configuration class to store the configuration of a [`SEWDModel`]. It is used to instantiate a SEW-D model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SEW-D [asapp/sew-d-tiny-100k](https://huggingface.co/asapp/sew-d-tiny-100k) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
306_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32): Vocabulary size of the SEW-D model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`SEWDModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer.
306_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
306_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. squeeze_factor (`int`, *optional*, defaults to 2): Sequence length downsampling factor after the encoder and upsampling factor after the transformer. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). position_buckets (`int`, *optional*, defaults to 256):
306_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
just in case (e.g., 512 or 1024 or 2048). position_buckets (`int`, *optional*, defaults to 256): The maximum size of relative position embeddings. share_att_key (`bool`, *optional*, defaults to `True`): Whether to share attention key with c2p and p2c. relative_attention (`bool`, *optional*, defaults to `True`): Whether to use relative position encoding. pos_att_type (`Tuple[str]`, *optional*, defaults to `("p2c", "c2p")`):
306_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
Whether to use relative position encoding. pos_att_type (`Tuple[str]`, *optional*, defaults to `("p2c", "c2p")`): The type of relative position attention; it can be any combination of `"p2c"` and `"c2p"`, e.g. `("p2c",)` or `("p2c", "c2p")`. norm_rel_ebd (`str`, *optional*, defaults to `"layer_norm"`): Whether to use layer norm in the relative embedding (`"layer_norm"` if yes). hidden_act (`str` or `function`, *optional*, defaults to `"gelu_python"`):
306_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_python"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_python"` and `"gelu_new"` are supported. hidden_dropout (`float`, *optional*, defaults to 0.1): Deprecated. Not used by the model and will be removed in a future version. activation_dropout (`float`, *optional*, defaults to 0.1):
306_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
activation_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. final_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the final projection layer of [`SEWDForCTC`]. initializer_range (`float`, *optional*, defaults to 0.02):
306_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-7): The epsilon used by the layer normalization layers in the transformer encoder. feature_layer_norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon used by the layer normalization after the feature encoder. feat_extract_norm (`str`, *optional*, defaults to `"group"`):
306_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
feat_extract_norm (`str`, *optional*, defaults to `"group"`): The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers. feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for the output of the feature encoder. feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
306_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
The dropout probability for the output of the feature encoder. feat_extract_activation (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)`):
306_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers. conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)`): A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
306_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)`): A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
306_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_bias (`bool`, *optional*, defaults to `False`): Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (`int`, *optional*, defaults to 128): Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.
306_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16): Number of groups of 1D convolutional positional embeddings layer. apply_spec_augment (`bool`, *optional*, defaults to `True`): Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech
306_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
[SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). mask_time_prob (`float`, *optional*, defaults to 0.05): Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob * len(time_axis) / mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
306_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start * mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`. mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2):
306_4_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2): The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0): Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
306_4_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob * len(feature_axis) / mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start * mask_feature_length`. Note that overlap
306_4_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
span to be masked, *mask_feature_prob* should be `prob_vector_start * mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`. mask_feature_length (`int`, *optional*, defaults to 10): Length of vector span along the feature axis. mask_feature_min_masks (`int`, *optional*, defaults to 0): The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
306_4_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`. Only relevant if `mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks`. diversity_loss_weight (`float`, *optional*, defaults to 0.1): The weight of the codebook diversity loss component. ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
306_4_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
The weight of the codebook diversity loss component. ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`): Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [`SEWDForCTC`]. ctc_zero_infinity (`bool`, *optional*, defaults to `False`): Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
306_4_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [`SEWDForCTC`]. use_weighted_layer_sum (`bool`, *optional*, defaults to `False`): Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [`SEWDForSequenceClassification`]. classifier_proj_size (`int`, *optional*, defaults to 256):
306_4_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
instance of [`SEWDForSequenceClassification`]. classifier_proj_size (`int`, *optional*, defaults to 256): Dimensionality of the projection before token mean-pooling for classification. Example: ```python >>> from transformers import SEWDConfig, SEWDModel
306_4_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdconfig
.md
>>> # Initializing a SEW-D asapp/sew-d-tiny-100k style configuration
>>> configuration = SEWDConfig()

>>> # Initializing a model (with random weights) from the asapp/sew-d-tiny-100k style configuration
>>> model = SEWDModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
306_4_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdmodel
.md
The bare SEW-D Model transformer outputting raw hidden-states without any specific head on top. SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
306_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdmodel
.md
library implements for all its models (such as downloading or saving). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SEWDConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
306_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
306_5_2
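A minimal sketch of extracting hidden states with the bare model. The *asapp/sew-d-tiny-100k-ft-ls100h* checkpoint and the demo dataset are again assumptions; only the feature-extractor part of the processor is used here.

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWDModel

# Assumed checkpoint; its processor bundles a Wav2Vec2-style feature extractor.
checkpoint = "asapp/sew-d-tiny-100k-ft-ls100h"
processor = AutoProcessor.from_pretrained(checkpoint)
model = SEWDModel.from_pretrained(checkpoint)

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Hidden states are downsampled in time relative to the raw waveform.
print(outputs.last_hidden_state.shape)  # (batch_size, frames, hidden_size)
```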
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdforctc
.md
SEW-D Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
306_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdforctc
.md
library implements for all its models (such as downloading or saving). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SEWDConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
306_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdforctc
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
306_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdforsequenceclassification
.md
SEWD Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
306_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdforsequenceclassification
.md
Yoav Artzi. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
306_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew-d.md
https://huggingface.co/docs/transformers/en/model_doc/sew-d/#sewdforsequenceclassification
.md
behavior. Parameters: config ([`SEWDConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
306_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
307_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
307_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
The TAPAS model was proposed in [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://www.aclweb.org/anthology/2020.acl-main.398) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. It's a BERT-based model specifically designed (and pre-trained) for answering questions about tabular data. Compared to BERT, TAPAS uses relative position embeddings and has 7
307_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
token types that encode tabular structure. TAPAS is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising millions of tables from English Wikipedia and corresponding texts. For question answering, TAPAS has 2 heads on top: a cell selection head and an aggregation head, for (optionally) performing aggregations (such as counting or summing) among selected cells. TAPAS has been fine-tuned on several datasets:
307_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
- [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) (Sequential Question Answering by Microsoft) - [WTQ](https://github.com/ppasupat/WikiTableQuestions) (Wiki Table Questions by Stanford University) - [WikiSQL](https://github.com/salesforce/WikiSQL) (by Salesforce). It achieves state-of-the-art on both SQA and WTQ, while having comparable performance to SOTA on WikiSQL, with a much simpler architecture. The abstract from the paper is the following:
307_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
*Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question
307_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from
307_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is
307_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.*
307_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
In addition, the authors have further pre-trained TAPAS to recognize **table entailment**, by creating a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. The authors of TAPAS call this further pre-training intermediate pre-training (since TAPAS is first pre-trained on MLM, and then on another dataset). They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well
307_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well as state-of-the-art on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking), a large-scale dataset with 16k Wikipedia tables for table entailment (a binary classification task). For more details, see their follow-up paper: [Understanding tables with intermediate pre-training](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) by Julian Martin Eisenschlos, Syrine Krichene and
307_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
pre-training](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) by Julian Martin Eisenschlos, Syrine Krichene and Thomas Müller.
307_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/tapas_architecture.png" alt="drawing" width="600"/> <small> TAPAS architecture. Taken from the <a href="https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html">original blog post</a>.</small>
307_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#overview
.md
This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/tapas).
307_1_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#usage-tips
.md
- TAPAS is a model that uses relative position embeddings by default (restarting the position embeddings at every cell of the table). Note that this is something that was added after the publication of the original TAPAS paper. According to the authors, this usually results in a slightly better performance, and allows you to encode longer sequences without running out of embeddings. This is reflected in the `reset_position_index_per_cell` parameter of [`TapasConfig`], which is set to `True` by default. The
307_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#usage-tips
.md
This is reflected in the `reset_position_index_per_cell` parameter of [`TapasConfig`], which is set to `True` by default. The default versions of the models available on the [hub](https://huggingface.co/models?search=tapas) all use relative position embeddings. You can still use the ones with absolute position embeddings by passing in an additional argument `revision="no_reset"` when calling the `from_pretrained()` method. Note that it's usually advised to pad the inputs on the right rather than the left.
307_2_1
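For example, assuming the *google/tapas-base* checkpoint, switching between the two flavors only takes the `revision` argument:

```python
from transformers import TapasModel

# Default revision: relative position embeddings that reset at every table cell.
model = TapasModel.from_pretrained("google/tapas-base")

# "no_reset" revision: the variant with absolute position embeddings.
model_absolute = TapasModel.from_pretrained("google/tapas-base", revision="no_reset")
```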
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#usage-tips
.md
- TAPAS is based on BERT, so `TAPAS-base` for example corresponds to a `BERT-base` architecture. Of course, `TAPAS-large` will result in the best performance (the results reported in the paper are from `TAPAS-large`). Results of the various sized models are shown on the [original GitHub repository](https://github.com/google-research/tapas).
307_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#usage-tips
.md
- TAPAS has checkpoints fine-tuned on SQA, which are capable of answering questions related to a table in a conversational set-up. This means that you can ask follow-up questions such as "what is his age?" related to the previous question. Note that the forward pass of TAPAS is a bit different in case of a conversational set-up: in that case, you have to feed every table-question pair one by one to the model, such that the `prev_labels` token type ids can be overwritten by the predicted `labels` of the
307_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#usage-tips
.md
pair one by one to the model, such that the `prev_labels` token type ids can be overwritten by the predicted `labels` of the model to the previous question. See "Usage" section for more info.
307_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#usage-tips
.md
- TAPAS is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. Note that TAPAS can be used as an encoder in the EncoderDecoderModel framework, to combine it with an autoregressive text decoder such as GPT-2.
307_2_5
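To make the tips above concrete, here is a minimal inference sketch for table question answering. The *google/tapas-base-finetuned-wtq* checkpoint, the toy table, and the aggregation-index mapping are assumptions for illustration; note that [`TapasTokenizer`] expects a pandas DataFrame whose values are all strings.

```python
import pandas as pd
import torch
from transformers import TapasForQuestionAnswering, TapasTokenizer

# Assumed checkpoint fine-tuned on WTQ (cell selection plus aggregation head).
checkpoint = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(checkpoint)
model = TapasForQuestionAnswering.from_pretrained(checkpoint)

# TapasTokenizer expects a pandas DataFrame whose values are all strings.
table = pd.DataFrame(
    {"City": ["Paris", "Berlin", "Madrid"], "Inhabitants (millions)": ["2.1", "3.6", "3.3"]}
)
queries = ["How many inhabitants does Berlin have?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits into table cell coordinates and aggregation operator indices.
predicted_coordinates, predicted_aggregation = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
for coordinates in predicted_coordinates:
    print([table.iat[coordinate] for coordinate in coordinates])
print(predicted_aggregation)  # assumed mapping: 0 = NONE, 1 = SUM, 2 = AVERAGE, 3 = COUNT
```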
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tapas.md
https://huggingface.co/docs/transformers/en/model_doc/tapas/#usage-fine-tuning
.md
Here we explain how you can fine-tune [`TapasForQuestionAnswering`] on your own dataset. **STEP 1: Choose one of the 3 ways in which you can use TAPAS - or experiment** Basically, there are 3 different ways in which one can fine-tune [`TapasForQuestionAnswering`], corresponding to the different datasets on which Tapas was fine-tuned:
307_3_0