source (stringclasses, 470 values) | url (stringlengths 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths 1-512) | chunk_id (stringlengths 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizer
|
.md
|
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
[issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original SqueezeBERT).
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
|
173_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizer
|
.md
|
value for `lowercase` (as in the original SqueezeBERT).
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
173_5_6
|
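The special-token helpers listed in the chunk above can be exercised directly. A minimal sketch, assuming the public `squeezebert/squeezebert-uncased` checkpoint is available on the Hub:

```python
# Hedged sketch: assumes the public "squeezebert/squeezebert-uncased" checkpoint.
from transformers import SqueezeBertTokenizer

tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-uncased")

# Encoding a sentence pair adds the special tokens ([CLS], [SEP]) automatically.
encoded = tokenizer("Hello world", "How are you?")
print(encoded["input_ids"])
print(encoded["token_type_ids"])  # produced via create_token_type_ids_from_sequences

# The same helper can be called explicitly on raw token ids.
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))
```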
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizerfast
|
.md
|
Construct a "fast" SqueezeBERT tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
|
173_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizerfast
|
.md
|
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
|
173_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizerfast
|
.md
|
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
|
173_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizerfast
|
.md
|
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
173_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizerfast
|
.md
|
modeling. This is the token which the model will try to predict.
clean_text (`bool`, *optional*, defaults to `True`):
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespace characters with the classic one.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this
issue](https://github.com/huggingface/transformers/issues/328)).
|
173_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizerfast
|
.md
|
issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for `lowercase` (as in the original SqueezeBERT).
wordpieces_prefix (`str`, *optional*, defaults to `"##"`):
The prefix for subwords.
|
173_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertmodel
|
.md
|
The bare SqueezeBERT Model transformer outputting raw hidden-states without any specific head on top.
The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural
networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
173_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertmodel
|
.md
|
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
|
173_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertmodel
|
.md
|
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
*squeezebert/squeezebert-mnli-headless* checkpoint as a starting point.
Parameters:
config ([`SqueezeBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
|
173_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertmodel
|
.md
|
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
```
Data layouts:
```
Input data is in [batch, sequence_length, hidden_size] format.
|
173_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertmodel
|
.md
|
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
```
|
173_7_4
|
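The data-layout note above can be checked from the returned tensors. A minimal sketch, assuming PyTorch and the public `squeezebert/squeezebert-uncased` checkpoint:

```python
# Hedged sketch: verifies the [batch, sequence_length, hidden_size] output layout described above,
# assuming the public "squeezebert/squeezebert-uncased" checkpoint.
import torch
from transformers import SqueezeBertModel, SqueezeBertTokenizer

tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertModel.from_pretrained("squeezebert/squeezebert-uncased")

inputs = tokenizer("SqueezeBERT replaces linear layers with grouped convolutions.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

print(outputs.last_hidden_state.shape)  # [batch, sequence_length, hidden_size]
print(outputs.hidden_states[-1].shape)  # returned hidden states use the same layout
```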
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformaskedlm
|
.md
|
SqueezeBERT Model with a `language modeling` head on top.
The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural
networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
|
173_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformaskedlm
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
|
173_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformaskedlm
|
.md
|
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
*squeezebert/squeezebert-mnli-headless* checkpoint as a starting point.
Parameters:
config ([`SqueezeBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
|
173_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformaskedlm
|
.md
|
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
```
Data layouts:
```
Input data is in [batch, sequence_length, hidden_size] format.
|
173_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformaskedlm
|
.md
|
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
```
|
173_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforsequenceclassification
|
.md
|
SqueezeBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural
networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
173_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforsequenceclassification
|
.md
|
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
|
173_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforsequenceclassification
|
.md
|
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
*squeezebert/squeezebert-mnli-headless* checkpoint as a starting point.
Parameters:
config ([`SqueezeBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
|
173_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforsequenceclassification
|
.md
|
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
```
Data layouts:
```
Input data is in [batch, sequence_length, hidden_size] format.
|
173_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforsequenceclassification
|
.md
|
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
```
|
173_9_4
|
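The finetuning tip above translates into a short setup. A minimal sketch, assuming the *squeezebert/squeezebert-mnli-headless* checkpoint is available on the Hub; since the checkpoint is "headless", the classification head here is freshly initialized and still needs finetuning:

```python
# Hedged sketch: start a text-classification finetune from the recommended
# "squeezebert/squeezebert-mnli-headless" checkpoint (classifier head is newly initialized).
import torch
from transformers import SqueezeBertForSequenceClassification, SqueezeBertTokenizer

checkpoint = "squeezebert/squeezebert-mnli-headless"
tokenizer = SqueezeBertTokenizer.from_pretrained(checkpoint)
model = SqueezeBertForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss  # cross-entropy loss on the toy batch
loss.backward()                            # plug into a regular training loop from here
```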
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformultiplechoice
|
.md
|
SqueezeBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural
networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
173_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformultiplechoice
|
.md
|
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
|
173_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformultiplechoice
|
.md
|
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
*squeezebert/squeezebert-mnli-headless* checkpoint as a starting point.
Parameters:
config ([`SqueezeBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
|
173_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformultiplechoice
|
.md
|
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
```
Data layouts:
```
Input data is in [batch, sequence_length, hidden_size] format.
|
173_10_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertformultiplechoice
|
.md
|
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
```
|
173_10_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertfortokenclassification
|
.md
|
SqueezeBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural
networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
173_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertfortokenclassification
|
.md
|
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
|
173_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertfortokenclassification
|
.md
|
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
*squeezebert/squeezebert-mnli-headless* checkpoint as a starting point.
Parameters:
config ([`SqueezeBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
|
173_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertfortokenclassification
|
.md
|
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
```
Data layouts:
```
Input data is in [batch, sequence_length, hidden_size] format.
|
173_11_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertfortokenclassification
|
.md
|
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
```
|
173_11_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforquestionanswering
|
.md
|
SqueezeBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural
networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
|
173_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforquestionanswering
|
.md
|
networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
173_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforquestionanswering
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
*squeezebert/squeezebert-mnli-headless* checkpoint as a starting point.
Parameters:
|
173_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforquestionanswering
|
.md
|
*squeezebert/squeezebert-mnli-headless* checkpoint as a starting point.
Parameters:
config ([`SqueezeBertConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Hierarchy:
```
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
|
173_12_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforquestionanswering
|
.md
|
Hierarchy:
```
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
```
Data layouts:
```
Input data is in [batch, sequence_length, hidden_size] format.
|
173_12_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertforquestionanswering
|
.md
|
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
```
|
173_12_5
|
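The span-logits description above can be made concrete. A minimal sketch, assuming the public `squeezebert/squeezebert-uncased` checkpoint; the QA head is randomly initialized here, so the predicted span is meaningless until finetuning:

```python
# Hedged sketch: span extraction with SqueezeBertForQuestionAnswering.
# The QA head on top of "squeezebert/squeezebert-uncased" is randomly initialized here.
import torch
from transformers import SqueezeBertForQuestionAnswering, SqueezeBertTokenizer

tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertForQuestionAnswering.from_pretrained("squeezebert/squeezebert-uncased")

inputs = tokenizer("Who proposed SqueezeBERT?", "SqueezeBERT was proposed by Iandola et al.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

start = out.start_logits.argmax(-1)  # `span start logits`
end = out.end_logits.argmax(-1)      # `span end logits`
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids))
```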
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
174_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
174_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#overview
|
.md
|
The Wav2Vec2Phoneme model was proposed in [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al.,
2021)](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski and Michael Auli.
The abstract from the paper is the following:
*Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech
recognition systems without any labeled data. However, in many cases there is labeled data available for related
|
174_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#overview
|
.md
|
recognition systems without any labeled data. However, in many cases there is labeled data available for related
languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer
learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by
mapping phonemes of the training languages to the target language using articulatory features. Experiments show that
|
174_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#overview
|
.md
|
mapping phonemes of the training languages to the target language using articulatory features. Experiments show that
this simple method significantly outperforms prior work which introduced task-specific architectures and used only part
of a monolingually pretrained model.*
Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition.
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten)
|
174_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#overview
|
.md
|
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten)
The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).
|
174_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#usage-tips
|
.md
|
- Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
- Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- The Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC) so the model output has to be
decoded using [`Wav2Vec2PhonemeCTCTokenizer`].
- Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages in a single forward pass
to a sequence of phonemes.
|
174_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#usage-tips
|
.md
|
to a sequence of phonemes
- By default, the model outputs a sequence of phonemes. In order to transform the phonemes to a sequence of words one
should make use of a dictionary and language model.
<Tip>
Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model; for API reference, check out [`Wav2Vec2`](wav2vec2)'s documentation page,
except for the tokenizer.
</Tip>
|
174_2_1
|
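The tips above translate into a short recognition loop. A minimal sketch, assuming the public `facebook/wav2vec2-lv-60-espeak-cv-ft` phoneme checkpoint and a 16 kHz waveform (a silent dummy signal stands in for real speech):

```python
# Hedged sketch: phoneme recognition with a Wav2Vec2 CTC model and the phoneme tokenizer,
# assuming the public "facebook/wav2vec2-lv-60-espeak-cv-ft" checkpoint and 16 kHz audio.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

waveform = torch.zeros(16000)  # stand-in for one second of real 16 kHz speech
inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
# The CTC output is decoded with Wav2Vec2PhonemeCTCTokenizer (wrapped by the processor).
print(processor.batch_decode(predicted_ids))
```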
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#wav2vec2phonemectctokenizer
|
.md
|
Constructs a Wav2Vec2PhonemeCTC tokenizer.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
|
174_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#wav2vec2phonemectctokenizer
|
.md
|
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
do_phonemize (`bool`, *optional*, defaults to `True`):
Whether the tokenizer should phonetize the input or not. Only if a sequence of phonemes is passed to the
|
174_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#wav2vec2phonemectctokenizer
|
.md
|
Whether the tokenizer should phonetize the input or not. Only if a sequence of phonemes is passed to the
tokenizer, `do_phonemize` should be set to `False`.
phonemizer_lang (`str`, *optional*, defaults to `"en-us"`):
The language of the phoneme set to which the tokenizer should phonetize the input text.
phonemizer_backend (`str`, *optional*, defaults to `"espeak"`):
The backend phonetization library that shall be used by the phonemizer library. Defaults to `espeak-ng`.
|
174_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2_phoneme.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme/#wav2vec2phonemectctokenizer
|
.md
|
The backend phonetization library that shall be used by the phonemizer library. Defaults to `espeak-ng`.
See the [phonemizer package](https://github.com/bootphon/phonemizer#readme) for more information.
**kwargs
Additional keyword arguments passed along to [`PreTrainedTokenizer`]
Methods: __call__
- batch_decode
- decode
- phonemize
|
174_3_3
|
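A minimal sketch of the tokenizer methods listed above, assuming the same `facebook/wav2vec2-lv-60-espeak-cv-ft` checkpoint and that the `phonemizer` package with an espeak backend is installed locally:

```python
# Hedged sketch: text -> phonemes with Wav2Vec2PhonemeCTCTokenizer.
# Requires the `phonemizer` package and an espeak backend to be installed.
from transformers import Wav2Vec2PhonemeCTCTokenizer

tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

# __call__ phonemizes the text first (do_phonemize=True by default), then maps phonemes to ids.
ids = tokenizer("hello world").input_ids
print(tokenizer.decode(ids))

# phonemize() exposes the intermediate phoneme string directly.
print(tokenizer.phonemize("hello world", phonemizer_lang="en-us"))
```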
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
175_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
175_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#overview
|
.md
|
The BigBird model was proposed in [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
|
175_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#overview
|
.md
|
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
|
175_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#overview
|
.md
|
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
*Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
|
175_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#overview
|
.md
|
*Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
|
175_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#overview
|
.md
|
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
|
175_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#overview
|
.md
|
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.*
|
175_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#overview
|
.md
|
propose novel applications to genomics data.*
This model was contributed by [vasudevgupta](https://huggingface.co/vasudevgupta). The original code can be found
[here](https://github.com/google-research/bigbird).
|
175_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#usage-tips
|
.md
|
- For an in-detail explanation on how BigBird's attention works, see [this blog post](https://huggingface.co/blog/big-bird).
- BigBird comes with 2 implementations: **original_full** & **block_sparse**. For sequence lengths < 1024, using
**original_full** is advised as there is no benefit in using **block_sparse** attention.
- The code currently uses a window size of 3 blocks and 2 global blocks.
- Sequence length must be divisible by block size.
- Current implementation supports only **ITC**.
|
175_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#usage-tips
|
.md
|
- Sequence length must be divisible by block size.
- Current implementation supports only **ITC**.
- Current implementation doesn't support **num_random_blocks = 0**.
- BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
|
175_2_1
|
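A minimal sketch of the attention-type tip above, assuming the public `google/bigbird-roberta-base` checkpoint; `attention_type`, `block_size`, and `num_random_blocks` are the config fields documented further down:

```python
# Hedged sketch: choosing between block_sparse and original_full attention,
# assuming the public "google/bigbird-roberta-base" checkpoint.
from transformers import BigBirdModel

# Default: block sparse attention (window of 3 blocks + 2 global blocks, per the tips above).
model_sparse = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base", block_size=64, num_random_blocks=3
)

# For sequences shorter than 1024 tokens, full attention is advised instead.
model_full = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base", attention_type="original_full"
)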
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
175_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
This is the configuration class to store the configuration of a [`BigBirdModel`]. It is used to instantiate a
BigBird model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the BigBird
[google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
175_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50358):
Vocabulary size of the BigBird model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`BigBirdModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
|
175_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
175_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_new"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
175_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 1024 or 2048 or 4096).
type_vocab_size (`int`, *optional*, defaults to 2):
|
175_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
just in case (e.g., 1024 or 2048 or 4096).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`BigBirdModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
is_decoder (`bool`, *optional*, defaults to `False`):
|
175_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
The epsilon used by the layer normalization layers.
is_decoder (`bool`, *optional*, defaults to `False`):
Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
attention_type (`str`, *optional*, defaults to `"block_sparse"`):
|
175_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
relevant if `config.is_decoder=True`.
attention_type (`str`, *optional*, defaults to `"block_sparse"`):
Whether to use block sparse attention (with n complexity) as introduced in the paper or the original attention
layer (with n^2 complexity). Possible values are `"original_full"` and `"block_sparse"`.
use_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in query, key, value.
rescale_embeddings (`bool`, *optional*, defaults to `False`):
Whether to rescale embeddings with (hidden_size ** 0.5).
|
175_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
rescale_embeddings (`bool`, *optional*, defaults to `False`):
Whether to rescale embeddings with (hidden_size ** 0.5).
block_size (`int`, *optional*, defaults to 64):
Size of each block. Useful only when `attention_type == "block_sparse"`.
num_random_blocks (`int`, *optional*, defaults to 3):
Each query is going to attend to this many random blocks. Useful only when `attention_type ==
"block_sparse"`.
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
Example:
|
175_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
"block_sparse"`.
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
Example:
```python
>>> from transformers import BigBirdConfig, BigBirdModel
|
175_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdconfig
|
.md
|
>>> # Initializing a BigBird google/bigbird-roberta-base style configuration
>>> configuration = BigBirdConfig()
>>> # Initializing a model (with random weights) from the google/bigbird-roberta-base style configuration
>>> model = BigBirdModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
175_4_10
|
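The sparse-attention arguments documented above can be set the same way as in the preceding example. A minimal sketch; the concrete values are illustrative, not recommended settings:

```python
# Hedged sketch: a BigBirdConfig exercising the sparse-attention arguments documented above.
from transformers import BigBirdConfig, BigBirdModel

configuration = BigBirdConfig(
    attention_type="block_sparse",
    block_size=64,          # sequence length must be divisible by this
    num_random_blocks=3,    # random blocks each query attends to
    use_bias=True,
    rescale_embeddings=False,
    max_position_embeddings=4096,
)
model = BigBirdModel(configuration)  # randomly initialized weights
```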
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizer
|
.md
|
Construct a BigBird tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
|
175_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizer
|
.md
|
contains the vocabulary necessary to instantiate a tokenizer.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
175_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizer
|
.md
|
The end of sequence token.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
|
175_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizer
|
.md
|
token of a sequence built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
|
175_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizer
|
.md
|
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
|
175_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizer
|
.md
|
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
|
175_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizer
|
.md
|
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
175_5_6
|
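A minimal sketch of the tokenizer, including the `sp_model_kwargs` pass-through described above; it assumes the public `google/bigbird-roberta-base` checkpoint, and the subword-regularization values are illustrative only:

```python
# Hedged sketch: BigBirdTokenizer with illustrative sp_model_kwargs,
# assuming the public "google/bigbird-roberta-base" checkpoint.
from transformers import BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained(
    "google/bigbird-roberta-base",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)

encoded = tokenizer("BigBird handles long documents.", "Really long ones.")
print(encoded["input_ids"])            # special tokens added via build_inputs_with_special_tokens
print(tokenizer.decode(encoded["input_ids"]))
```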
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizerfast
|
.md
|
Construct a "fast" BigBird tokenizer (backed by HuggingFace's *tokenizers* library). Based on
[Unigram](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models). This
tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
|
175_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizerfast
|
.md
|
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
|
175_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizerfast
|
.md
|
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token. Note that when building a sequence using special tokens, this is not the token
that is used for the end of sequence. The token used is the `sep_token`.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
175_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizerfast
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
|
175_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizerfast
|
.md
|
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
|
175_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdtokenizerfast
|
.md
|
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
|
175_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbird-specific-outputs
|
.md
|
models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput
Output type of [`BigBirdForPreTraining`].
Args:
loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`):
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
|
175_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbird-specific-outputs
|
.md
|
(classification) loss.
prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (`torch.FloatTensor` of shape `(batch_size, 2)`):
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
|
175_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbird-specific-outputs
|
.md
|
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
|
175_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbird-specific-outputs
|
.md
|
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
175_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbird-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
<frameworkcontent>
<pt>
|
175_7_4
|
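A minimal sketch showing how the optional `hidden_states` and `attentions` tuples described above are requested and shaped, assuming the public `google/bigbird-roberta-base` checkpoint (full attention is used so that attention probabilities are returned for this short input):

```python
# Hedged sketch: inspecting the optional hidden_states / attentions outputs described above,
# assuming the public "google/bigbird-roberta-base" checkpoint.
import torch
from transformers import BigBirdModel, BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")

inputs = tokenizer("BigBird scales attention to long sequences.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True, output_attentions=True)

print(len(out.hidden_states), out.hidden_states[0].shape)  # embeddings + one per layer, [batch, seq, hidden]
print(len(out.attentions), out.attentions[0].shape)        # one per layer, [batch, heads, seq, seq]
```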
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdmodel
|
.md
|
The bare BigBird Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
|
175_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdmodel
|
.md
|
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
|
175_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdmodel
|
.md
|
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
|
175_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdmodel
|
.md
|
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
`add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward
|
175_8_3
|
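A minimal sketch of the decoder configuration described above; the weights are randomly initialized and the inputs are dummies, so this only illustrates the config flags:

```python
# Hedged sketch: configuring BigBird as a decoder with cross-attention, as described above.
import torch
from transformers import BigBirdConfig, BigBirdForCausalLM

config = BigBirdConfig(is_decoder=True, add_cross_attention=True, attention_type="original_full")
decoder = BigBirdForCausalLM(config)  # randomly initialized weights

input_ids = torch.tensor([[0, 100, 200, 2]])
encoder_hidden_states = torch.randn(1, 8, config.hidden_size)  # supplied by the encoder in a real Seq2Seq setup
out = decoder(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
print(out.logits.shape)  # [batch, sequence_length, vocab_size]
```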
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdforpretraining
|
.md
|
No docstring available for BigBirdForPreTraining
Methods: forward
|
175_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdforcausallm
|
.md
|
BigBird Model with a `language modeling` head on top for CLM fine-tuning.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
175_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdforcausallm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
175_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdformaskedlm
|
.md
|
BigBird Model with a `language modeling` head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
175_11_0
|