source | url | file_type | chunk | chunk_id
stringclasses (470 values) | stringlengths (49–167) | stringclasses (1 value) | stringlengths (1–512) | stringlengths (5–9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
num_negatives (`int`, *optional*, defaults to 100):
Number of negative samples for the contrastive loss.
codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the quantized feature vectors.
proj_codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (`float`, *optional*, defaults to 0.1):
The weight of the codebook diversity loss component.
|
158_8_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
diversity_loss_weight (`float`, *optional*, defaults to 0.1):
The weight of the codebook diversity loss component.
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
instance of [`Wav2Vec2ForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
|
158_8_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`Wav2Vec2ForCTC`].
use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of [`Wav2Vec2ForSequenceClassification`].
|
158_8_23
|
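The CTC-related options above can be set directly on the configuration. Below is a minimal, hedged sketch (not part of the original documentation chunk) using the public `Wav2Vec2Config`/`Wav2Vec2ForCTC` API; the specific values are illustrative, not recommendations:

```python
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

# Illustrative values only: configure the CTC loss behaviour explicitly
config = Wav2Vec2Config(
    ctc_loss_reduction="mean",  # instead of the default "sum"
    ctc_zero_infinity=True,     # zero out infinite CTC losses caused by too-short inputs
    diversity_loss_weight=0.1,
)

# Randomly initialized model; in practice you would load pretrained weights instead
model = Wav2Vec2ForCTC(config)
```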
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
instance of [`Wav2Vec2ForSequenceClassification`].
classifier_proj_size (`int`, *optional*, defaults to 256):
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
|
158_8_24
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
|
158_8_25
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
A tuple of integers defining the dilation factor of each 1D convolutional layer in the *TDNN* module of the
*XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
add_adapter (`bool`, *optional*, defaults to `False`):
|
158_8_26
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
Dimensionality of the *XVector* embedding vectors.
add_adapter (`bool`, *optional*, defaults to `False`):
Whether a convolutional network should be stacked on top of the Wav2Vec2 Encoder. Can be very useful for
warm-starting Wav2Vec2 for SpeechEncoderDecoder models.
adapter_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adapter_stride (`int`, *optional*, defaults to 2):
|
158_8_27
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
adapter_stride (`int`, *optional*, defaults to 2):
Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
num_adapter_layers (`int`, *optional*, defaults to 3):
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
adapter_attn_dim (`int`, *optional*):
Dimension of the attention adapter weights to be used in each attention block. An example of a model using
|
158_8_28
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
Dimension of the attention adapter weights to be used in each attention block. An example of a model using
attention adapters is [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all).
output_hidden_size (`int`, *optional*):
Dimensionality of the encoder output layer. If not defined, this defaults to *hidden_size*. Only relevant
if `add_adapter is True`.
Example:
```python
>>> from transformers import Wav2Vec2Config, Wav2Vec2Model
|
158_8_29
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
|
.md
|
>>> # Initializing a Wav2Vec2 facebook/wav2vec2-base-960h style configuration
>>> configuration = Wav2Vec2Config()
>>> # Initializing a model (with random weights) from the facebook/wav2vec2-base-960h style configuration
>>> model = Wav2Vec2Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
158_8_30
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2ctctokenizer
|
.md
|
Constructs a Wav2Vec2CTC tokenizer.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
|
158_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2ctctokenizer
|
.md
|
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
word_delimiter_token (`str`, *optional*, defaults to `"|"`):
The token used for defining the end of a word.
do_lower_case (`bool`, *optional*, defaults to `False`):
|
158_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2ctctokenizer
|
.md
|
The token used for defining the end of a word.
do_lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to accept lowercase input and lowercase the output when decoding.
target_lang (`str`, *optional*):
The target language the tokenizer should set by default. `target_lang` has to be defined for multilingual,
nested vocabularies such as [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all).
**kwargs
Additional keyword arguments passed along to [`PreTrainedTokenizer`]
|
158_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2ctctokenizer
|
.md
|
**kwargs
Additional keyword arguments passed along to [`PreTrainedTokenizer`]
Methods: __call__
- save_vocabulary
- decode
- batch_decode
- set_target_lang
|
158_9_3
|
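A brief usage sketch for the tokenizer methods listed above. The checkpoint names are assumptions (real Hub repositories, but not named in this chunk itself):

```python
from transformers import Wav2Vec2CTCTokenizer

# Character-level CTC tokenizer shipped with an English checkpoint (assumed repo name)
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")

ids = tokenizer("THE CAT SAT")["input_ids"]
print(tokenizer.decode(ids))  # expected to round-trip back to "THE CAT SAT"

# For multilingual, nested vocabularies (e.g. facebook/mms-1b-all), the target language
# can be switched after loading:
# mms_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/mms-1b-all", target_lang="eng")
# mms_tokenizer.set_target_lang("fra")
```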
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2featureextractor
|
.md
|
Constructs a Wav2Vec2 feature extractor.
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
Args:
feature_size (`int`, *optional*, defaults to 1):
The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 16000):
|
158_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2featureextractor
|
.md
|
The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
padding_value (`float`, *optional*, defaults to 0.0):
The value that is used to fill the padding values.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
improve the performance for some models, *e.g.*,
|
158_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2featureextractor
|
.md
|
improve the performance for some models, *e.g.*,
[wav2vec2-lv60](https://huggingface.co/models?search=lv60).
return_attention_mask (`bool`, *optional*, defaults to `False`):
Whether or not [`~Wav2Vec2FeatureExtractor.__call__`] should return `attention_mask`.
<Tip>
Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as
[wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), have **not** been trained using
|
158_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2featureextractor
|
.md
|
[wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), have **not** been trained using
`attention_mask`. For such models, `input_values` should simply be padded with 0 and no `attention_mask`
should be passed.
For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as
[wav2vec2-lv60](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should be
passed for batched inference.
</Tip>
Methods: __call__
|
158_10_3
|
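To illustrate the tip above, here is a hedged sketch comparing the two normalization variants; the checkpoint names come from the tip itself and the waveforms are dummies:

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

# Dummy mono waveforms of different lengths, sampled at 16 kHz
speech = [np.zeros(16000, dtype=np.float32), np.zeros(8000, dtype=np.float32)]

# "layer"-norm checkpoint: attention_mask should be passed for batched inference
fe_layer = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
batch = fe_layer(speech, sampling_rate=16000, padding=True, return_tensors="np")
print(sorted(batch.keys()))  # expected to include "attention_mask"

# "group"-norm checkpoint: only zero-padded input_values, no attention_mask
fe_group = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
batch = fe_group(speech, sampling_rate=16000, padding=True, return_tensors="np")
print(sorted(batch.keys()))  # expected: ["input_values"] only
```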
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2processor
|
.md
|
Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single
processor.
[`Wav2Vec2Processor`] offers all the functionalities of [`Wav2Vec2FeatureExtractor`] and [`PreTrainedTokenizer`].
See the docstring of [`~Wav2Vec2Processor.__call__`] and [`~Wav2Vec2Processor.decode`] for more information.
Args:
feature_extractor (`Wav2Vec2FeatureExtractor`):
An instance of [`Wav2Vec2FeatureExtractor`]. The feature extractor is a required input.
|
158_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2processor
|
.md
|
An instance of [`Wav2Vec2FeatureExtractor`]. The feature extractor is a required input.
tokenizer ([`PreTrainedTokenizer`]):
An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input.
Methods: __call__
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
|
158_11_1
|
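A short, hedged sketch of how the processor forwards work to its two components; the checkpoint name and the dummy token ids are assumptions for illustration:

```python
import numpy as np
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")  # assumed repo name

# __call__ forwards raw audio to the feature extractor
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# decode/batch_decode forward predicted token ids to the tokenizer
# (the ids below are dummies; in practice they come from argmax over Wav2Vec2ForCTC logits)
dummy_ids = [[0, 0, 4, 4, 5]]
print(processor.batch_decode(dummy_ids))
```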
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2processorwithlm
|
.md
|
Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor, a Wav2Vec2 CTC tokenizer and a decoder
with language model support into a single processor for language model boosted speech recognition decoding.
Args:
feature_extractor ([`Wav2Vec2FeatureExtractor`] or [`SeamlessM4TFeatureExtractor`]):
An instance of [`Wav2Vec2FeatureExtractor`] or [`SeamlessM4TFeatureExtractor`]. The feature extractor is a required input.
tokenizer ([`Wav2Vec2CTCTokenizer`]):
|
158_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2processorwithlm
|
.md
|
tokenizer ([`Wav2Vec2CTCTokenizer`]):
An instance of [`Wav2Vec2CTCTokenizer`]. The tokenizer is a required input.
decoder (`pyctcdecode.BeamSearchDecoderCTC`):
An instance of [`pyctcdecode.BeamSearchDecoderCTC`]. The decoder is a required input.
Methods: __call__
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
|
158_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#decoding-multiple-audios
|
.md
|
If you are planning to decode multiple batches of audios, you should consider using [`~Wav2Vec2ProcessorWithLM.batch_decode`] and passing an instantiated `multiprocessing.Pool`.
Otherwise, [`~Wav2Vec2ProcessorWithLM.batch_decode`] performance will be slower than calling [`~Wav2Vec2ProcessorWithLM.decode`] for each audio individually, as it internally instantiates a new `Pool` for every call. See the example below:
```python
>>> # Let's see how to use a user-managed pool for batch decoding multiple audios
|
158_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#decoding-multiple-audios
|
.md
|
```python
>>> # Let's see how to use a user-managed pool for batch decoding multiple audios
>>> from multiprocessing import get_context
>>> from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC
>>> from datasets import load_dataset
>>> import datasets
>>> import torch
|
158_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#decoding-multiple-audios
|
.md
|
>>> # import model, feature extractor, tokenizer
>>> model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda")
>>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
>>> # load example dataset
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
|
158_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#decoding-multiple-audios
|
.md
|
>>> def map_to_array(batch):
... batch["speech"] = batch["audio"]["array"]
... return batch
>>> # prepare speech data for batch inference
>>> dataset = dataset.map(map_to_array, remove_columns=["audio"])
>>> def map_to_pred(batch, pool):
... inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
... inputs = {k: v.to("cuda") for k, v in inputs.items()}
... with torch.no_grad():
... logits = model(**inputs).logits
|
158_13_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#decoding-multiple-audios
|
.md
|
... with torch.no_grad():
... logits = model(**inputs).logits
... transcription = processor.batch_decode(logits.cpu().numpy(), pool).text
... batch["transcription"] = transcription
... return batch
|
158_13_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#decoding-multiple-audios
|
.md
|
>>> # note: pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`.
>>> # otherwise, the LM won't be available to the pool's sub-processes
>>> # select number of processes and batch_size based on number of CPU cores available and on dataset size
>>> with get_context("fork").Pool(processes=2) as pool:
... result = dataset.map(
... map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"]
... )
|
158_13_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#decoding-multiple-audios
|
.md
|
>>> result["transcription"][:2]
['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"]
```
|
158_13_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput
Output type of [`Wav2Vec2DecoderWithLM`], with transcription.
Args:
text (list of `str` or `str`):
Decoded logits in text form. Usually the speech transcription.
logit_score (list of `float` or `float`):
Total logit score of the beams associated with produced text.
lm_score (list of `float`):
Fused lm_score of the beams associated with produced text.
|
158_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
lm_score (list of `float`):
Fused lm_score of the beams associated with produced text.
word_offsets (list of `List[Dict[str, Union[int, str]]]` or `List[Dict[str, Union[int, str]]]`):
Offsets of the decoded words. In combination with sampling rate and model downsampling rate word offsets
can be used to compute time stamps for each word.
models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput
Base class for models that have been trained with the Wav2Vec2 loss objective.
Args:
|
158_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
Base class for models that have been trained with the Wav2Vec2 loss objective.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
extract_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, conv_dim[-1])`):
Sequence of extracted feature vectors of the last convolutional layer of the model.
|
158_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
|
158_14_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
158_14_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput
Output type of [`Wav2Vec2ForPreTraining`], with potential hidden states and attentions.
Args:
loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`):
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the [official
|
158_14_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the [official
paper](https://arxiv.org/pdf/2006.11477.pdf).
projected_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
Hidden-states of the model projected to *config.proj_codevector_dim* that can be used to predict the masked
projected quantized states.
|
158_14_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
projected quantized states.
projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
Quantized extracted feature vectors projected to *config.proj_codevector_dim* representing the positive
target vectors for contrastive loss.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
158_14_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
158_14_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
contrastive_loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`):
The contrastive loss (L_m) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
|
158_14_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
The contrastive loss (L_m) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
diversity_loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`):
The diversity loss (L_d) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput:
|
158_14_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput:
modeling_flax_wav2vec2 requires the FLAX library but it was not found in your environment. Check out the instructions on the
installation page: https://github.com/google/flax and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput:
|
158_14_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2-specific-outputs
|
.md
|
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput:
modeling_flax_wav2vec2 requires the FLAX library but it was not found in your environment. Check out the instructions on the
installation page: https://github.com/google/flax and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.
<frameworkcontent>
<pt>
|
158_14_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2model
|
.md
|
The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.
Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
158_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2model
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
|
158_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2model
|
.md
|
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
158_15_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forctc
|
.md
|
Wav2Vec2 Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).
Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
158_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forctc
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
|
158_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forctc
|
.md
|
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
|
158_16_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forctc
|
.md
|
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`Wav2Vec2ForCTC`] with adapters. Uses 'eng' by
default.
Methods: forward
- load_adapter
|
158_16_3
|
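As a hedged sketch of the `target_lang` parameter and the `load_adapter` method described above (it mirrors the MMS adapter workflow; treat the exact keyword arguments as assumptions rather than a definitive recipe):

```python
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Load the multilingual MMS checkpoint with French adapter weights and vocabulary
processor = AutoProcessor.from_pretrained("facebook/mms-1b-all", target_lang="fra")
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/mms-1b-all", target_lang="fra", ignore_mismatched_sizes=True
)

# Later, switch both the tokenizer vocabulary and the adapter weights to another language
processor.tokenizer.set_target_lang("eng")
model.load_adapter("eng")
```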
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forsequenceclassification
|
.md
|
Wav2Vec2 Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting.
Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
158_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forsequenceclassification
|
.md
|
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
|
158_17_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
158_17_2
|
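A hedged inference sketch for the sequence classification head; the keyword-spotting checkpoint name and the random waveform are assumptions for illustration only:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

# Assumed checkpoint: a Wav2Vec2 model fine-tuned for SUPERB Keyword Spotting
feature_extractor = AutoFeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ks")
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks")

audio = np.random.randn(16000).astype(np.float32)  # dummy 1-second clip at 16 kHz
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```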
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2foraudioframeclassification
|
.md
|
Wav2Vec2 Model with a frame classification head on top for tasks like Speaker Diarization.
Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
158_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2foraudioframeclassification
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
|
158_18_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2foraudioframeclassification
|
.md
|
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
158_18_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forxvector
|
.md
|
Wav2Vec2 Model with an XVector feature extraction head on top for tasks like Speaker Verification.
Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
158_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forxvector
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
|
158_19_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forxvector
|
.md
|
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
158_19_2
|
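A hedged sketch of speaker verification with the XVector head; the checkpoint name and the dummy audio are assumptions, and any similarity threshold is application-dependent:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForXVector

# Assumed checkpoint: a Wav2Vec2 XVector model fine-tuned for speaker verification
feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = Wav2Vec2ForXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")

# Two dummy 16 kHz waveforms standing in for two utterances
audio = [np.random.randn(16000).astype(np.float32) for _ in range(2)]
inputs = feature_extractor(audio, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings  # one XVector embedding per utterance

# Cosine similarity between the two speaker embeddings
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(float(similarity))
```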
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forpretraining
|
.md
|
Wav2Vec2 Model with a quantizer and `VQ` head on top.
Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
158_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forpretraining
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
|
158_20_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2forpretraining
|
.md
|
behavior.
Parameters:
config ([`Wav2Vec2Config`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf>
|
158_20_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#tfwav2vec2model
|
.md
|
No docstring available for TFWav2Vec2Model
Methods: call
|
158_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#tfwav2vec2forsequenceclassification
|
.md
|
No docstring available for TFWav2Vec2ForSequenceClassification
Methods: call
|
158_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#tfwav2vec2forctc
|
.md
|
No docstring available for TFWav2Vec2ForCTC
Methods: call
</tf>
<jax>
|
158_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#flaxwav2vec2model
|
.md
|
No docstring available for FlaxWav2Vec2Model
Methods: __call__
|
158_24_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#flaxwav2vec2forctc
|
.md
|
No docstring available for FlaxWav2Vec2ForCTC
Methods: __call__
|
158_25_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#flaxwav2vec2forpretraining
|
.md
|
No docstring available for FlaxWav2Vec2ForPreTraining
Methods: __call__
</jax>
</frameworkcontent>
|
158_26_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
159_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
159_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#overview
|
.md
|
The GPT-Sw3 model was first proposed in
[Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf)
by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman,
Fredrik Carlsson, Magnus Sahlgren.
Since that first paper, the authors have extended their work and trained new models on their new 1.2TB corpus named The Nordic Pile.
|
159_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#overview
|
.md
|
GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden
in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing
320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a
causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
|
159_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#overview
|
.md
|
causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
This model was contributed by [AI Sweden Models](https://huggingface.co/AI-Sweden-Models).
|
159_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#usage-example
|
.md
|
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
>>> model = AutoModelForCausalLM.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
>>> input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"]
>>> generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0]
|
159_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#usage-example
|
.md
|
>>> generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0]
>>> print(tokenizer.decode(generated_token_ids))
Träd är fina för att de är färgstarka. Men ibland är det fint
```
|
159_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Causal language modeling task guide](../tasks/language_modeling)
<Tip>
The implementation uses the `GPT2Model` coupled with our `GPTSw3Tokenizer`. Refer to [GPT2Model documentation](gpt2)
for API reference and examples.
|
159_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#resources
|
.md
|
for API reference and examples.
Note that sentencepiece is required to use our tokenizer and can be installed with `pip install transformers[sentencepiece]` or `pip install sentencepiece`
</Tip>
|
159_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
Construct a GPTSw3 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Example usage:
```python
>>> from transformers import GPTSw3Tokenizer
|
159_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
>>> tokenizer = GPTSw3Tokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-126m")
>>> tokenizer("Svenska är kul!")["input_ids"]
[1814, 377, 3617, 63504]
```
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to lowercase the input when tokenizing.
|
159_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
do_lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to lowercase the input when tokenizing.
remove_space (`bool`, *optional*, defaults to `False`):
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (`bool`, *optional*, defaults to `False`):
Whether or not to keep accents when tokenizing.
pad_token (`str`, *optional*):
|
159_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
Whether or not to keep accents when tokenizing.
pad_token (`str`, *optional*):
The token used for padding, for example when batching sequences of different lengths. If not provided, will
default to '<pad>' or '<unk>' depending on model size.
unk_token (`str`, *optional*):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. If not provided, will default to '<unk>'.
eos_token (`str`, *optional*):
|
159_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
token instead. If not provided, will default to '<unk>'.
eos_token (`str`, *optional*):
The end of sequence token seen during pretraining. If not provided, will default to '<|endoftext|>'
bos_token (`str`, *optional*):
The beginning of sequence token that can be used for downstream tasks; it was not seen during pretraining. If
not provided, will default to '<s>' or '<|endoftext|>', depending on model size.
sp_model_kwargs (`dict`, *optional*):
|
159_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
not provided, will default to '<s>' or '<|endoftext|>', depending on model size.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
|
159_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
|
159_4_6
|
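A hedged sketch of passing subword-regularization settings through `sp_model_kwargs`; the values are illustrative only, and whether sampling is useful depends on the training setup:

```python
from transformers import GPTSw3Tokenizer

# Illustrative values only: enable subword regularization (sampling-based segmentation)
tokenizer = GPTSw3Tokenizer.from_pretrained(
    "AI-Sweden-Models/gpt-sw3-126m",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)

# With sampling enabled, repeated tokenization of the same text may yield different segmentations
print(tokenizer.tokenize("Svenska är kul!"))
print(tokenizer.tokenize("Svenska är kul!"))
```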
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt-sw3.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt-sw3/#gptsw3tokenizer
|
.md
|
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).
whitespaces (`set`):
The whitespaces that are replaced in the whitespace normalization in preprocessing.
non_printing_characters_re (`Pattern`):
The compiled regular expression to remove non-printing characters in preprocessing.
Methods: save_vocabulary
|
159_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
160_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
|
160_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#overview
|
.md
|
The Vivit model was proposed in [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid.
The paper proposes one of the first successful sets of pure-transformer-based models for video understanding.
The abstract from the paper is the following:
|
160_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#overview
|
.md
|
*We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be
|
160_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#overview
|
.md
|
model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens,
|
160_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#overview
|
.md
|
and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks.*
|
160_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#overview
|
.md
|
This model was contributed by [jegormeister](https://huggingface.co/jegormeister). The original code (written in JAX) can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/vivit).
|
160_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#using-scaled-dot-product-attention-sdpa
|
.md
|
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.
|
160_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#using-scaled-dot-product-attention-sdpa
|
.md
|
page for more information.
SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import VivitModel

model = VivitModel.from_pretrained("google/vivit-b-16x2-kinetics400", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
|
160_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#using-scaled-dot-product-attention-sdpa
|
.md
|
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `google/vivit-b-16x2-kinetics400` model, we saw the following speedups during inference.
|
160_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#training
|
.md
|
| num_training_steps | batch_size | is cuda | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) |
|---------------------:|-------------:|----------:|--------------:|----------------------:|---------------------:|-----------------:|
| 100 | 1 | True | 7.122 | 2575.28 | 5932.54 | 130.364 |
|
160_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#inference
|
.md
|
| num_batches | batch_size | is cuda | is half | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) |
|---------------|--------------|-----------|-----------|---------------|------------------|---------------|-----------------|
| 20 | 1 | True | False | 15.422 | 715.807 | 317.079 | 125.75 |
| 20 | 2 | True | False | 17.146 | 1234.75 | 447.175 | 176.122 |
|
160_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#inference
|
.md
|
| 20 | 2 | True | False | 17.146 | 1234.75 | 447.175 | 176.122 |
| 20 | 4 | True | False | 18.093 | 2275.82 | 709.864 | 220.6 |
| 20 | 8 | True | False | 19.284 | 4358.19 | 1233.24 | 253.393 |
|
160_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#vivitconfig
|
.md
|
This is the configuration class to store the configuration of a [`VivitModel`]. It is used to instantiate a ViViT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the ViViT
[google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
160_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#vivitconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
num_frames (`int`, *optional*, defaults to 32):
The number of frames in each video.
tubelet_size (`List[int]`, *optional*, defaults to `[2, 16, 16]`):
The size (resolution) of each tubelet.
num_channels (`int`, *optional*, defaults to 3):
|
160_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vivit.md
|
https://huggingface.co/docs/transformers/en/model_doc/vivit/#vivitconfig
|
.md
|
The size (resolution) of each tubelet.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
|
160_5_2
|
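A hedged sketch, mirroring the configuration-usage pattern shown for `Wav2Vec2Config` earlier in this dump; it is not part of the original `VivitConfig` chunk:

```python
from transformers import VivitConfig, VivitModel

# Initializing a ViViT google/vivit-b-16x2-kinetics400 style configuration
configuration = VivitConfig()

# Initializing a model (with random weights) from that configuration
model = VivitModel(configuration)

# Accessing the model configuration
configuration = model.config
```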