Dataset columns: source (string, 470 distinct values) · url (string, length 49–167) · file_type (string, 1 distinct value) · chunk (string, length 1–512) · chunk_id (string, length 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneomodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
188_7_2
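As a minimal sketch of the distinction described above (config-only initialization versus loading pretrained weights), assuming the public `EleutherAI/gpt-neo-1.3B` checkpoint:

```python
from transformers import GPTNeoConfig, GPTNeoModel

# Initializing from a config gives a randomly initialized model (no pretrained weights)
config = GPTNeoConfig()
model = GPTNeoModel(config)

# To load the pretrained weights, use from_pretrained with a checkpoint name
model = GPTNeoModel.from_pretrained("EleutherAI/gpt-neo-1.3B")
```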
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforcausallm
.md
The GPT Neo Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
188_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforcausallm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GPTNeoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
188_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
188_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforquestionanswering
.md
The GPT-Neo Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
188_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GPTNeoConfig`]): Model configuration class with all the parameters of the model.
188_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforquestionanswering
.md
and behavior. Parameters: config ([`GPTNeoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
188_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforsequenceclassification
.md
The GPTNeo Model transformer with a sequence classification head on top (linear layer). [`GPTNeoForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
188_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
188_10_1
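The last-token selection described above can be illustrated with a small sketch; the padding id and the tensor values below are hypothetical, and this is an illustration of the idea rather than the library's exact implementation:

```python
import torch

pad_token_id = 0  # hypothetical padding id
input_ids = torch.tensor([
    [5, 8, 3, 0, 0],   # right-padded: last non-pad token at position 2
    [7, 2, 9, 4, 1],   # no padding: last position 4
])

# Index of the last token that is not a padding token in each row
non_pad_mask = (input_ids != pad_token_id).int()
last_token_idx = non_pad_mask.cumsum(dim=-1).argmax(dim=-1)
print(last_token_idx)  # tensor([2, 4])
```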
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
188_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneoforsequenceclassification
.md
and behavior. Parameters: config ([`GPTNeoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
188_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneofortokenclassification
.md
GPT Neo model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
188_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneofortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GPTNeoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
188_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#gptneofortokenclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <jax>
188_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#flaxgptneomodel
.md
No docstring available for FlaxGPTNeoModel Methods: __call__
188_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neo.md
https://huggingface.co/docs/transformers/en/model_doc/gpt_neo/#flaxgptneoforcausallm
.md
No docstring available for FlaxGPTNeoForCausalLM Methods: __call__ </jax> </frameworkcontent>
188_13_0
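Since no docstring is shown for the Flax classes, a minimal usage sketch of `FlaxGPTNeoModel` is given below; it assumes JAX and Flax are installed and uses the public `EleutherAI/gpt-neo-125m` checkpoint, converting from the PyTorch weights if no Flax weights are available:

```python
from transformers import AutoTokenizer, FlaxGPTNeoModel

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
# from_pt=True converts the PyTorch checkpoint on the fly if Flax weights are missing
model = FlaxGPTNeoModel.from_pretrained("EleutherAI/gpt-neo-125m", from_pt=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)  # __call__ runs the forward pass
last_hidden_states = outputs.last_hidden_state
```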
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
189_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
189_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#overview
.md
Hubert was proposed in [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. The abstract from the paper is the following: *Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are
189_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#overview
.md
*Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an
189_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#overview
.md
propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised
189_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#overview
.md
acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h,
189_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#overview
.md
state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
189_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#usage-tips
.md
- Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - The Hubert model was fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`].
189_2_0
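A minimal end-to-end sketch of the CTC decoding step mentioned above, assuming the `facebook/hubert-large-ls960-ft` checkpoint and a small LibriSpeech test split; the processor wraps the [`Wav2Vec2CTCTokenizer`] used for decoding:

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, HubertForCTC

processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

# Load a 16 kHz sample waveform (raw float array) for ASR
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the argmax over the vocabulary and collapse repeats/blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```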
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#using-flash-attention-2
.md
Flash Attention 2 is a faster, optimized implementation of the model's attention computation.
189_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#installation
.md
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer).
189_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#installation
.md
Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ```
189_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#usage
.md
Below is an expected speedup diagram comparing the pure inference time between the native implementation in transformers of `facebook/hubert-large-ls960-ft` and the flash-attention-2 and SDPA (scaled-dot-product attention) versions. We show the average speedup obtained on the `librispeech_asr` `clean` validation split: ```python >>> from transformers import Wav2Vec2Model
189_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#usage
.md
model = Wav2Vec2Model.from_pretrained("facebook/hubert-large-ls960-ft", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(device) ... ```
189_5_1
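The snippet above is truncated in the source; a fuller, hedged sketch of loading the same checkpoint with Flash Attention 2 (here through `HubertModel`, in half precision as Flash Attention requires) might look as follows:

```python
import torch
from transformers import HubertModel

device = "cuda"  # Flash Attention 2 requires a supported CUDA GPU

model = HubertModel.from_pretrained(
    "facebook/hubert-large-ls960-ft",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # assumes flash-attn is installed
).to(device)

# A dummy one-second batch of 16 kHz audio, matching the model dtype
waveform = torch.randn(1, 16_000, dtype=torch.float16, device=device)
with torch.no_grad():
    hidden_states = model(waveform).last_hidden_state
```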
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#expected-speedups
.md
Below is an expected speedup diagram comparing the pure inference time between the native implementation in transformers of the `facebook/hubert-large-ls960-ft` model and the flash-attention-2 and SDPA (scaled-dot-product attention) versions. We show the average speedup obtained on the `librispeech_asr` `clean` validation split: <div style="text-align: center"> <img src="https://huggingface.co/datasets/kamilakesbi/transformers_image_doc/resolve/main/data/Hubert_speedup.png"> </div>
189_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#resources
.md
- [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr)
189_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
This is the configuration class to store the configuration of a [`HubertModel`]. It is used to instantiate a Hubert model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Hubert [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
189_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32): Vocabulary size of the Hubert model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`HubertModel`].
189_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
`inputs_ids` passed when calling [`HubertModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
189_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
189_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. activation_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.1):
189_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. final_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the final projection layer of [`Wav2Vec2ForCTC`]. layerdrop (`float`, *optional*, defaults to 0.1): The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
189_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. feat_extract_norm (`str`, *optional*, defaults to `"group"`):
189_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
The epsilon used by the layer normalization layers. feat_extract_norm (`str`, *optional*, defaults to `"group"`): The norm to be applied to 1D convolutional layers of the feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers. feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for the output of the feature encoder.
189_8_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for the output of the feature encoder. feat_proj_layer_norm (`bool`, *optional*, defaults to `True`): Whether to apply LayerNorm to the output of the feature encoder. feat_extract_activation (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
189_8_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. conv_dim (`Tuple[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`): A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers. conv_stride (`Tuple[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
189_8_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
conv_stride (`Tuple[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`): A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_kernel (`Tuple[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 3, 3)`): A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
189_8_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_bias (`bool`, *optional*, defaults to `False`): Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (`int`, *optional*, defaults to 128): Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.
189_8_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16): Number of groups of 1D convolutional positional embeddings layer. conv_pos_batch_norm (`bool`, *optional*, defaults to `False`): Whether to use batch norm instead of weight norm in conv_pos. do_stable_layer_norm (`bool`, *optional*, defaults to `False`):
189_8_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
Whether to use batch norm instead of weight norm in conv_pos. do_stable_layer_norm (`bool`, *optional*, defaults to `False`): Whether to apply the *stable* layer norm architecture of the Transformer encoder. `do_stable_layer_norm is True` corresponds to applying layer norm before the attention layer, whereas `do_stable_layer_norm is False` corresponds to applying layer norm after the attention layer. apply_spec_augment (`bool`, *optional*, defaults to `True`):
189_8_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
apply_spec_augment (`bool`, *optional*, defaults to `True`): Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). mask_time_prob (`float`, *optional*, defaults to 0.05): Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
189_8_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
189_8_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2): The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
189_8_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0): Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
189_8_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_feature_length (`int`, *optional*, defaults to 10):
189_8_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
True`. mask_feature_length (`int`, *optional*, defaults to 10): Length of vector span along the feature axis. mask_feature_min_masks (`int`, *optional*, defaults to 0): The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`. Only relevant if `mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`. ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
189_8_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`): Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [`HubertForCTC`]. ctc_zero_infinity (`bool`, *optional*, defaults to `False`): Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [`HubertForCTC`].
189_8_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [`HubertForCTC`]. use_weighted_layer_sum (`bool`, *optional*, defaults to `False`): Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [`HubertForSequenceClassification`]. classifier_proj_size (`int`, *optional*, defaults to 256): Dimensionality of the projection before token mean-pooling for classification. Example: ```python
189_8_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
Dimensionality of the projection before token mean-pooling for classification. Example: ```python >>> from transformers import HubertModel, HubertConfig
189_8_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertconfig
.md
>>> # Initializing a Hubert facebook/hubert-base-ls960 style configuration >>> configuration = HubertConfig() >>> # Initializing a model from the facebook/hubert-base-ls960 style configuration >>> model = HubertModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` <frameworkcontent> <pt>
189_8_23
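Building on the example above, the SpecAugment and CTC options documented earlier can be overridden when constructing the configuration; the values below are illustrative, not recommended settings:

```python
from transformers import HubertConfig, HubertForCTC

config = HubertConfig(
    vocab_size=32,
    apply_spec_augment=True,
    mask_time_prob=0.05,     # fraction of time steps considered as mask starts
    mask_time_length=10,     # length of each time-axis mask span
    mask_feature_prob=0.0,   # no feature-axis masking in this sketch
    ctc_loss_reduction="mean",
    ctc_zero_infinity=True,
)
model = HubertForCTC(config)  # randomly initialized, e.g. for training from scratch
```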
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertmodel
.md
The bare Hubert Model transformer outputting raw hidden-states without any specific head on top. Hubert was proposed in [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
189_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertmodel
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
189_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertmodel
.md
behavior. Parameters: config ([`HubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
189_9_2
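A minimal sketch of extracting hidden states with the bare `HubertModel`; the checkpoint name and the use of the Wav2Vec2-style feature extractor are assumptions, and the input is a random waveform purely for illustration:

```python
import torch
from transformers import AutoFeatureExtractor, HubertModel

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/hubert-base-ls960")
model = HubertModel.from_pretrained("facebook/hubert-base-ls960")

# Hubert expects a raw 16 kHz waveform as a float array
waveform = torch.randn(16_000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)
```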
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertforctc
.md
Hubert Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). Hubert was proposed in [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
189_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertforctc
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
189_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertforctc
.md
behavior. Parameters: config ([`HubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
189_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertforsequenceclassification
.md
Hubert Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. Hubert was proposed in [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
189_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertforsequenceclassification
.md
Ruslan Salakhutdinov, Abdelrahman Mohamed. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
189_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#hubertforsequenceclassification
.md
behavior. Parameters: config ([`HubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
189_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#tfhubertmodel
.md
No docstring available for TFHubertModel Methods: call
189_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/hubert.md
https://huggingface.co/docs/transformers/en/model_doc/hubert/#tfhubertforctc
.md
No docstring available for TFHubertForCTC Methods: call </tf> </frameworkcontent>
189_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/
.md
<!--Copyright 2024 The Qwen Team and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
190_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
190_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#overview
.md
Qwen2 is the new model series of large language models from the Qwen team. Previously, we released the Qwen2 series, including Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, Qwen2-72B, Qwen2-Audio, etc.
190_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#model-details
.md
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
190_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#usage-tips
.md
`Qwen2-7B` and `Qwen2-7B-Instruct` can be found on the [Huggingface Hub](https://huggingface.co/Qwen). In the following, we demonstrate how to use `Qwen2-7B-Instruct` for inference. Note that we use the ChatML format for dialog; in this demo we show how to leverage `apply_chat_template` for this purpose. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> device = "cuda" # the device to load the model onto
190_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#usage-tips
.md
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B-Instruct", device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct") >>> prompt = "Give me a short introduction to large language model." >>> messages = [{"role": "user", "content": prompt}] >>> text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) >>> model_inputs = tokenizer([text], return_tensors="pt").to(device)
190_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#usage-tips
.md
>>> model_inputs = tokenizer([text], return_tensors="pt").to(device) >>> generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True) >>> generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)] >>> response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ```
190_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
This is the configuration class to store the configuration of a [`Qwen2Model`]. It is used to instantiate a Qwen2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of Qwen2-7B-beta [Qwen/Qwen2-7B-beta](https://huggingface.co/Qwen/Qwen2-7B-beta). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
190_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 151936): Vocabulary size of the Qwen2 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Qwen2Model`] hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations.
190_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 22016): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer encoder. num_key_value_heads (`int`, *optional*, defaults to 32):
190_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
num_key_value_heads (`int`, *optional*, defaults to 32): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
190_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `32`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 32768):
190_4_4
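The mean-pooling construction mentioned above can be sketched for a single key projection weight; this is purely illustrative with hypothetical sizes, not a conversion utility from the library:

```python
import torch

hidden_size, num_attention_heads, num_key_value_heads = 4096, 32, 8
head_dim = hidden_size // num_attention_heads

# A multi-head key projection weight of shape (num_heads * head_dim, hidden_size)
k_proj = torch.randn(num_attention_heads * head_dim, hidden_size)

# Group the 32 key heads into 8 groups of 4 and mean-pool each group
groups = num_attention_heads // num_key_value_heads
pooled = k_proj.view(num_key_value_heads, groups, head_dim, hidden_size).mean(dim=1)
k_proj_gqa = pooled.reshape(num_key_value_heads * head_dim, hidden_size)
print(k_proj_gqa.shape)  # torch.Size([1024, 4096])
```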
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
max_position_embeddings (`int`, *optional*, defaults to 32768): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`):
190_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*):
190_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work on longer `max_position_embeddings`, we recommend you update this value accordingly. Expected contents: `rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation.
190_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining.
190_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining. `attention_factor` (`float`, *optional*): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the `factor` field to infer the suggested value. `beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
190_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
`beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32. `beta_slow` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (<
190_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
`short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
190_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `low_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE `high_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE use_sliding_window (`bool`, *optional*, defaults to `False`):
190_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
use_sliding_window (`bool`, *optional*, defaults to `False`): Whether to use sliding window attention. sliding_window (`int`, *optional*, defaults to 4096): Sliding window attention (SWA) window size. If not specified, will default to `4096`. max_window_layers (`int`, *optional*, defaults to 28): The number of layers that use SWA (Sliding Window Attention). The bottom layers use SWA while the top use full attention. attention_dropout (`float`, *optional*, defaults to 0.0):
190_4_13
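A hedged sketch of enabling sliding window attention through the options just described, using a deliberately tiny, randomly initialized model so it is cheap to instantiate; all values are illustrative:

```python
from transformers import Qwen2Config, Qwen2ForCausalLM

config = Qwen2Config(
    hidden_size=512,
    intermediate_size=1024,
    num_hidden_layers=8,
    num_attention_heads=8,
    num_key_value_heads=4,
    use_sliding_window=True,
    sliding_window=4096,   # SWA window size
    max_window_layers=6,   # bottom 6 layers use SWA, remaining layers use full attention
)
model = Qwen2ForCausalLM(config)  # randomly initialized
```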
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. ```python >>> from transformers import Qwen2Model, Qwen2Config
190_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2config
.md
>>> # Initializing a Qwen2 style configuration >>> configuration = Qwen2Config() >>> # Initializing a model from the Qwen2-7B style configuration >>> model = Qwen2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
190_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizer
.md
Construct a Qwen2 tokenizer. Based on byte-level Byte-Pair-Encoding. As with GPT2Tokenizer, this tokenizer has been trained to treat spaces like parts of the tokens, so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not: ```python >>> from transformers import Qwen2Tokenizer >>> tokenizer = Qwen2Tokenizer.from_pretrained("Qwen/Qwen-tokenizer") >>> tokenizer("Hello world")["input_ids"] [9707, 1879]
190_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizer
.md
>>> tokenizer(" Hello world")["input_ids"] [21927, 1879] ``` This is expected. You should not use GPT2Tokenizer instead, because of the different pretokenization rules. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`):
190_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizer
.md
Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*):
190_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizer
.md
token instead. bos_token (`str`, *optional*): The beginning of sequence token. Not applicable for this tokenizer. eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The token used for padding, for example when batching sequences of different lengths. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
190_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizer
.md
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`): Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process. Not applicable to this tokenizer, since tokenization does not add spaces. split_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the special tokens should be split during the tokenization process. The default behavior is
190_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizer
.md
Whether or not the special tokens should be split during the tokenization process. The default behavior is to not split special tokens. This means that if `<|endoftext|>` is the `eos_token`, then `tokenizer.tokenize("<|endoftext|>") = ['<|endoftext|>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<|endoftext|>")` will give `['<', '|', 'endo', 'ft', 'ext', '|', '>']`. This argument is only supported for `slow` tokenizers for the moment. Methods: save_vocabulary
190_5_5
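The `split_special_tokens` behavior described above can be checked directly; the tokenizer checkpoint follows the earlier example, and the split output shown is indicative only:

```python
from transformers import Qwen2Tokenizer

tokenizer = Qwen2Tokenizer.from_pretrained("Qwen/Qwen-tokenizer")
print(tokenizer.tokenize("<|endoftext|>"))
# ['<|endoftext|>']  -- kept whole by default

tokenizer_split = Qwen2Tokenizer.from_pretrained("Qwen/Qwen-tokenizer", split_special_tokens=True)
print(tokenizer_split.tokenize("<|endoftext|>"))
# e.g. ['<', '|', 'endo', 'ft', 'ext', '|', '>']  -- special token split into pieces
```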
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizerfast
.md
Construct a "fast" Qwen2 tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level Byte-Pair-Encoding. Same with GPT2Tokenizer, this tokenizer has been trained to treat spaces like parts of the tokens so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: ```python >>> from transformers import Qwen2TokenizerFast
190_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizerfast
.md
>>> tokenizer = Qwen2TokenizerFast.from_pretrained("Qwen/Qwen-tokenizer") >>> tokenizer("Hello world")["input_ids"] [9707, 1879]
190_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizerfast
.md
>>> tokenizer(" Hello world")["input_ids"] [21927, 1879] ``` This is expected. This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`, *optional*): Path to the vocabulary file. merges_file (`str`, *optional*): Path to the merges file. tokenizer_file (`str`, *optional*):
190_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizerfast
.md
Path to the vocabulary file. merges_file (`str`, *optional*): Path to the merges file. tokenizer_file (`str`, *optional*): Path to [tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that contains everything needed to load the tokenizer. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Not applicable to this tokenizer.
190_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2tokenizerfast
.md
token instead. Not applicable to this tokenizer. bos_token (`str`, *optional*): The beginning of sequence token. Not applicable for this tokenizer. eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The token used for padding, for example when batching sequences of different lengths.
190_6_4