Dataset schema: source (string, 470 distinct values) · url (string, 49–167 chars) · file_type (string, 1 distinct value) · chunk (string, 1–512 chars) · chunk_id (string, 5–9 chars)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
image_size (`int`, *optional*, defaults to 224):
    The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
    The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
    The number of input channels.
qkv_bias (`bool`, *optional*, defaults to `True`):
    Whether to add a bias to the queries, keys and values.
layerscale_value (`float`, *optional*, defaults to 1.0):
    Initial value to use for layer scale.
drop_path_rate (`float`, *optional*, defaults to 0.0):
365_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
Initial value to use for layer scale.
drop_path_rate (`float`, *optional*, defaults to 0.0):
    Stochastic depth rate per sample (when applied in the main path of residual layers).
use_swiglu_ffn (`bool`, *optional*, defaults to `False`):
    Whether to use the SwiGLU feedforward neural network.
num_register_tokens (`int`, *optional*, defaults to 4):
    Number of register tokens to use.
out_features (`List[str]`, *optional*):
365_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
Number of register tokens to use.
out_features (`List[str]`, *optional*):
    If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute.
out_indices (`List[int]`, *optional*):
365_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
same order as defined in the `stage_names` attribute.
out_indices (`List[int]`, *optional*):
    If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute.
apply_layernorm (`bool`, *optional*, defaults to `True`):
365_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
same order as defined in the `stage_names` attribute.
apply_layernorm (`bool`, *optional*, defaults to `True`):
    Whether to apply layer normalization to the feature maps in case the model is used as backbone.
reshape_hidden_states (`bool`, *optional*, defaults to `True`):
    Whether to reshape the feature maps to 4D tensors of shape `(batch_size, hidden_size, height, width)` in case the model is used as backbone. If `False`, the feature maps will be 3D tensors of shape `(batch_size, seq_len, hidden_size)`.
365_2_9
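The backbone-specific options above come into play when the model is used through the backbone API. Below is a minimal sketch, assuming `Dinov2WithRegistersBackbone` follows the usual Transformers backbone interface (the stage choices are illustrative):

```python
import torch
from transformers import Dinov2WithRegistersConfig, Dinov2WithRegistersBackbone

# Request two intermediate stages and keep 4D (batch, hidden, height, width) feature maps.
config = Dinov2WithRegistersConfig(
    out_features=["stage2", "stage4"],
    apply_layernorm=True,
    reshape_hidden_states=True,
)
backbone = Dinov2WithRegistersBackbone(config)

pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    outputs = backbone(pixel_values)
for name, feature_map in zip(config.out_features, outputs.feature_maps):
    print(name, feature_map.shape)
```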
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
seq_len, hidden_size)`.

Example:

```python
>>> from transformers import Dinov2WithRegistersConfig, Dinov2WithRegistersModel
```
365_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersconfig
.md
```python
>>> # Initializing a Dinov2WithRegisters base style configuration
>>> configuration = Dinov2WithRegistersConfig()

>>> # Initializing a model (with random weights) from the base style configuration
>>> model = Dinov2WithRegistersModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
365_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersmodel
.md
The bare Dinov2WithRegisters Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`Dinov2WithRegistersConfig`]): Model configuration class with all the parameters of the model.
365_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersmodel
.md
behavior.

Parameters:
    config ([`Dinov2WithRegistersConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
365_3_1
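For a quick smoke test of the bare model, the sketch below runs a random image through `Dinov2WithRegistersModel` built from the default configuration. With the documented defaults (`image_size=224`, `patch_size=16`, 4 register tokens), the output sequence is expected to hold 1 [CLS] + 4 register + 196 patch tokens, though the exact token layout is an assumption here:

```python
import torch
from transformers import Dinov2WithRegistersConfig, Dinov2WithRegistersModel

model = Dinov2WithRegistersModel(Dinov2WithRegistersConfig())

pixel_values = torch.randn(1, 3, 224, 224)  # random stand-in for a real image
with torch.no_grad():
    outputs = model(pixel_values)
print(outputs.last_hidden_state.shape)  # expected: torch.Size([1, 201, 768])
```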
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersforimageclassification
.md
Dinov2WithRegisters Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
365_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2_with_registers.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers/#dinov2withregistersforimageclassification
.md
behavior.

Parameters:
    config ([`Dinov2WithRegistersConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
365_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
366_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
366_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#overview
.md
The ViTDet model was proposed in [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527) by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He. ViTDet leverages the plain [Vision Transformer](vit) for the task of object detection. The abstract from the paper is the following:
366_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#overview
.md
*We explore the plain, non-hierarchical Vision Transformer (ViT) as a backbone network for object detection. This design enables the original ViT architecture to be fine-tuned for object detection without needing to redesign a hierarchical backbone for pre-training. With minimal adaptations for fine-tuning, our plain-backbone detector can achieve competitive results. Surprisingly, we observe: (i) it is sufficient to build a simple feature pyramid from a single-scale feature map (without the common FPN
366_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#overview
.md
we observe: (i) it is sufficient to build a simple feature pyramid from a single-scale feature map (without the common FPN design) and (ii) it is sufficient to use window attention (without shifting) aided with very few cross-window propagation blocks. With plain ViT backbones pre-trained as Masked Autoencoders (MAE), our detector, named ViTDet, can compete with the previous leading methods that were all based on hierarchical backbones, reaching up to 61.3 AP_box on the COCO dataset using only ImageNet-1K
366_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#overview
.md
methods that were all based on hierarchical backbones, reaching up to 61.3 AP_box on the COCO dataset using only ImageNet-1K pre-training. We hope our study will draw attention to research on plain-backbone detectors.*
366_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#overview
.md
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/detectron2/tree/main/projects/ViTDet).

Tips:

- At the moment, only the backbone is available.
366_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
This is the configuration class to store the configuration of a [`VitDetModel`]. It is used to instantiate a VitDet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the VitDet [google/vitdet-base-patch16-224](https://huggingface.co/google/vitdet-base-patch16-224) architecture.
366_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
[google/vitdet-base-patch16-224](https://huggingface.co/google/vitdet-base-patch16-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
hidden_size (`int`, *optional*, defaults to 768):
    Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
    Number of hidden layers in the Transformer encoder.
366_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
num_hidden_layers (`int`, *optional*, defaults to 12):
    Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
    Number of attention heads for each attention layer in the Transformer encoder.
mlp_ratio (`int`, *optional*, defaults to 4):
    Ratio of mlp hidden dim to embedding dim.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
    The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
366_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
dropout_prob (`float`, *optional*, defaults to 0.0):
    The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
initializer_range (`float`, *optional*, defaults to 0.02):
    The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
366_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
    The epsilon used by the layer normalization layers.
image_size (`int`, *optional*, defaults to 224):
    The size (resolution) of each image.
pretrain_image_size (`int`, *optional*, defaults to 224):
    The size (resolution) of each image during pretraining.
patch_size (`int`, *optional*, defaults to 16):
    The size (resolution) of each patch.
366_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
patch_size (`int`, *optional*, defaults to 16):
    The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
    The number of input channels.
qkv_bias (`bool`, *optional*, defaults to `True`):
    Whether to add a bias to the queries, keys and values.
drop_path_rate (`float`, *optional*, defaults to 0.0):
    Stochastic depth rate.
window_block_indices (`List[int]`, *optional*, defaults to `[]`):
366_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
Stochastic depth rate.
window_block_indices (`List[int]`, *optional*, defaults to `[]`):
    List of indices of blocks that should have window attention instead of regular global self-attention.
residual_block_indices (`List[int]`, *optional*, defaults to `[]`):
    List of indices of blocks that should have an extra residual block after the MLP.
use_absolute_position_embeddings (`bool`, *optional*, defaults to `True`):
    Whether to add absolute position embeddings to the patch embeddings.
366_2_6
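As a concrete illustration of `window_block_indices`, the sketch below builds a small ViTDet-style configuration in which most blocks use window attention and a few evenly spaced blocks keep global attention (the exact indices here are illustrative, not taken from any released checkpoint):

```python
from transformers import VitDetConfig, VitDetModel

# Keep global attention in four evenly spaced blocks, window attention elsewhere.
global_indices = {2, 5, 8, 11}
config = VitDetConfig(
    num_hidden_layers=12,
    window_size=14,
    window_block_indices=[i for i in range(12) if i not in global_indices],
    use_relative_position_embeddings=True,
)
model = VitDetModel(config)
```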
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
Whether to add absolute position embeddings to the patch embeddings.
use_relative_position_embeddings (`bool`, *optional*, defaults to `False`):
    Whether to add relative position embeddings to the attention maps.
window_size (`int`, *optional*, defaults to 0):
    The size of the attention window.
out_features (`List[str]`, *optional*):
    If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
366_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute.
out_indices (`List[int]`, *optional*):
    If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
366_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute.

Example:

```python
>>> from transformers import VitDetConfig, VitDetModel
```
366_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetconfig
.md
```python
>>> # Initializing a VitDet configuration
>>> configuration = VitDetConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = VitDetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
366_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetmodel
.md
The bare VitDet Transformer model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`VitDetConfig`]): Model configuration class with all the parameters of the model.
366_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vitdet.md
https://huggingface.co/docs/transformers/en/model_doc/vitdet/#vitdetmodel
.md
behavior.

Parameters:
    config ([`VitDetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
366_3_1
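A quick way to see the backbone's output format is to run a random image through `VitDetModel` built from the default configuration; unlike a plain ViT, the hidden states keep a 2D spatial layout. The shape below is what the default 224-pixel image and 16-pixel patches would suggest:

```python
import torch
from transformers import VitDetConfig, VitDetModel

model = VitDetModel(VitDetConfig())

pixel_values = torch.randn(1, 3, 224, 224)  # random stand-in for a real image
with torch.no_grad():
    outputs = model(pixel_values)
print(outputs.last_hidden_state.shape)  # expected: torch.Size([1, 768, 14, 14])
```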
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
367_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
367_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#overview
.md
The Speech2Text model was proposed in [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4 before they are
367_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#overview
.md
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4 before they are fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST: [LibriSpeech](http://www.openslr.org/12), [CoVoST 2](https://github.com/facebookresearch/covost), [MuST-C](https://ict.fbk.eu/must-c/).
367_1_1
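To make the 3/4 reduction concrete: the downsampler is a stack of strided 1D convolutions (two by default), each of which roughly halves the time dimension, so an input of T frames comes out at about T/4. A small sketch of that arithmetic, ignoring the exact kernel and padding details:

```python
def downsampled_length(num_frames: int, num_conv_layers: int = 2, stride: int = 2) -> int:
    """Approximate output length after the convolutional downsampler."""
    for _ in range(num_conv_layers):
        num_frames = (num_frames + 1) // stride  # each stride-2 conv ~halves the length
    return num_frames

print(downsampled_length(1000))  # -> 250, i.e. a reduction by 3/4
```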
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#overview
.md
This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
367_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The `generate()` method can be used for inference. The [`Speech2TextFeatureExtractor`] class is responsible for extracting the log-mel filter-bank features. The [`Speech2TextProcessor`] wraps [`Speech2TextFeatureExtractor`] and
367_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
features. The [`Speech2TextProcessor`] wraps [`Speech2TextFeatureExtractor`] and [`Speech2TextTokenizer`] into a single instance to both extract the input features and decode the predicted token ids. The feature extractor depends on `torchaudio` and the tokenizer depends on `sentencepiece` so be sure to install those packages before running the examples. You could either install those as extra speech dependencies with
367_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
install those packages before running the examples. You could either install those as extra speech dependencies with `pip install "transformers[speech,sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. Also `torchaudio` requires the development version of the [libsndfile](http://www.mega-nerd.com/libsndfile/) package which can be installed via a system package manager. On Ubuntu it can be installed as follows: `apt install libsndfile1-dev`
367_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
be installed as follows: `apt install libsndfile1-dev`

- ASR and Speech Translation

```python
>>> import torch
>>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
>>> from datasets import load_dataset
```
367_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
>>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr") >>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt") >>> generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
367_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
```python
>>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> transcription
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
```

- Multilingual speech translation

For multilingual speech translation models, `eos_token_id` is used as the `decoder_start_token_id` and the target language id is forced as the first generated token. To force the target language id as the first
367_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate()` method. The following example shows how to translate English speech to French text using the *facebook/s2t-medium-mustc-multilingual-st* checkpoint.

```python
>>> import torch
>>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
>>> from datasets import load_dataset
```
367_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
>>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st") >>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
367_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
```python
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

>>> inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
>>> generated_ids = model.generate(
...     inputs["input_features"],
...     attention_mask=inputs["attention_mask"],
...     forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"],
... )
```
367_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#inference
.md
```python
>>> translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> translation
["(Vidéo) Si M. Kilder est l'apossible des classes moyennes, et nous sommes heureux d'être accueillis dans son évangile."]
```

See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for Speech2Text checkpoints.
367_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
This is the configuration class to store the configuration of a [`Speech2TextModel`]. It is used to instantiate a Speech2Text model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Speech2Text [facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) architecture.
367_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
[facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
vocab_size (`int`, *optional*, defaults to 10000):
    Vocabulary size of the Speech2Text model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Speech2TextModel`]
367_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
the `inputs_ids` passed when calling [`Speech2TextModel`]
encoder_layers (`int`, *optional*, defaults to 12):
    Number of encoder layers.
encoder_ffn_dim (`int`, *optional*, defaults to 2048):
    Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
encoder_attention_heads (`int`, *optional*, defaults to 4):
    Number of attention heads for each attention layer in the Transformer encoder.
decoder_layers (`int`, *optional*, defaults to 6):
    Number of decoder layers.
367_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
decoder_layers (`int`, *optional*, defaults to 6):
    Number of decoder layers.
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
    Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
decoder_attention_heads (`int`, *optional*, defaults to 4):
    Number of attention heads for each attention layer in the Transformer decoder.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
367_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
    The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
decoder_layerdrop (`float`, *optional*, defaults to 0.0):
    The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
use_cache (`bool`, *optional*, defaults to `True`):
    Whether the model should return the last key/values attentions (not used by all models).
367_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
Whether the model should return the last key/values attentions (not used by all models).
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
    Whether the model is set up as an encoder-decoder architecture for sequence-to-sequence tasks.
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
    The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
367_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
`"relu"`, `"silu"` and `"gelu_new"` are supported. d_model (`int`, *optional*, defaults to 256): Dimensionality of the layers and the pooler layer. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0):
367_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
    The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
    The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
decoder_start_token_id (`int`, *optional*, defaults to 2):
    The initial token ID of the decoder when decoding sequences.
scale_embedding (`bool`, *optional*, defaults to `True`):
367_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
The initial token ID of the decoder when decoding sequences.
scale_embedding (`bool`, *optional*, defaults to `True`):
    Whether the embeddings are scaled by the square root of `d_model`.
pad_token_id (`int`, *optional*, defaults to 1):
    Padding token id.
bos_token_id (`int`, *optional*, defaults to 0):
    The id of the beginning-of-sequence token.
eos_token_id (`int`, *optional*, defaults to 2):
    The id of the end-of-sequence token.
max_source_positions (`int`, *optional*, defaults to 6000):
367_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
The id of the end-of-sequence token.
max_source_positions (`int`, *optional*, defaults to 6000):
    The maximum sequence length of log-mel filter-bank features that this model might ever be used with.
max_target_positions (`int`, *optional*, defaults to 1024):
    The maximum sequence length that this model might ever be used with. Typically, set this to something large just in case (e.g., 512 or 1024 or 2048).
num_conv_layers (`int`, *optional*, defaults to 2):
367_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
just in case (e.g., 512 or 1024 or 2048).
num_conv_layers (`int`, *optional*, defaults to 2):
    Number of 1D convolutional layers in the conv module.
conv_kernel_sizes (`Tuple[int]`, *optional*, defaults to `(5, 5)`):
    A tuple of integers defining the kernel size of each 1D convolutional layer in the conv module. The length of `conv_kernel_sizes` has to match `num_conv_layers`.
conv_channels (`int`, *optional*, defaults to 1024):
367_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
of `conv_kernel_sizes` has to match `num_conv_layers`.
conv_channels (`int`, *optional*, defaults to 1024):
    An integer defining the number of output channels of each convolution layer except the final one in the conv module.
input_feat_per_channel (`int`, *optional*, defaults to 80):
    An integer specifying the size of the feature vector. This is also the dimension of the log-mel filter-bank features.
input_channels (`int`, *optional*, defaults to 1):
367_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
features.
input_channels (`int`, *optional*, defaults to 1):
    An integer specifying the number of input channels of the input feature vector.

Example:

```python
>>> from transformers import Speech2TextConfig, Speech2TextModel
```
367_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textconfig
.md
```python
>>> # Initializing a Speech2Text s2t_transformer_s style configuration
>>> configuration = Speech2TextConfig()

>>> # Initializing a model (with random weights) from the s2t_transformer_s style configuration
>>> model = Speech2TextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
367_3_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2texttokenizer
.md
Constructs a Speech2Text tokenizer. This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.

Args:
vocab_file (`str`):
    File containing the vocabulary.
spm_file (`str`):
    Path to the [SentencePiece](https://github.com/google/sentencepiece) model file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
    The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
367_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2texttokenizer
.md
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
    The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
    The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
    The token used for padding, for example when batching sequences of different lengths.
do_upper_case (`bool`, *optional*, defaults to `False`):
367_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2texttokenizer
.md
do_upper_case (`bool`, *optional*, defaults to `False`):
    Whether or not to uppercase the output when decoding.
do_lower_case (`bool`, *optional*, defaults to `False`):
    Whether or not to lowercase the input when tokenizing.
tgt_lang (`str`, *optional*):
    A string representing the target language.
sp_model_kwargs (`dict`, *optional*):
    Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
367_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2texttokenizer
.md
sp_model_kwargs (`dict`, *optional*):
    Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set:

    - `enable_sampling`: Enable subword regularization.
    - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
      - `nbest_size = {0,1}`: No sampling is performed.
      - `nbest_size > 1`: samples from the nbest_size results.
367_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2texttokenizer
.md
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.

**kwargs
    Additional keyword arguments passed along to [`PreTrainedTokenizer`]

Methods: build_inputs_with_special_tokens
367_4_4
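As a sketch of how `sp_model_kwargs` might be used to turn on subword regularization, the settings below are illustrative rather than recommended defaults:

```python
from transformers import Speech2TextTokenizer

tokenizer = Speech2TextTokenizer.from_pretrained(
    "facebook/s2t-small-librispeech-asr",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# With sampling enabled, repeated tokenizations of the same text may differ.
print(tokenizer.tokenize("hello world"))
```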
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2texttokenizer
.md
**kwargs
    Additional keyword arguments passed along to [`PreTrainedTokenizer`]

Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
367_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textfeatureextractor
.md
Constructs a Speech2Text feature extractor. This feature extractor inherits from [`SequenceFeatureExtractor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. This class extracts mel-filter bank features from raw speech using TorchAudio if installed or using numpy otherwise, and applies utterance-level cepstral mean and variance normalization to the extracted features.

Args:
feature_size (`int`, *optional*, defaults to 80):
367_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textfeatureextractor
.md
Args:
feature_size (`int`, *optional*, defaults to 80):
    The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 16000):
    The sampling rate at which the audio files should be digitalized expressed in hertz (Hz).
num_mel_bins (`int`, *optional*, defaults to 80):
    Number of Mel-frequency bins.
padding_value (`float`, *optional*, defaults to 0.0):
    The value that is used to fill the padding vectors.
do_ceptral_normalize (`bool`, *optional*, defaults to `True`):
367_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textfeatureextractor
.md
The value that is used to fill the padding vectors.
do_ceptral_normalize (`bool`, *optional*, defaults to `True`):
    Whether or not to apply utterance-level cepstral mean and variance normalization to extracted features.
normalize_means (`bool`, *optional*, defaults to `True`):
    Whether or not to zero-mean normalize the extracted features.
normalize_vars (`bool`, *optional*, defaults to `True`):
    Whether or not to unit-variance normalize the extracted features.

Methods: __call__
367_5_2
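Utterance-level cepstral mean and variance normalization (CMVN) simply standardizes each feature dimension over the frames of a single utterance. A minimal numpy sketch of the idea (the epsilon guard is an assumption; the library's exact implementation may differ):

```python
import numpy as np

def utterance_cmvn(features: np.ndarray, normalize_means: bool = True, normalize_vars: bool = True) -> np.ndarray:
    """Standardize a (num_frames, feature_size) array over the time axis."""
    if normalize_means:
        features = features - features.mean(axis=0)
    if normalize_vars:
        features = features / (features.std(axis=0) + 1e-10)
    return features

print(utterance_cmvn(np.random.randn(100, 80)).shape)  # (100, 80)
```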
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textprocessor
.md
Constructs a Speech2Text processor which wraps a Speech2Text feature extractor and a Speech2Text tokenizer into a single processor. [`Speech2TextProcessor`] offers all the functionalities of [`Speech2TextFeatureExtractor`] and [`Speech2TextTokenizer`]. See the [`~Speech2TextProcessor.__call__`] and [`~Speech2TextProcessor.decode`] for more information.

Args:
feature_extractor (`Speech2TextFeatureExtractor`):
    An instance of [`Speech2TextFeatureExtractor`]. The feature extractor is a required input.
367_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textprocessor
.md
An instance of [`Speech2TextFeatureExtractor`]. The feature extractor is a required input.
tokenizer (`Speech2TextTokenizer`):
    An instance of [`Speech2TextTokenizer`]. The tokenizer is a required input.

Methods: __call__
- from_pretrained
- save_pretrained
- batch_decode
- decode

<frameworkcontent>
<pt>
367_6_1
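A short sketch of wiring the two components into a processor by hand, loading both from the same checkpoint (the usual pattern):

```python
from transformers import Speech2TextFeatureExtractor, Speech2TextProcessor, Speech2TextTokenizer

checkpoint = "facebook/s2t-small-librispeech-asr"
feature_extractor = Speech2TextFeatureExtractor.from_pretrained(checkpoint)
tokenizer = Speech2TextTokenizer.from_pretrained(checkpoint)
processor = Speech2TextProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```

In practice `Speech2TextProcessor.from_pretrained(checkpoint)` achieves the same in one call.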
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textmodel
.md
The bare Speech2Text Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
367_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`Speech2TextConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
367_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
367_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textforconditionalgeneration
.md
The Speech2Text Model with a language modeling head, which can be used for automatic speech recognition and speech translation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
367_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textforconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`Speech2TextConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
367_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#speech2textforconditionalgeneration
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward

</pt>
<tf>
367_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#tfspeech2textmodel
.md
No docstring available for TFSpeech2TextModel

Methods: call
367_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech_to_text.md
https://huggingface.co/docs/transformers/en/model_doc/speech_to_text/#tfspeech2textforconditionalgeneration
.md
No docstring available for TFSpeech2TextForConditionalGeneration

Methods: call

</tf>
</frameworkcontent>
367_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/
.md
<!--Copyright 2024 Kyutai and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
368_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
368_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#overview
.md
Helium was proposed in [Announcing Helium-1 Preview](https://kyutai.org/2025/01/13/helium.html) by the Kyutai Team. Helium-1 preview is a lightweight language model with 2B parameters, targeting edge and mobile devices. It supports the following languages: English, French, German, Italian, Portuguese, Spanish.

- **Developed by:** Kyutai
- **Model type:** Large Language Model
- **Language(s) (NLP):** English, French, German, Italian, Portuguese, Spanish
- **License:** CC-BY 4.0
368_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#evaluation
.md
<!-- This section describes the evaluation protocols and provides the results. -->
368_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#testing-data
.md
<!-- This should link to a Dataset Card if possible. --> The model was evaluated on MMLU, TriviaQA, NaturalQuestions, ARC Easy & Challenge, Open Book QA, Common Sense QA, Physical Interaction QA, Social Interaction QA, HellaSwag, WinoGrande, Multilingual Knowledge QA, FLORES 200.
368_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#metrics
.md
<!-- These are the evaluation metrics being used, ideally with a description of why. --> We report accuracy on MMLU, ARC, OBQA, CSQA, PIQA, SIQA, HellaSwag, WinoGrande. We report exact match on TriviaQA, NQ and MKQA. We report BLEU on FLORES.
368_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#english-results
.md
| Benchmark | Helium-1 Preview | HF SmolLM2 (1.7B) | Gemma-2 (2.6B) | Llama-3.2 (3B) | Qwen2.5 (1.5B) |
|--------------|--------|--------|--------|--------|--------|
| MMLU | 51.2 | 50.4 | 53.1 | 56.6 | 61.0 |
| NQ | 17.3 | 15.1 | 17.7 | 22.0 | 13.1 |
| TQA | 47.9 | 45.4 | 49.9 | 53.6 | 35.9 |
| ARC E | 80.9 | 81.8 | 81.1 | 84.6 | 89.7 |
| ARC C | 62.7 | 64.7 | 66.0 | 69.0 | 77.2 |
| OBQA | 63.8 | 61.4 | 64.6 | 68.4 | 73.8 |
| CSQA | 65.6 | 59.0 | 64.4 | 65.4 | 72.4 |
368_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#english-results
.md
| OBQA | 63.8 | 61.4 | 64.6 | 68.4 | 73.8 |
| CSQA | 65.6 | 59.0 | 64.4 | 65.4 | 72.4 |
| PIQA | 77.4 | 77.7 | 79.8 | 78.9 | 76.0 |
| SIQA | 64.4 | 57.5 | 61.9 | 63.8 | 68.7 |
| HS | 69.7 | 73.2 | 74.7 | 76.9 | 67.5 |
| WG | 66.5 | 65.6 | 71.2 | 72.0 | 64.8 |
| Average | 60.7 | 59.3 | 62.2 | 64.7 | 63.6 |
368_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#multilingual-results
.md
| Language | Benchmark | Helium-1 Preview | HF SmolLM2 (1.7B) | Gemma-2 (2.6B) | Llama-3.2 (3B) | Qwen2.5 (1.5B) |
|-----|--------------|--------|--------|--------|--------|--------|
| German | MMLU | 45.6 | 35.3 | 45.0 | 47.5 | 49.5 |
| | ARC C | 56.7 | 38.4 | 54.7 | 58.3 | 60.2 |
| | HS | 53.5 | 33.9 | 53.4 | 53.7 | 42.8 |
| | MKQA | 16.1 | 7.1 | 18.9 | 20.2 | 10.4 |
| Spanish | MMLU | 46.5 | 38.9 | 46.2 | 49.6 | 52.8 |
| | ARC C | 58.3 | 43.2 | 58.8 | 60.0 | 68.1 |
368_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#multilingual-results
.md
| Spanish | MMLU | 46.5 | 38.9 | 46.2 | 49.6 | 52.8 |
| | ARC C | 58.3 | 43.2 | 58.8 | 60.0 | 68.1 |
| | HS | 58.6 | 40.8 | 60.5 | 61.1 | 51.4 |
| | MKQA | 16.0 | 7.9 | 18.5 | 20.6 | 10.6 |
368_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#model-architecture-and-objective
.md
| Hyperparameter | Value |
|--------------|--------|
| Layers | 24 |
| Heads | 20 |
| Model dimension | 2560 |
| MLP dimension | 7040 |
| Context size | 4096 |
| Theta RoPE | 100,000 |

Tips:

- This model was contributed by [Laurent Mazare](https://huggingface.co/lmz).
368_7_0
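The table above maps directly onto the configuration fields. A sketch of that mapping, assuming the RoPE base is exposed as `rope_theta` (the usual name in Transformers configs):

```python
from transformers import HeliumConfig

config = HeliumConfig(
    num_hidden_layers=24,          # Layers
    num_attention_heads=20,        # Heads
    hidden_size=2560,              # Model dimension
    intermediate_size=7040,        # MLP dimension
    max_position_embeddings=4096,  # Context size
    rope_theta=100000.0,           # Theta RoPE
)
```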
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#usage-tips
.md
`Helium` can be found on the [Hugging Face Hub](https://huggingface.co/collections/kyutai/helium-1-preview). In the following, we demonstrate how to use `helium-1-preview` for inference.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> device = "cuda"  # the device to load the model onto

>>> model = AutoModelForCausalLM.from_pretrained("helium-1-preview", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("helium-1-preview")
```
368_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#usage-tips
.md
```python
>>> prompt = "Give me a short introduction to large language model."
>>> messages = [{"role": "user", "content": prompt}]
>>> text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
>>> model_inputs = tokenizer([text], return_tensors="pt").to(device)
>>> generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True)
>>> generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
```
368_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#usage-tips
.md
```python
>>> generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
>>> response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
368_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
This is the configuration class to store the configuration of a [`HeliumModel`]. It is used to instantiate a Helium model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Helium 2b model, e.g. [kyutai/helium-2b](https://huggingface.co/kyutai/helium-2b). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
368_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
vocab_size (`int`, *optional*, defaults to 48000):
    Vocabulary size of the Helium model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`HeliumModel`]
hidden_size (`int`, *optional*, defaults to 2560):
    Dimension of the hidden representations.
368_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
hidden_size (`int`, *optional*, defaults to 2560):
    Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 7040):
    Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 24):
    Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 20):
    Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*, defaults to 20):
368_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
num_key_value_heads (`int`, *optional*, defaults to 20):
    This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
368_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`.
head_dim (`int`, *optional*, defaults to 128):
    The attention head dimension.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
368_9_4
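The meanpooling conversion described above can be sketched in a few lines; this is an illustrative helper, assuming heads are laid out contiguously along the first dimension of the projection weight (actual checkpoint layouts may differ):

```python
import torch

def mean_pool_kv_heads(weight: torch.Tensor, num_heads: int, num_kv_heads: int) -> torch.Tensor:
    """Convert a (num_heads * head_dim, hidden_size) K/V projection to GQA by mean-pooling head groups."""
    head_dim = weight.shape[0] // num_heads
    grouped = weight.view(num_kv_heads, num_heads // num_kv_heads, head_dim, weight.shape[1])
    return grouped.mean(dim=1).reshape(num_kv_heads * head_dim, weight.shape[1])

# Example: 20 heads of dim 128 pooled down to 5 KV heads.
w = torch.randn(20 * 128, 2560)
print(mean_pool_kv_heads(w, num_heads=20, num_kv_heads=5).shape)  # torch.Size([640, 2560])
```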
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
The attention head dimension.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
    The legacy activation function. It is overwritten by the `hidden_activation`.
attention_dropout (`float`, *optional*, defaults to 0.0):
    The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 4096):
    The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
368_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/helium.md
https://huggingface.co/docs/transformers/en/model_doc/helium/#heliumconfig
.md
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
    The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-08):
    The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
    Whether or not the model should return the last key/values attentions (not used by all models). Only
368_9_6