/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#resources
.md
- Demo notebooks regarding inference with Grounding DINO as well as combining it with [SAM](sam) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Grounding%20DINO). 🌎
356_4_1
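For quick reference, here is a minimal zero-shot detection sketch in the spirit of those notebooks. It assumes the `IDEA-Research/grounding-dino-tiny` checkpoint and the thresholds shown; note that the post-processing argument names (e.g. `box_threshold` vs. `threshold`) have shifted slightly across transformers releases.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Text queries are lowercased, and each query ends with a dot.
text = "a cat. a remote control."

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],  # (height, width) of the original image
)
print(results[0]["boxes"], results[0]["labels"])
```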
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
Constructs a Grounding DINO image processor. Args: format (`str`, *optional*, defaults to `AnnotationFormat.COCO_DETECTION`): Data format of the annotations. One of "coco_detection" or "coco_panoptic". do_resize (`bool`, *optional*, defaults to `True`): Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method.
356_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`): Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter in the `preprocess` method. Available options are: - `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`. Do NOT keep the aspect ratio.
356_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`. Do NOT keep the aspect ratio. - `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting the aspect ratio and keeping the shortest edge less or equal to `shortest_edge` and the longest edge less or equal to `longest_edge`. - `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
356_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to `max_width`. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. do_rescale (`bool`, *optional*, defaults to `True`): Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
356_5_3
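A short sketch of the three `size` options described above (constructor arguments only; the same dicts can be passed to `preprocess`):

```python
from transformers import GroundingDinoImageProcessor

# Default: keep the aspect ratio, shortest edge 800, longest edge capped at 1333
ip_default = GroundingDinoImageProcessor(size={"shortest_edge": 800, "longest_edge": 1333})

# Exact (height, width); the aspect ratio is NOT preserved
ip_exact = GroundingDinoImageProcessor(size={"height": 600, "width": 800})

# Upper bounds only; the aspect ratio is preserved
ip_bounded = GroundingDinoImageProcessor(size={"max_height": 800, "max_width": 800})
```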
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method.
356_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`): Mean values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
356_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`): Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_annotations (`bool`, *optional*, defaults to `True`):
356_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
do_convert_annotations (`bool`, *optional*, defaults to `True`): Controls whether to convert the annotations to the format expected by the DETR model. Converts the bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`. Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method. do_pad (`bool`, *optional*, defaults to `True`): Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
356_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess` method. If `True`, padding will be applied to the bottom and right of the image with zeros. If `pad_size` is provided, the image will be padded to the specified dimensions. Otherwise, the image will be padded to the maximum height and width of the batch. pad_size (`Dict[str, int]`, *optional*): The size `{"height": int, "width": int}` to pad the images to. Must be larger than any image size
356_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
.md
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest height and width in the batch. Methods: preprocess - post_process_object_detection
356_5_9
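To illustrate the padding behavior, here is a hedged sketch with two dummy images of different sizes; a fixed `pad_size` gives every batch a static shape, which can help with compiled or exported models:

```python
import numpy as np
from transformers import GroundingDinoImageProcessor

# a hypothetical batch of two differently sized images
images = [
    np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8),
    np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8),
]

# Without pad_size, the batch is padded to its own max height/width;
# with pad_size, everything is padded to the given dimensions.
image_processor = GroundingDinoImageProcessor(
    do_resize=False, do_pad=True, pad_size={"height": 512, "width": 640}
)
batch = image_processor(images=images, return_tensors="pt")
print(batch.pixel_values.shape)  # torch.Size([2, 3, 512, 640])
```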
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoprocessor
.md
Constructs a Grounding DINO processor which wraps a Deformable DETR image processor and a BERT tokenizer into a single processor. [`GroundingDinoProcessor`] offers all the functionalities of [`GroundingDinoImageProcessor`] and [`AutoTokenizer`]. See the docstring of [`~GroundingDinoProcessor.__call__`] and [`~GroundingDinoProcessor.decode`] for more information. Args: image_processor (`GroundingDinoImageProcessor`): An instance of [`GroundingDinoImageProcessor`]. The image processor is a required input.
356_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoprocessor
.md
An instance of [`GroundingDinoImageProcessor`]. The image processor is a required input. tokenizer (`AutoTokenizer`): An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input. Methods: post_process_grounded_object_detection
356_6_1
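A minimal construction sketch; Grounding DINO checkpoints use a BERT text backbone, so a `bert-base-uncased` tokenizer is assumed here:

```python
from transformers import AutoTokenizer, GroundingDinoImageProcessor, GroundingDinoProcessor

image_processor = GroundingDinoImageProcessor()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = GroundingDinoProcessor(image_processor=image_processor, tokenizer=tokenizer)

# In practice, the pretrained processor is usually loaded directly:
# processor = GroundingDinoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
```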
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
This is the configuration class to store the configuration of a [`GroundingDinoModel`]. It is used to instantiate a Grounding DINO model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Grounding DINO [IDEA-Research/grounding-dino-tiny](https://huggingface.co/IDEA-Research/grounding-dino-tiny) architecture.
356_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
[IDEA-Research/grounding-dino-tiny](https://huggingface.co/IDEA-Research/grounding-dino-tiny) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `ResNetConfig()`): The configuration of the backbone model. backbone (`str`, *optional*):
356_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
The configuration of the backbone model. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
356_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`): Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers library. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
356_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `BertConfig`): The config object or dictionary of the text backbone. num_queries (`int`, *optional*, defaults to 900): Number of object queries, i.e. detection slots. This is the maximal number of objects [`GroundingDinoModel`] can detect in a single image. encoder_layers (`int`, *optional*, defaults to 6): Number of encoder layers.
356_7_4
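For instance, a hedged sketch of trading detection slots for decoder compute via `num_queries` (all other arguments keep their defaults):

```python
from transformers import GroundingDinoConfig, GroundingDinoForObjectDetection

# 300 detection slots instead of the default 900; fewer queries means
# fewer candidate detections per image but a lighter decoder.
config = GroundingDinoConfig(num_queries=300)
model = GroundingDinoForObjectDetection(config)  # randomly initialized
print(model.config.num_queries)  # 300
```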
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
encoder_layers (`int`, *optional*, defaults to 6): Number of encoder layers. encoder_ffn_dim (`int`, *optional*, defaults to 2048): Dimension of the "intermediate" (often named feed-forward) layer in encoder. encoder_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder. decoder_layers (`int`, *optional*, defaults to 6): Number of decoder layers. decoder_ffn_dim (`int`, *optional*, defaults to 2048):
356_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
Number of decoder layers. decoder_ffn_dim (`int`, *optional*, defaults to 2048): Dimension of the "intermediate" (often named feed-forward) layer in decoder. decoder_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer decoder. is_encoder_decoder (`bool`, *optional*, defaults to `True`): Whether the model is used as an encoder/decoder or not. activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
356_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
activation_function (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. d_model (`int`, *optional*, defaults to 256): Dimension of the layers. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0):
356_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. auxiliary_loss (`bool`, *optional*, defaults to `False`): Whether auxiliary decoding losses (loss at each decoder layer) are to be used. position_embedding_type (`str`, *optional*, defaults to `"sine"`):
356_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
position_embedding_type (`str`, *optional*, defaults to `"sine"`): Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`. num_feature_levels (`int`, *optional*, defaults to 4): The number of input feature levels. encoder_n_points (`int`, *optional*, defaults to 4): The number of sampled keys in each feature level for each attention head in the encoder. decoder_n_points (`int`, *optional*, defaults to 4):
356_7_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
decoder_n_points (`int`, *optional*, defaults to 4): The number of sampled keys in each feature level for each attention head in the decoder. two_stage (`bool`, *optional*, defaults to `True`): Whether to apply a two-stage deformable DETR, where the region proposals are generated by a variant of Grounding DINO and further fed into the decoder for iterative bounding box refinement. class_cost (`float`, *optional*, defaults to 1.0):
356_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
class_cost (`float`, *optional*, defaults to 1.0): Relative weight of the classification error in the Hungarian matching cost. bbox_cost (`float`, *optional*, defaults to 5.0): Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost. giou_cost (`float`, *optional*, defaults to 2.0): Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost. bbox_loss_coefficient (`float`, *optional*, defaults to 5.0):
356_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
bbox_loss_coefficient (`float`, *optional*, defaults to 5.0): Relative weight of the L1 bounding box loss in the object detection loss. giou_loss_coefficient (`float`, *optional*, defaults to 2.0): Relative weight of the generalized IoU loss in the object detection loss. focal_alpha (`float`, *optional*, defaults to 0.25): Alpha parameter in the focal loss. disable_custom_kernels (`bool`, *optional*, defaults to `False`):
356_7_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
Alpha parameter in the focal loss. disable_custom_kernels (`bool`, *optional*, defaults to `False`): Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom kernels are not supported by PyTorch ONNX export. max_text_len (`int`, *optional*, defaults to 256): The maximum length of the text input. text_enhancer_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the text enhancer. fusion_droppath (`float`, *optional*, defaults to 0.1):
356_7_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
The dropout ratio for the text enhancer. fusion_droppath (`float`, *optional*, defaults to 0.1): The droppath ratio for the fusion module. fusion_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the fusion module. embedding_init_target (`bool`, *optional*, defaults to `True`): Whether to initialize the target with Embedding weights. query_dim (`int`, *optional*, defaults to 4): The dimension of the query vector. decoder_bbox_embed_share (`bool`, *optional*, defaults to `True`):
356_7_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
The dimension of the query vector. decoder_bbox_embed_share (`bool`, *optional*, defaults to `True`): Whether to share the bbox regression head for all decoder layers. two_stage_bbox_embed_share (`bool`, *optional*, defaults to `False`): Whether to share the bbox embedding between the two-stage bbox generator and the region proposal generation. positional_embedding_temperature (`float`, *optional*, defaults to 20): The temperature for Sine Positional Embedding that is used together with vision backbone.
356_7_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
The temperature for Sine Positional Embedding that is used together with the vision backbone. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. Examples: ```python >>> from transformers import GroundingDinoConfig, GroundingDinoModel
356_7_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
.md
>>> # Initializing a Grounding DINO IDEA-Research/grounding-dino-tiny style configuration >>> configuration = GroundingDinoConfig() >>> # Initializing a model (with random weights) from the IDEA-Research/grounding-dino-tiny style configuration >>> model = GroundingDinoModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
356_7_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinomodel
.md
The bare Grounding DINO Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
356_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinomodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GroundingDinoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
356_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinomodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
356_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoforobjectdetection
.md
Grounding DINO Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
356_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoforobjectdetection
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GroundingDinoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
356_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoforobjectdetection
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
356_9_2
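As a small illustration of the raw head outputs, a sketch with a dummy image; the shapes follow the config defaults described above (900 queries, `max_text_len` of 256):

```python
import numpy as np
import torch
from transformers import AutoProcessor, GroundingDinoForObjectDetection

processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
model = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")

image = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy black image
inputs = processor(images=image, text="a cat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)      # (1, num_queries, max_text_len) -> (1, 900, 256)
print(outputs.pred_boxes.shape)  # (1, num_queries, 4), normalized (center_x, center_y, width, height)
```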
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
357_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
357_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmt
.md
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=marian"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-marian-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/opus-mt-zh-en"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
357_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#overview
.md
A framework for translation models, using the same models as BART. Translations should be similar, but not identical to output in the test set linked to in each model card. This model was contributed by [sshleifer](https://huggingface.co/sshleifer).
357_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#implementation-notes
.md
- Each model is about 298 MB on disk; there are more than 1,000 models. - The list of supported language pairs can be found [here](https://huggingface.co/Helsinki-NLP). - Models were originally trained by [Jörg Tiedemann](https://researchportal.helsinki.fi/en/persons/j%C3%B6rg-tiedemann) using the [Marian](https://marian-nmt.github.io/) C++ library, which supports fast training and translation.
357_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#implementation-notes
.md
- All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented in a model card. - The 80 opus models that require BPE preprocessing are not supported. - The modeling code is the same as [`BartForConditionalGeneration`] with a few minor modifications: - static (sinusoid) positional embeddings (`MarianConfig.static_position_embeddings=True`) - no layernorm_embedding (`MarianConfig.normalize_embedding=False`)
357_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#implementation-notes
.md
- no layernorm_embedding (`MarianConfig.normalize_embedding=False`) - the model starts generating with `pad_token_id` (which has 0 as a token_embedding) as the prefix (Bart uses `</s>`). - Code to bulk convert models can be found in `convert_marian_to_pytorch.py`.
357_3_2
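A quick sketch verifying these implementation notes on an actual checkpoint; the legacy keys are read from the checkpoint's config.json, so `getattr` is used defensively and the expected values follow the notes above:

```python
from transformers import MarianConfig

config = MarianConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de")
print(getattr(config, "static_position_embeddings", None))   # True: sinusoidal positions
print(getattr(config, "normalize_embedding", None))          # False: no layernorm_embedding
# generation starts from pad_token_id rather than a BOS token
print(config.decoder_start_token_id == config.pad_token_id)  # True
```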
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#naming
.md
- All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}` - The language codes used to name models are inconsistent. Two-digit codes can usually be found [here](https://developers.google.com/admin-sdk/directory/v1/languages), three-digit codes require googling "language code {code}". - Codes formatted like `es_AR` are usually `code_{region}`. That one is Spanish from Argentina.
357_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#naming
.md
code {code}". - Codes formatted like `es_AR` are usually `code_{region}`. That one is Spanish from Argentina. - The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages, the second group use a combination of ISO-639-5 codes and ISO-639-2 codes.
357_4_1
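A minimal sketch of the naming convention in practice, assuming the `en`-`de` pair (which exists, as do most major pairs):

```python
from transformers import MarianMTModel, MarianTokenizer

src, tgt = "en", "de"  # two-letter ISO codes for a bilingual model
model_name = f"Helsinki-NLP/opus-mt-{src}-{tgt}"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["I am a small frog."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```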
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#examples
.md
- Since Marian models are smaller than many other translation models available in the library, they can be useful for fine-tuning experiments and integration tests. - [Fine-tune on GPU](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/train_distil_marian_enro.sh)
357_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#multilingual-models
.md
- All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}`: - If a model can output multiple languages, you should specify a language code by prepending the desired output language to the `src_text`. - You can see a model's supported language codes in its model card, under target constituents, like in [opus-mt-en-roa](https://huggingface.co/Helsinki-NLP/opus-mt-en-roa). - Note that if a model is only multilingual on the source side, like `Helsinki-NLP/opus-mt-roa-en`, no language
357_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#multilingual-models
.md
- Note that if a model is only multilingual on the source side, like `Helsinki-NLP/opus-mt-roa-en`, no language codes are required. New multi-lingual models from the [Tatoeba-Challenge repo](https://github.com/Helsinki-NLP/Tatoeba-Challenge) require 3 character language codes: ```python >>> from transformers import MarianMTModel, MarianTokenizer
357_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#multilingual-models
.md
>>> src_text = [ ... ">>fra<< this is a sentence in english that we want to translate to french", ... ">>por<< This should go to portuguese", ... ">>spa<< And this to Spanish", ... ]
357_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#multilingual-models
.md
>>> model_name = "Helsinki-NLP/opus-mt-en-roa" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> print(tokenizer.supported_language_codes) ['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<']
357_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#multilingual-models
.md
>>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) >>> [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español'] ``` Here is the code to see all available pretrained models on the hub: ```python from huggingface_hub import list_models
357_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#multilingual-models
.md
model_list = list_models() org = "Helsinki-NLP" model_ids = [x.id for x in model_list if x.id.startswith(org)] suffix = [x.split("/")[1] for x in model_ids] old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()] ```
357_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#old-style-multi-lingual-models
.md
These are the old style multi-lingual models ported from the OPUS-MT-Train repo, along with the members of each language group: ```python no-style ['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU', 'Helsinki-NLP/opus-mt-ROMANCE-en', 'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA', 'Helsinki-NLP/opus-mt-de-ZH', 'Helsinki-NLP/opus-mt-en-CELTIC', 'Helsinki-NLP/opus-mt-en-ROMANCE', 'Helsinki-NLP/opus-mt-es-NORWAY', 'Helsinki-NLP/opus-mt-fi-NORWAY', 'Helsinki-NLP/opus-mt-fi-ZH',
357_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#old-style-multi-lingual-models
.md
'Helsinki-NLP/opus-mt-es-NORWAY', 'Helsinki-NLP/opus-mt-fi-NORWAY', 'Helsinki-NLP/opus-mt-fi-ZH', 'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI', 'Helsinki-NLP/opus-mt-sv-NORWAY', 'Helsinki-NLP/opus-mt-sv-ZH'] GROUP_MEMBERS = { 'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
357_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#old-style-multi-lingual-models
.md
'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'], 'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
357_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#old-style-multi-lingual-models
.md
'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'], 'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'], 'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv'] } ``` Example of translating English to many Romance languages, using old-style two-character language codes: ```python >>> from transformers import MarianMTModel, MarianTokenizer
357_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#old-style-multi-lingual-models
.md
>>> src_text = [ ... ">>fr<< this is a sentence in english that we want to translate to french", ... ">>pt<< This should go to portuguese", ... ">>es<< And this to Spanish", ... ] >>> model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" >>> tokenizer = MarianTokenizer.from_pretrained(model_name)
357_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#old-style-multi-lingual-models
.md
>>> model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) >>> tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español'] ```
357_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#resources
.md
- [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) - [Causal language modeling task guide](../tasks/language_modeling)
357_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
This is the configuration class to store the configuration of a [`MarianModel`]. It is used to instantiate a Marian model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Marian [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
357_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 58101): Vocabulary size of the Marian model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`MarianModel`] or [`TFMarianModel`]. d_model (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer.
357_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
d_model (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer. encoder_layers (`int`, *optional*, defaults to 12): Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 12): Number of decoder layers. encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (`int`, *optional*, defaults to 16):
357_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
decoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in decoder. encoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in encoder. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
357_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
357_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (`float`, *optional*, defaults to 0.02):
357_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
just in case (e.g., 512 or 1024 or 2048). init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.0):
357_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. scale_embedding (`bool`, *optional*, defaults to `False`): Scale embeddings by dividing by sqrt(d_model). use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). forced_eos_token_id (`int`, *optional*, defaults to 0):
357_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
forced_eos_token_id (`int`, *optional*, defaults to 0): The id of the token to force as the last generated token when `max_length` is reached. Usually set to `eos_token_id`. Examples: ```python >>> from transformers import MarianModel, MarianConfig
357_9_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
.md
>>> # Initializing a Marian Helsinki-NLP/opus-mt-en-de style configuration >>> configuration = MarianConfig() >>> # Initializing a model from the Helsinki-NLP/opus-mt-en-de style configuration >>> model = MarianModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
357_9_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
Construct a Marian tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: source_spm (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that contains the vocabulary for the source language. target_spm (`str`):
357_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
contains the vocabulary for the source language. target_spm (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that contains the vocabulary for the target language. source_lang (`str`, *optional*): A string representing the source language. target_lang (`str`, *optional*): A string representing the target language. unk_token (`str`, *optional*, defaults to `"<unk>"`):
357_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
A string representing the target language. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. model_max_length (`int`, *optional*, defaults to 512):
357_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
model_max_length (`int`, *optional*, defaults to 512): The maximum sentence length the model accepts. additional_special_tokens (`List[str]`, *optional*, defaults to `["<eop>", "<eod>"]`): Additional special tokens used by the tokenizer. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set:
357_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
357_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Examples: ```python >>> from transformers import MarianForCausalLM, MarianTokenizer
357_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
>>> model = MarianForCausalLM.from_pretrained("Helsinki-NLP/opus-mt-en-de") >>> tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") >>> src_texts = ["I am a small frog.", "Tom asked his teacher for advice."] >>> tgt_texts = ["Ich bin ein kleiner Frosch.", "Tom bat seinen Lehrer um Rat."] # optional >>> inputs = tokenizer(src_texts, text_target=tgt_texts, return_tensors="pt", padding=True)
357_10_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
.md
>>> outputs = model(**inputs) # should work ``` Methods: build_inputs_with_special_tokens <frameworkcontent> <pt>
357_10_7
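Relating to the `sp_model_kwargs` option described above, a hedged sketch of enabling subword regularization; the sampled segmentation varies run to run, so this is intended for training-time augmentation rather than inference:

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained(
    "Helsinki-NLP/opus-mt-en-de",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# The same sentence may now tokenize differently on each call.
print(tokenizer.tokenize("I am a small frog."))
```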
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmodel
.md
The bare Marian Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
357_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MarianConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
357_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
357_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmtmodel
.md
The Marian Model with a language modeling head. Can be used for summarization. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
357_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmtmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MarianConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
357_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmtmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
357_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianforcausallm
.md
No docstring available for MarianForCausalLM Methods: forward </pt> <tf>
357_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#tfmarianmodel
.md
No docstring available for TFMarianModel Methods: call
357_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#tfmarianmtmodel
.md
No docstring available for TFMarianMTModel Methods: call </tf> <jax>
357_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#flaxmarianmodel
.md
No docstring available for FlaxMarianModel Methods: __call__
357_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#flaxmarianmtmodel
.md
No docstring available for FlaxMarianMTModel Methods: __call__ </jax> </frameworkcontent>
357_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
358_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
358_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#overview
.md
The [`VisionTextDualEncoderModel`] can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (*e.g.* [RoBERTa](roberta), [BERT](bert)). Two projection layers are added on top of both the vision and text encoder to project the output embeddings
358_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#overview
.md
to a shared latent space. The projection layers are randomly initialized so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text training and then can be used for zero-shot vision tasks such as image classification or retrieval. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how
358_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#overview
.md
In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
358_1_2
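A minimal sketch of pairing pretrained encoders, assuming ViT and BERT checkpoints; `from_vision_text_pretrained` loads both backbones while the projection layers stay randomly initialized, so contrastive fine-tuning is still required:

```python
from transformers import VisionTextDualEncoderModel

model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)
model.save_pretrained("vit-bert")  # can later be reloaded with from_pretrained
```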
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderconfig
.md
[`VisionTextDualEncoderConfig`] is the configuration class to store the configuration of a [`VisionTextDualEncoderModel`]. It is used to instantiate a [`VisionTextDualEncoderModel`] according to the specified arguments, defining the text model and vision model configs. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: projection_dim (`int`, *optional*, defaults to 512):
358_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderconfig
.md
documentation from [`PretrainedConfig`] for more information. Args: projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and vision projection layers. logit_scale_init_value (`float`, *optional*, defaults to 2.6592): The initial value of the *logit_scale* parameter. Default is used as per the original CLIP implementation. kwargs (*optional*): Dictionary of keyword arguments. Examples: ```python
358_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderconfig
.md
kwargs (*optional*): Dictionary of keyword arguments. Examples: ```python >>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig, VisionTextDualEncoderModel
358_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderconfig
.md
>>> # Initializing a BERT and ViT configuration >>> config_vision = ViTConfig() >>> config_text = BertConfig() >>> config = VisionTextDualEncoderConfig.from_vision_text_configs(config_vision, config_text, projection_dim=512) >>> # Initializing a BERT and ViT model (with random weights) >>> model = VisionTextDualEncoderModel(config=config) >>> # Accessing the model configuration >>> config_vision = model.config.vision_config >>> config_text = model.config.text_config
358_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderconfig
.md
>>> # Saving the model, including its configuration >>> model.save_pretrained("vit-bert") >>> # loading model and config from pretrained folder >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert") >>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert", config=vision_text_config) ```
358_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderprocessor
.md
Constructs a VisionTextDualEncoder processor which wraps an image processor and a tokenizer into a single processor. [`VisionTextDualEncoderProcessor`] offers all the functionalities of [`AutoImageProcessor`] and [`AutoTokenizer`]. See the docstrings of [`~VisionTextDualEncoderProcessor.__call__`] and [`~VisionTextDualEncoderProcessor.decode`] for more information. Args: image_processor ([`AutoImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`PreTrainedTokenizer`], *optional*):
358_3_0
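A minimal construction sketch, assuming ViT and BERT checkpoints for the two components and a dummy image for the joint call:

```python
import numpy as np
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionTextDualEncoderProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = VisionTextDualEncoderProcessor(image_processor=image_processor, tokenizer=tokenizer)

image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))  # dummy image
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
print(inputs.keys())  # input_ids, attention_mask (and friends), pixel_values
```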