source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformermodel
.md
The bare EfficientFormer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`EfficientFormerConfig`]): Model configuration class with all the parameters of the model.
398_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformermodel
.md
Parameters: config ([`EfficientFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
398_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerforimageclassification
.md
EfficientFormer Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`EfficientFormerConfig`]): Model configuration class with all the parameters of the model.
398_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerforimageclassification
.md
Parameters: config ([`EfficientFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
398_7_1
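As a quick illustration of the classification head described above, here is a minimal inference sketch. The checkpoint name `snap-research/efficientformer-l1-300` and the COCO example image URL are assumptions for illustration, not taken from this page:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, EfficientFormerForImageClassification

# Assumed checkpoint; any EfficientFormer image-classification checkpoint should work the same way
checkpoint = "snap-research/efficientformer-l1-300"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = EfficientFormerForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit to a human-readable label
print(model.config.id2label[logits.argmax(-1).item()])
```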
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerforimageclassificationwithteacher
.md
EfficientFormer Model transformer with image classification heads on top (a linear layer on top of the final hidden state of the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. <Tip warning={true}> This model only supports inference. Fine-tuning with distillation (i.e. with a teacher) is not yet supported. </Tip> This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) subclass. Use it as a
398_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerforimageclassificationwithteacher
.md
</Tip> This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`EfficientFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
398_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#efficientformerforimageclassificationwithteacher
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
398_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#tfefficientformermodel
.md
No docstring available for TFEfficientFormerModel Methods: call
398_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#tfefficientformerforimageclassification
.md
No docstring available for TFEfficientFormerForImageClassification Methods: call
398_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/efficientformer.md
https://huggingface.co/docs/transformers/en/model_doc/efficientformer/#tfefficientformerforimageclassificationwithteacher
.md
No docstring available for TFEfficientFormerForImageClassificationWithTeacher Methods: call </tf> </frameworkcontent>
398_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
399_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
399_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/#overview
.md
MADLAD-400 models were released in the paper [MADLAD-400: A Multilingual And Document-Level Large Audited Dataset](https://arxiv.org/abs/2309.04662). The abstract from the paper is the following: *We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing
399_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/#overview
.md
the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot
399_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/#overview
.md
translation. We make the baseline models available to the research community.* This model was added by [Juarez Bochi](https://huggingface.co/jbochi). The original checkpoints can be found [here](https://github.com/google-research/google-research/tree/master/madlad_400). This is a machine translation model that supports many low-resource languages, and that is competitive with models that are significantly larger. One can directly use MADLAD-400 weights without finetuning the model: ```python
399_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/#overview
.md
One can directly use MADLAD-400 weights without finetuning the model:
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
399_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/#overview
.md
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/madlad400-3b-mt")
>>> tokenizer = AutoTokenizer.from_pretrained("google/madlad400-3b-mt")
399_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/#overview
.md
>>> inputs = tokenizer("<2pt> I love pizza!", return_tensors="pt")
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Eu amo pizza!']
```
Google has released the following variants: - [google/madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt) - [google/madlad400-7b-mt](https://huggingface.co/google/madlad400-7b-mt) - [google/madlad400-7b-mt-bt](https://huggingface.co/google/madlad400-7b-mt-bt)
399_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/madlad-400.md
https://huggingface.co/docs/transformers/en/model_doc/madlad-400/#overview
.md
- [google/madlad400-7b-mt-bt](https://huggingface.co/google/madlad400-7b-mt-bt) - [google/madlad400-10b-mt](https://huggingface.co/google/madlad400-10b-mt) The original checkpoints can be found [here](https://github.com/google-research/google-research/tree/master/madlad_400). <Tip> Refer to [T5's documentation page](t5) for all API references, code examples, and notebooks. For more details regarding training and evaluation of MADLAD-400, refer to the model card. </Tip>
399_1_6
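The `<2pt>` prefix in the snippet above selects the target language; other languages follow the same `<2xx>` convention. A minimal sketch, assuming the German token `<2de>` is present in the tokenizer's vocabulary:

```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/madlad400-3b-mt")
>>> tokenizer = AutoTokenizer.from_pretrained("google/madlad400-3b-mt")

>>> # Same call as before, only the target-language prefix changes (here: German)
>>> inputs = tokenizer("<2de> I love pizza!", return_tensors="pt")
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```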
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
400_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
400_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
The Mamba model was proposed in [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752) by Albert Gu and Tri Dao. This model is a new paradigm architecture based on `state-space-models`. You can read more about the intuition behind these [here](https://srush.github.io/annotated-s4/). The abstract from the paper is the following:
400_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
*Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We
400_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token.
400_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length,
400_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream
400_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.*
400_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
Tips: - Mamba is a new `state space model` architecture that rivals the classic Transformers. It is based on the line of progress on structured state space models, with an efficient hardware-aware design and implementation in the spirit of [FlashAttention](https://github.com/Dao-AILab/flash-attention). - Mamba stacks `mixer` layers, which are the equivalent of `Attention` layers. The core logic of `mamba` is held in the `MambaMixer` class.
400_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
- Two implementations coexist: one is optimized and uses fast CUDA kernels, while the other is naive but can run on any device! - The current implementation leverages the original CUDA kernels: the equivalent of flash attention for Mamba is hosted in the [`mamba-ssm`](https://github.com/state-spaces/mamba) and the [`causal_conv1d`](https://github.com/Dao-AILab/causal-conv1d) repositories. Make sure to install them if your hardware supports them! (A quick availability check is sketched below.)
400_1_7
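A small sketch to check whether the optional fast-path dependencies mentioned above are installed; the import names `mamba_ssm` and `causal_conv1d` are assumed to match the packages from the linked repositories:

```python
import importlib.util

# If either package is missing, the model falls back to the slower pure-PyTorch path
for package in ("mamba_ssm", "causal_conv1d"):
    available = importlib.util.find_spec(package) is not None
    print(f"{package}: {'found' if available else 'not installed'}")
```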
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#overview
.md
- Contributions to make the naive path faster are welcome 🤗 This model was contributed by [ArthurZ](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/state-spaces/mamba).
400_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#a-simple-generation-example
.md
```python
from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```
400_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#peft-finetuning
.md
The slow version is not very stable for training, and the fast one needs `float32`!
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments

model_id = "state-spaces/mamba-130m-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
400_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#peft-finetuning
.md
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3
)
lora_config = LoraConfig(
    r=8,
    target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none"
)
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
400_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#peft-finetuning
.md
    model=model,
    processing_class=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```
400_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MAMBA [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
400_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50280): Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`MambaModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the embeddings and hidden states.
400_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the embeddings and hidden states. state_size (`int`, *optional*, defaults to 16): shape of the state space latents. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the model. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 0):
400_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 0): The id of the beginning of sentence token in the vocabulary. eos_token_id (`int`, *optional*, defaults to 0): The id of the end of sentence token in the vocabulary. expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size. conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
400_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel. use_bias (`bool`, *optional*, defaults to `False`): Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block use_conv_bias (`bool`, *optional*, defaults to `True`): Whether or not to use bias in the convolution layer of the mixer block. hidden_act (`str`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder.
400_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
hidden_act (`str`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. initializer_range (`float`, *optional*, defaults to 0.1): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. residual_in_fp32 (`bool`, *optional*, defaults to `True`): Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model
400_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`): Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)` time_step_scale (`float`, *optional*, defaults to 1.0): Scale used to scale `dt_proj.bias`. time_step_min (`float`, *optional*, defaults to 0.001): Minimum `time_step` used to bound `dt_proj.bias`. time_step_max (`float`, *optional*, defaults to 0.1): Maximum `time_step` used to bound `dt_proj.bias`.
400_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
time_step_max (`float`, *optional*, defaults to 0.1): Maximum `time_step` used to bound `dt_proj.bias`. time_step_init_scheme (`str`, *optional*, defaults to `"random"`): Init scheme used for `dt_proj.weight`. Should be one of `["random","uniform"]` time_step_floor (`float`, *optional*, defaults to 0.0001): Minimum clamping value of the `dt_proj.bias` layer initialization. rescale_prenorm_residual (`bool`, *optional*, defaults to `False`): Whether or not to rescale `out_proj` weights when initializing.
400_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
Whether or not to rescale `out_proj` weights when initializing. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the cache should be used. use_mambapy (`bool`, *optional*, defaults to `False`): Determines the fallback strategy during training if the CUDA-based official implementation of Mamba is not available. If `True`, the mamba.py implementation is used. If `False`, the naive and slower implementation is used. Consider switching to the naive version if memory is limited. Example:
400_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
Example:
```python
>>> from transformers import MambaConfig, MambaModel
400_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambaconfig
.md
>>> # Initializing a Mamba configuration
>>> configuration = MambaConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = MambaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
400_4_10
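The documented arguments above can also be overridden at construction time; a small sketch with illustrative values (not a recommended configuration):

```python
>>> from transformers import MambaConfig, MambaModel

>>> # Override a few of the documented defaults (illustrative values only)
>>> configuration = MambaConfig(hidden_size=512, num_hidden_layers=16, state_size=8)
>>> model = MambaModel(configuration)
>>> print(model.config.hidden_size, model.config.num_hidden_layers)
```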
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambamodel
.md
The bare MAMBA Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
400_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambamodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MambaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
400_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambamodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
400_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambalmheadmodel
.md
The MAMBA Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
400_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambalmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MambaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
400_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba.md
https://huggingface.co/docs/transformers/en/model_doc/mamba/#mambalmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
400_6_2
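A minimal sketch of calling the language-modeling head with labels to obtain a causal LM loss, reusing the `state-spaces/mamba-130m-hf` checkpoint from the generation example above:

```python
from transformers import AutoTokenizer, MambaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")

input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
# Passing the input ids as labels makes the model return a next-token prediction loss
outputs = model(input_ids, labels=input_ids)
print(outputs.loss, outputs.logits.shape)
```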
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
401_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
401_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#overview
.md
The CvT model was proposed in [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the [Vision Transformer (ViT)](vit) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. The abstract from the paper is the following:
401_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#overview
.md
The abstract from the paper is the following: *We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer
401_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#overview
.md
block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves
401_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#overview
.md
state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding,
401_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#overview
.md
a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.* This model was contributed by [anugunj](https://huggingface.co/anugunj). The original code can be found [here](https://github.com/microsoft/CvT).
401_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#usage-tips
.md
- CvT models are regular Vision Transformers, but trained with convolutions. They outperform the [original model (ViT)](vit) when fine-tuned on ImageNet-1K and CIFAR-100. - You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace [`ViTFeatureExtractor`] by [`AutoImageProcessor`] and [`ViTForImageClassification`] by [`CvtForImageClassification`]).
401_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#usage-tips
.md
- The available checkpoints are either (1) pre-trained on [ImageNet-22k](http://www.image-net.org/) (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
401_2_1
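Following the tips above (using [`AutoImageProcessor`] and [`CvtForImageClassification`] in place of the ViT classes), here is a minimal inference sketch; the COCO example image URL is an assumption:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, CvtForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class with the highest score
print(model.config.id2label[logits.argmax(-1).item()])
```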
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT. <PipelineTag pipeline="image-classification"/> - [`CvtForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
401_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#resources
.md
- See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
401_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
This is the configuration class to store the configuration of a [`CvtModel`]. It is used to instantiate a CvT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CvT [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
401_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_channels (`int`, *optional*, defaults to 3): The number of input channels. patch_sizes (`List[int]`, *optional*, defaults to `[7, 3, 3]`): The kernel size of each encoder's patch embedding. patch_stride (`List[int]`, *optional*, defaults to `[4, 2, 2]`): The stride size of each encoder's patch embedding.
401_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
patch_stride (`List[int]`, *optional*, defaults to `[4, 2, 2]`): The stride size of each encoder's patch embedding. patch_padding (`List[int]`, *optional*, defaults to `[2, 1, 1]`): The padding size of each encoder's patch embedding. embed_dim (`List[int]`, *optional*, defaults to `[64, 192, 384]`): Dimension of each of the encoder blocks. num_heads (`List[int]`, *optional*, defaults to `[1, 3, 6]`): Number of attention heads for each attention layer in each block of the Transformer encoder.
401_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
Number of attention heads for each attention layer in each block of the Transformer encoder. depth (`List[int]`, *optional*, defaults to `[1, 2, 10]`): The number of layers in each encoder block. mlp_ratios (`List[float]`, *optional*, defaults to `[4.0, 4.0, 4.0, 4.0]`): Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the encoder blocks. attention_drop_rate (`List[float]`, *optional*, defaults to `[0.0, 0.0, 0.0]`):
401_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
encoder blocks. attention_drop_rate (`List[float]`, *optional*, defaults to `[0.0, 0.0, 0.0]`): The dropout ratio for the attention probabilities. drop_rate (`List[float]`, *optional*, defaults to `[0.0, 0.0, 0.0]`): The dropout ratio for the patch embeddings probabilities. drop_path_rate (`List[float]`, *optional*, defaults to `[0.0, 0.0, 0.1]`): The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
401_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder. qkv_bias (`List[bool]`, *optional*, defaults to `[True, True, True]`): The bias bool for query, key and value in attentions cls_token (`List[bool]`, *optional*, defaults to `[False, False, True]`): Whether or not to add a classification token to the output of each of the last 3 stages. qkv_projection_method (`List[string]`, *optional*, defaults to `["dw_bn", "dw_bn", "dw_bn"]`):
401_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
qkv_projection_method (`List[string]`, *optional*, defaults to `["dw_bn", "dw_bn", "dw_bn"]`): The projection method for query, key and value. Default is depth-wise convolutions with batch norm. For Linear projection use "avg". kernel_qkv (`List[int]`, *optional*, defaults to `[3, 3, 3]`): The kernel size for query, key and value in attention layer padding_kv (`List[int]`, *optional*, defaults to `[1, 1, 1]`): The padding size for key and value in attention layer
401_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
padding_kv (`List[int]`, *optional*, defaults to `[1, 1, 1]`): The padding size for key and value in attention layer stride_kv (`List[int]`, *optional*, defaults to `[2, 2, 2]`): The stride size for key and value in attention layer padding_q (`List[int]`, *optional*, defaults to `[1, 1, 1]`): The padding size for query in attention layer stride_q (`List[int]`, *optional*, defaults to `[1, 1, 1]`): The stride size for query in attention layer initializer_range (`float`, *optional*, defaults to 0.02):
401_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
The stride size for query in attention layer initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-6): The epsilon used by the layer normalization layers. Example:
```python
>>> from transformers import CvtConfig, CvtModel
401_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtconfig
.md
>>> # Initializing a Cvt msft/cvt style configuration
>>> configuration = CvtConfig()

>>> # Initializing a model (with random weights) from the msft/cvt style configuration
>>> model = CvtModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
<frameworkcontent> <pt>
401_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtmodel
.md
The bare Cvt Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`CvtConfig`]): Model configuration class with all the parameters of the model.
401_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtmodel
.md
behavior. Parameters: config ([`CvtConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
401_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtforimageclassification
.md
Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`CvtConfig`]): Model configuration class with all the parameters of the model.
401_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtforimageclassification
.md
behavior. Parameters: config ([`CvtConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
401_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#tfcvtmodel
.md
No docstring available for TFCvtModel Methods: call
401_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#tfcvtforimageclassification
.md
No docstring available for TFCvtForImageClassification Methods: call </tf> </frameworkcontent>
401_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
402_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
402_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
.md
The DINOv2 model was proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by
402_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
.md
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
402_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
.md
DINOv2 is an upgrade of [DINO](https://arxiv.org/abs/2104.14294), a self-supervised method applied on [Vision Transformers](vit). This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. The abstract from the paper is the following:
402_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
.md
*The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated
402_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
.md
that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as
402_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
.md
data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.*
402_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
.md
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/dinov2).
402_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#usage-tips
.md
The model can be traced using `torch.jit.trace`, which leverages JIT compilation to optimize the model and make it faster to run. Note that this still produces some mismatched elements; the difference between the original model and the traced model is on the order of 1e-4.
```python
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
402_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#usage-tips
.md
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs[0]

# We have to force return_dict=False for tracing
model.config.return_dict = False
402_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#usage-tips
.md
# We have to force return_dict=False for tracing
model.config.return_dict = False

with torch.no_grad():
    traced_model = torch.jit.trace(model, [inputs.pixel_values])
    traced_outputs = traced_model(inputs.pixel_values)

print((last_hidden_states - traced_outputs[0]).abs().max())
```
402_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DINOv2. - Demo notebooks for DINOv2 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DINOv2). 🌎 <PipelineTag pipeline="image-classification"/>
402_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#resources
.md
<PipelineTag pipeline="image-classification"/> - [`Dinov2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification)
402_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#resources
.md
- See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
402_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
This is the configuration class to store the configuration of a [`Dinov2Model`]. It is used to instantiate a Dinov2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Dinov2 [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) architecture.
402_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
[facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder.
402_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. mlp_ratio (`int`, *optional*, defaults to 4): Ratio of the hidden size of the MLPs relative to the `hidden_size`. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
402_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
402_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image.
402_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 14): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. layerscale_value (`float`, *optional*, defaults to 1.0): Initial value to use for layer scale. drop_path_rate (`float`, *optional*, defaults to 0.0):
402_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
Initial value to use for layer scale. drop_path_rate (`float`, *optional*, defaults to 0.0): Stochastic depth rate per sample (when applied in the main path of residual layers). use_swiglu_ffn (`bool`, *optional*, defaults to `False`): Whether to use the SwiGLU feedforward neural network. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
402_4_6
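A small configuration sketch mirroring the config examples earlier in this section; the argument values are illustrative defaults drawn from the list above:

```python
>>> from transformers import Dinov2Config, Dinov2Model

>>> # Initializing a DINOv2 configuration (illustrative arguments from the list above)
>>> configuration = Dinov2Config(hidden_size=768, patch_size=14)

>>> # Initializing a model (with random weights) from the configuration
>>> model = Dinov2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```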