# PoolFormer

## Overview

The PoolFormer model was proposed in [MetaFormer Is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Sea AI Labs.

The abstract from the paper is the following:
*Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.*

The figure below illustrates the architecture of PoolFormer, taken from the [original paper](https://arxiv.org/abs/2111.11418).

<img width="600" src="https://user-images.githubusercontent.com/15921929/142746124-1ab7635d-2536-4a0e-ad43-b4fe2c5a525d.png"/>

This model was contributed by [heytanay](https://huggingface.co/heytanay). The original code can be found [here](https://github.com/sail-sg/poolformer).
## Usage tips
- PoolFormer has a hierarchical architecture, where a simple average pooling layer is present instead of attention. All checkpoints of the model can be found on the [hub](https://huggingface.co/models?other=poolformer).
- One can use [`PoolFormerImageProcessor`] to prepare images for the model.
- Like most models, PoolFormer comes in different sizes, the details of which can be found in the table below.

| **Model variant** | **Depths**    | **Hidden sizes**    | **Params (M)** | **ImageNet-1k Top 1** |
| :---------------: | :-----------: | :-----------------: | :------------: | :-------------------: |
| s12               | [2, 2, 6, 2]  | [64, 128, 320, 512] | 12             | 77.2                  |
| s24               | [4, 4, 12, 4] | [64, 128, 320, 512] | 21             | 80.3                  |
| s36               | [6, 6, 18, 6] | [64, 128, 320, 512] | 31             | 81.4                  |
| m36               | [6, 6, 18, 6] | [96, 192, 384, 768] | 56             | 82.1                  |
| m48               | [8, 8, 24, 8] | [96, 192, 384, 768] | 73             | 82.5                  |
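As a quick reference, here is a minimal classification sketch using the s12 variant; it assumes the [sail/poolformer_s12](https://huggingface.co/sail/poolformer_s12) checkpoint referenced in the configuration docs below, and the COCO image URL is just an illustrative choice:

```python
from transformers import PoolFormerImageProcessor, PoolFormerForImageClassification
from PIL import Image
import requests

# Load an example image (COCO val2017; the URL is an assumption for illustration)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = PoolFormerImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# The checkpoint classifies into the 1000 ImageNet-1k classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```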
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PoolFormer.

<PipelineTag pipeline="image-classification"/>

- [`PoolFormerForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## PoolFormerConfig
This is the configuration class to store the configuration of [`PoolFormerModel`]. It is used to instantiate a PoolFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the PoolFormer [sail/poolformer_s12](https://huggingface.co/sail/poolformer_s12) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    num_channels (`int`, *optional*, defaults to 3):
        The number of channels in the input image.
    patch_size (`int`, *optional*, defaults to 16):
        The size of the input patch.
    stride (`int`, *optional*, defaults to 16):
        The stride of the input patch.
    pool_size (`int`, *optional*, defaults to 3):
        The size of the pooling window.
    mlp_ratio (`float`, *optional*, defaults to 4.0):
        The ratio of the number of channels in the output of the MLP to the number of channels in the input.
    depths (`list`, *optional*, defaults to `[2, 2, 6, 2]`):
        The depth of each encoder block.
    hidden_sizes (`list`, *optional*, defaults to `[64, 128, 320, 512]`):
        The hidden sizes of each encoder block.
    patch_sizes (`list`, *optional*, defaults to `[7, 3, 3, 3]`):
        The size of the input patch for each encoder block.
    strides (`list`, *optional*, defaults to `[4, 2, 2, 2]`):
        The stride of the input patch for each encoder block.
    padding (`list`, *optional*, defaults to `[2, 1, 1, 1]`):
        The padding of the input patch for each encoder block.
    num_encoder_blocks (`int`, *optional*, defaults to 4):
        The number of encoder blocks.
    drop_path_rate (`float`, *optional*, defaults to 0.0):
        The dropout rate for the drop path (stochastic depth) layers.
    hidden_act (`str`, *optional*, defaults to `"gelu"`):
        The activation function for the hidden layers.
    use_layer_scale (`bool`, *optional*, defaults to `True`):
        Whether to use layer scale.
    layer_scale_init_value (`float`, *optional*, defaults to 1e-05):
        The initial value for the layer scale.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The initializer range for the weights.

Example:

```python
>>> from transformers import PoolFormerConfig, PoolFormerModel

>>> # Initializing a PoolFormer sail/poolformer_s12 style configuration
>>> configuration = PoolFormerConfig()

>>> # Initializing a model (with random weights) from the sail/poolformer_s12 style configuration
>>> model = PoolFormerModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
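The `depths` and `hidden_sizes` arguments map directly onto the size table in the usage tips; for example, a hedged sketch of an s24-style configuration built from those table values:

```python
from transformers import PoolFormerConfig, PoolFormerModel

# s24-style values taken from the size table in the usage tips above
config = PoolFormerConfig(depths=[4, 4, 12, 4], hidden_sizes=[64, 128, 320, 512])
model = PoolFormerModel(config)  # randomly initialized, not pretrained
```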
## PoolFormerFeatureExtractor

No docstring available for PoolFormerFeatureExtractor.

Methods: __call__
## PoolFormerImageProcessor
Constructs a PoolFormer image processor.

Args:
    do_resize (`bool`, *optional*, defaults to `True`):
        Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
        `do_resize` in the `preprocess` method.
    size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`):
        Size of the image after resizing. Can be overridden by `size` in the `preprocess` method.
        If crop_pct is unset:
        - size is `{"height": h, "width": w}`: the image is resized to `(h, w)`.
        - size is `{"shortest_edge": s}`: the shortest edge of the image is resized to `s` whilst maintaining the
          aspect ratio.
        If crop_pct is set:
        - size is `{"height": h, "width": w}`: the image is resized to `(int(floor(h/crop_pct)), int(floor(w/crop_pct)))`.
        - size is `{"height": c, "width": c}`: the shortest edge of the image is resized to `int(floor(c/crop_pct))`
          whilst maintaining the aspect ratio.
        - size is `{"shortest_edge": c}`: the shortest edge of the image is resized to `int(floor(c/crop_pct))`
          whilst maintaining the aspect ratio.
    crop_pct (`float`, *optional*, defaults to 0.9):
        Percentage of the image to crop from the center. Can be overridden by `crop_pct` in the `preprocess` method.
    resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
        Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
    do_center_crop (`bool`, *optional*, defaults to `True`):
        Whether to center crop the image. If the input size is smaller than `crop_size` along any edge, the image
        is padded with 0's and then center cropped. Can be overridden by `do_center_crop` in the `preprocess` method.
    crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`):
        Size of the image after applying center crop. Only has an effect if `do_center_crop` is set to `True`. Can
        be overridden by the `crop_size` parameter in the `preprocess` method.
    rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
        Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
        `preprocess` method.
    do_rescale (`bool`, *optional*, defaults to `True`):
        Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
        parameter in the `preprocess` method.
    do_normalize (`bool`, *optional*, defaults to `True`):
        Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
        `preprocess` method.
    image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
        Mean to use if normalizing the image. This is a float or list of floats the length of the number of
        channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
    image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
        Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
        number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.

Methods: preprocess
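To make the resize-then-crop behaviour concrete, here is a minimal sketch with the defaults (`size={"shortest_edge": 224}`, `crop_pct=0.9`); the random input image is just a stand-in for real data:

```python
import numpy as np
from PIL import Image
from transformers import PoolFormerImageProcessor

# A random RGB image stands in for a real photo
image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

processor = PoolFormerImageProcessor(size={"shortest_edge": 224}, crop_pct=0.9)
inputs = processor(images=image, return_tensors="pt")

# shortest edge resized to floor(224 / 0.9) = 248, then center-cropped to 224x224
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```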
## PoolFormerModel
The bare PoolFormer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`PoolFormerConfig`]):
        Model configuration class with all the parameters of the model. Initializing with a config file does not
        load the weights associated with the model, only the configuration. Check out the
        [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## PoolFormerForImageClassification
PoolFormer Model transformer with an image classification head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`PoolFormerConfig`]):
        Model configuration class with all the parameters of the model. Initializing with a config file does not
        load the weights associated with the model, only the configuration. Check out the
        [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
# Chameleon

## Overview
The Chameleon model was proposed in [Chameleon: Mixed-Modal Early-Fusion Foundation Models](https://arxiv.org/abs/2405.09818v1) by the Meta AI Chameleon Team. Chameleon is a vision-language model that uses vector quantization to tokenize images, which enables the model to generate multimodal output. The model takes images and text as input, including an interleaved format, and generates textual responses. The image generation module has not been released yet.

The abstract from the paper is the following:

*We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range of tasks, including visual question answering, image captioning, text generation, image generation, and long-form mixed modal generation. Chameleon demonstrates broad and general capabilities, including state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image generation, all in a single model. It also matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal generation evaluation, where either the prompt or outputs contain mixed sequences of both images and text. Chameleon marks a significant step forward in unified modeling of full multimodal documents.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/chameleon_arch.png" alt="drawing" width="600"/>

<small> Chameleon incorporates a vector quantizer module to transform images into discrete tokens, which also enables image generation using an autoregressive transformer. Taken from the <a href="https://arxiv.org/abs/2405.09818v1">original paper.</a> </small>

This model was contributed by [joaogante](https://huggingface.co/joaogante) and [RaushanTurganbay](https://huggingface.co/RaushanTurganbay). The original code can be found [here](https://github.com/facebookresearch/chameleon).
## Usage tips
- We advise users to use `padding_side="left"` when computing batched generation, as it leads to more accurate results. Simply make sure to set `processor.tokenizer.padding_side = "left"` before generating; see the sketch after this list.
- Note that Chameleon was tuned for safety alignment. If the model is refusing to answer, consider asking a more concrete question instead of an open one.
- Chameleon generates in chat format, which means the generated text will always be the "assistant's turn". You can enable text-completion generation by passing `return_for_text_completion=True` when calling the processor.

> [!NOTE]
> The Chameleon implementation in Transformers uses a special image token to indicate where to merge image embeddings. Rather than adding a new token, it reuses one of the reserved tokens: `<reserved08707>`. You have to add `<image>` to your prompt in the place where the image should be embedded for correct generation.
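A minimal sketch of the left-padding tip, using the `facebook/chameleon-7b` checkpoint from the examples below:

```python
from transformers import ChameleonProcessor

processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
# Set left padding once, before any batched generation call
processor.tokenizer.padding_side = "left"
```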
## Single image inference
Chameleon is a gated model, so make sure to request access and log in to the Hugging Face Hub using a token. Here's how to load the model and perform inference in half precision (`torch.bfloat16`):

```python
import torch
import requests
from PIL import Image
from transformers import ChameleonProcessor, ChameleonForConditionalGeneration

processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
model = ChameleonForConditionalGeneration.from_pretrained("facebook/chameleon-7b", torch_dtype=torch.bfloat16, device_map="cuda")

# prepare image and text prompt
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "What do you see in this image?<image>"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, dtype=torch.bfloat16)

# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```
## Multi image inference
Chameleon can perform inference with multiple images as input, where the images belong either to the same prompt or to different prompts (in batched inference). Here is how you can do it:

```python
import torch
import requests
from PIL import Image
from transformers import ChameleonProcessor, ChameleonForConditionalGeneration

processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
model = ChameleonForConditionalGeneration.from_pretrained("facebook/chameleon-7b", torch_dtype=torch.bfloat16, device_map="cuda")

# Get three different images
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image_stop = Image.open(requests.get(url, stream=True).raw)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_cats = Image.open(requests.get(url, stream=True).raw)

url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"
image_snowman = Image.open(requests.get(url, stream=True).raw)

# Prepare a batched prompt, where the first one is a multi-image prompt and the second is not
prompts = [
    "What do these images have in common?<image><image>",
    "<image>What is shown in this image?"
]

# We can simply feed images in the order they have to be used in the text prompt
# Each "<image>" token consumes one image, leaving the next for the subsequent "<image>" tokens
inputs = processor(images=[image_stop, image_cats, image_snowman], text=prompts, padding=True, return_tensors="pt").to(device="cuda", dtype=torch.bfloat16)

# Generate
generate_ids = model.generate(**inputs, max_new_tokens=50)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
```
## Quantization using bitsandbytes
The model can be loaded in 8-bit or 4-bit precision, greatly reducing the memory requirements while maintaining the performance of the original model. First make sure to install bitsandbytes (`pip install bitsandbytes`) and to have access to a GPU/accelerator that is supported by the library.

<Tip>

bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend).

We value your feedback to help identify bugs before the full release! Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.

</Tip>

Simply change the snippet above to:

```python
import torch
from transformers import ChameleonForConditionalGeneration, BitsAndBytesConfig

# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = ChameleonForConditionalGeneration.from_pretrained("facebook/chameleon-7b", quantization_config=quantization_config, device_map="cuda")
```
## Use Flash-Attention 2 and SDPA to further speed up generation
The model supports both Flash-Attention 2 and PyTorch's [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html), which can be enabled for optimization. SDPA is the default option when you load the model. If you want to switch to Flash Attention 2, first make sure to install flash-attn; refer to the [original repository](https://github.com/Dao-AILab/flash-attention) for installation of that package. Simply change the snippet above to:

```python
import torch
from transformers import ChameleonForConditionalGeneration

model_id = "facebook/chameleon-7b"
model = ChameleonForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    attn_implementation="flash_attention_2"
).to(0)
```
## ChameleonConfig
This is the configuration class to store the configuration of a [`ChameleonModel`]. It is used to instantiate a Chameleon model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of [meta/chameleon-7B](https://huggingface.co/meta/chameleon-7B).

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    vocab_size (`int`, *optional*, defaults to 65536):
        Vocabulary size of the Chameleon model. Defines the number of different tokens that can be represented by
        the `inputs_ids` passed when calling [`ChameleonModel`]; this includes text and image tokens.
    hidden_size (`int`, *optional*, defaults to 4096):
        Dimension of the hidden representations.
    intermediate_size (`int`, *optional*, defaults to 11008):
        Dimension of the MLP representations.
    num_hidden_layers (`int`, *optional*, defaults to 32):
        Number of hidden layers in the Transformer decoder.
    num_attention_heads (`int`, *optional*, defaults to 32):
        Number of attention heads for each attention layer in the Transformer decoder.
    num_key_value_heads (`int`, *optional*, defaults to 32):
        This is the number of key_value heads that should be used to implement Grouped Query Attention (GQA). If
        `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
        `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
        converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
        by meanpooling all the original heads within that group. For more details check out
        [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
        `num_attention_heads`.
    hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
        The non-linear activation function (function or string) in the decoder.
    max_position_embeddings (`int`, *optional*, defaults to 4096):
        The maximum sequence length that this model might ever be used with. Chameleon supports up to 4096 tokens.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    rms_norm_eps (`float`, *optional*, defaults to 1e-05):
        The epsilon used by the rms normalization layers.
    use_cache (`bool`, *optional*, defaults to `True`):
        Whether or not the model should return the last key/values attentions (not used by all models). Only
        relevant if `config.is_decoder=True`.
    pad_token_id (`int`, *optional*):
        Padding token id.
    bos_token_id (`int`, *optional*, defaults to 1):
        Beginning of stream token id.
    eos_token_id (`int`, *optional*, defaults to 2):
        End of stream token id.
    tie_word_embeddings (`bool`, *optional*, defaults to `False`):
        Whether to tie weight embeddings.
    rope_theta (`float`, *optional*, defaults to 10000.0):
        The base period of the RoPE embeddings.
    rope_scaling (`Dict`, *optional*):
        Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
        strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
        `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
        `max_position_embeddings` to the expected new maximum. See the following thread for more information on how
        these scaling strategies behave:
        https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
        experimental feature, subject to breaking API changes in future versions.
    attention_bias (`bool`, *optional*, defaults to `False`):
        Whether to use a bias in the query, key, value and output projection layers during self-attention.
    attention_dropout (`float`, *optional*, defaults to 0.0):
        The dropout ratio for the attention probabilities.
    model_parallel_size (`int`, *optional*, defaults to 1):
        Number of shards used when training the model. This will be used in qk layernorm because the original
        Chameleon inference doesn't do reduction in those layers and each rank has its own biases.
    swin_norm (`bool`, *optional*, defaults to `False`):
        Use Swin Transformer normalization.
    vq_config (`dict`, *optional*):
        ChameleonVQConfig instance containing the configuration for the VQ-VAE model.
    vocabulary_map (`dict`, *optional*):
        A dictionary containing the vocabulary map from the tokenizer. Used to obtain tokens from the image inputs.
    mlp_bias (`bool`, *optional*, defaults to `False`):
        Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers.

```python
>>> from transformers import ChameleonModel, ChameleonConfig

>>> # Initializing a chameleon chameleon-7b style configuration
>>> configuration = ChameleonConfig()

>>> # Initializing a model from the chameleon-7b style configuration
>>> model = ChameleonModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
## ChameleonVQVAEConfig
This is the configuration class to store the configuration of a [`ChameleonVQModel`]. It is used to instantiate a `ChameleonVQModel` according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to the VQModel of [meta/chameleon-7B](https://huggingface.co/meta/chameleon-7B).

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    embed_dim (`int`, *optional*, defaults to 256):
        Dimensionality of each embedding vector.
    num_embeddings (`int`, *optional*, defaults to 8192):
        Number of codebook embeddings.
    double_latent (`bool`, *optional*, defaults to `False`):
        Whether to use double z channels.
    latent_channels (`int`, *optional*, defaults to 256):
        Number of channels for the latent space.
    resolution (`int`, *optional*, defaults to 512):
        Resolution of the input images.
    in_channels (`int`, *optional*, defaults to 3):
        Number of input channels.
    base_channels (`int`, *optional*, defaults to 128):
        Base channel count.
    channel_multiplier (`List[int]`, *optional*, defaults to `[1, 1, 2, 2, 4]`):
        Channel multipliers for each resolution.
    num_res_blocks (`int`, *optional*, defaults to 2):
        Number of residual blocks.
    attn_resolutions (`List[int]`, *optional*):
        Resolutions to apply attention.
    dropout (`float`, *optional*, defaults to 0.0):
        Dropout rate.
    attn_type (`str`, *optional*, defaults to `"vanilla"`):
        Attention type used in VQ-GAN encoder. Can be "vanilla" or None.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
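For parity with the other configuration classes, here is a short hedged sketch that instantiates the VQ-VAE config with its documented defaults and inspects two fields:

```python
from transformers import ChameleonVQVAEConfig

# Default configuration mirroring the documented values above
vq_config = ChameleonVQVAEConfig()
print(vq_config.embed_dim, vq_config.num_embeddings)  # 256 8192
```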
## ChameleonProcessor
Constructs a Chameleon processor which wraps a Chameleon image processor and a Chameleon tokenizer into a single processor.

[`ChameleonProcessor`] offers all the functionalities of [`ChameleonImageProcessor`] and [`LlamaTokenizerFast`]. See the [`~ChameleonProcessor.__call__`] and [`~ChameleonProcessor.decode`] for more information.

Args:
    image_processor ([`ChameleonImageProcessor`]):
        The image processor is a required input.
    tokenizer ([`LlamaTokenizerFast`]):
        The tokenizer is a required input.
    image_seq_length (`int`, *optional*, defaults to 1024):
        Sequence length of one image embedding.
    image_token (`str`, *optional*, defaults to `"<image>"`):
        The special token used to indicate an image in the text.
## ChameleonImageProcessor
Constructs a Chameleon image processor.

Args:
    do_resize (`bool`, *optional*, defaults to `True`):
        Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
        `do_resize` in the `preprocess` method.
    size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 512}`):
        Size of the image after resizing. The shortest edge of the image is resized to `size["shortest_edge"]`,
        with the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the
        `preprocess` method.
    resample (`PILImageResampling`, *optional*, defaults to 1):
        Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
    do_center_crop (`bool`, *optional*, defaults to `True`):
        Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
        `preprocess` method.
    crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 512, "width": 512}`):
        Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
        method.
    do_rescale (`bool`, *optional*, defaults to `True`):
        Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
        the `preprocess` method.
    rescale_factor (`int` or `float`, *optional*, defaults to 0.0078):
        Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
        method.
    do_normalize (`bool`, *optional*, defaults to `True`):
        Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
    image_mean (`float` or `List[float]`, *optional*, defaults to `[1.0, 1.0, 1.0]`):
        Mean to use if normalizing the image. This is a float or list of floats the length of the number of
        channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
    image_std (`float` or `List[float]`, *optional*, defaults to `[1.0, 1.0, 1.0]`):
        Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
        number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
    do_convert_rgb (`bool`, *optional*, defaults to `True`):
        Whether to convert the image to RGB.

Methods: preprocess
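The unusual defaults (`rescale_factor` ≈ 1/127.5 and unit mean/std) combine to map 8-bit pixel values into roughly `[-1, 1]`. A quick arithmetic sanity check, assuming the standard rescale-then-normalize pipeline:

```python
# rescale: x * 0.0078 maps [0, 255] to about [0, 2]
# normalize: (x - 1.0) / 1.0 then shifts that to about [-1, 1]
for pixel in (0, 128, 255):
    rescaled = pixel * 0.0078
    normalized = (rescaled - 1.0) / 1.0
    print(pixel, round(normalized, 3))  # 0 -> -1.0, 128 -> -0.002, 255 -> 0.989
```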
## ChameleonVQVAE
The VQ-VAE model used in Chameleon for encoding/decoding images into discrete tokens. This model follows the paper ["Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors"](https://arxiv.org/abs/2203.13131) by Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`ChameleonVQVAEConfig`]):
        Model configuration class with all the parameters of the model. Initializing with a config file does not
        load the weights associated with the model, only the configuration. Check out the
        [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ChameleonModel
The bare Chameleon Model outputting raw hidden-states without any specific head on top.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`ChameleonConfig`]):
        Model configuration class with all the parameters of the model. Initializing with a config file does not
        load the weights associated with the model, only the configuration. Check out the
        [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`ChameleonDecoderLayer`].

Args:
    config: ChameleonConfig

Methods: forward
## ChameleonForConditionalGeneration
Chameleon Model with a head on top used for outputting logits for next token prediction.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`ChameleonConfig`]):
        Model configuration class with all the parameters of the model. Initializing with a config file does not
        load the weights associated with the model, only the configuration. Check out the
        [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
# YOSO

## Overview
The YOSO model was proposed in [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, and Vikas Singh. YOSO approximates standard softmax self-attention via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with a single hash.

The abstract from the paper is the following:

*Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH) decreases the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark, for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable speed-ups and memory savings, and often outperforms other efficient self-attention methods.*