/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
# Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|video_pad|><|vision_end|>What happened in the video?<|im_end|>\n<|im_start|>assistant\n' inputs = processor(text=[text_prompt], videos=[video], padding=True, return_tensors="pt") inputs = inputs.to('cuda')
323_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
.md
inputs = processor(text=[text_prompt], videos=[video], padding=True, return_tensors="pt") inputs = inputs.to('cuda') # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(output_text) ```
323_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#batch-mixed-media-inference
.md
The model can batch inputs composed of mixed samples of various types such as images, videos, and text. Here is an example. ```python image1 = Image.open("/path/to/image1.jpg") image2 = Image.open("/path/to/image2.jpg") image3 = Image.open("/path/to/image3.jpg") image4 = Image.open("/path/to/image4.jpg") image5 = Image.open("/path/to/image5.jpg") video = fetch_video({ "type": "video", "video": "/path/to/video.mp4", "fps": 1.0 })
323_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#batch-mixed-media-inference
.md
# Conversation for the first image conversation1 = [ { "role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "Describe this image."} ] } ] # Conversation with two images conversation2 = [ { "role": "user", "content": [ {"type": "image"}, {"type": "image"}, {"type": "text", "text": "What is written in the pictures?"} ] } ] # Conversation with pure text conversation3 = [ { "role": "user", "content": "who are you?" } ]
323_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#batch-mixed-media-inference
.md
# Conversation with pure text conversation3 = [ { "role": "user", "content": "who are you?" } ] # Conversation with mixed media conversation4 = [ { "role": "user", "content": [ {"type": "image"}, {"type": "image"}, {"type": "video"}, {"type": "text", "text": "What are the common elements in these media?"}, ], } ]
323_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#batch-mixed-media-inference
.md
conversations = [conversation1, conversation2, conversation3, conversation4] # Preparation for batch inference texts = [processor.apply_chat_template(msg, add_generation_prompt=True) for msg in conversations] inputs = processor( text=texts, images=[image1, image2, image3, image4, image5], videos=[video], padding=True, return_tensors="pt", ) inputs = inputs.to('cuda')
323_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#batch-mixed-media-inference
.md
# Batch Inference output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(output_text) ```
323_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#image-resolution-trade-off
.md
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs. ```python min_pixels = 224*224 max_pixels = 2048*2048 processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) ```
323_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#image-resolution-trade-off
.md
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) ``` In case of limited GPU RAM, one can reduce the resolution as follows: ```python min_pixels = 256*28*28 max_pixels = 1024*28*28 processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) ```
323_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#image-resolution-trade-off
.md
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) ``` This ensures each image gets encoded using between 256 and 1024 tokens. The 28 comes from the fact that the model uses a patch size of 14 and a temporal patch size of 2 (14 x 2 = 28).
323_4_2
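To make the pixel-budget arithmetic above concrete, here is a minimal sketch (the helper name is hypothetical, and the real processor's resizing logic may differ in detail) of how a pixel count maps to a visual token count when each token covers a 28 x 28 pixel area:

```python
# Hypothetical helper for illustration only: estimate the number of visual tokens
# for an image, assuming one token per 28 x 28 pixel area (28 = 14 x 2, as noted above).
def approx_num_visual_tokens(height: int, width: int, token_area: int = 28 * 28) -> int:
    return (height * width) // token_area

min_pixels = 256 * 28 * 28   # lower bound of roughly 256 tokens per image
max_pixels = 1024 * 28 * 28  # upper bound of roughly 1024 tokens per image

print(approx_num_visual_tokens(448, 448))  # 256 tokens for a 448 x 448 image
```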
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
By default, images and video content are directly included in the conversation. When handling multiple images, it's helpful to add labels to the images and videos for better reference. Users can control this behavior with the following settings: ```python conversation = [ { "role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "Hello, how are you?"} ] }, { "role": "assistant", "content": "I'm doing well, thank you for asking. How can I assist you today?" }, { "role": "user",
323_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
] }, { "role": "assistant", "content": "I'm doing well, thank you for asking. How can I assist you today?" }, { "role": "user", "content": [ {"type": "text", "text": "Can you describe these images and video?"}, {"type": "image"}, {"type": "image"}, {"type": "video"}, {"type": "text", "text": "These are from my vacation."} ] }, { "role": "assistant", "content": "I'd be happy to describe the images and video for you. Could you please provide more context about your vacation?" }, { "role": "user",
323_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
}, { "role": "user", "content": "It was a trip to the mountains. Can you see the details in the images and video?" } ]
323_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
# default: prompt_without_id = processor.apply_chat_template(conversation, add_generation_prompt=True)
323_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?<|vision_start|><|image_pad|><|vision_end|><|vision_start|><|image_pad|><|vision_end|><|vision_start|><|video_pad|><|vision_end|>These are from my
323_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'
323_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
# add ids prompt_with_id = processor.apply_chat_template(conversation, add_generation_prompt=True, add_vision_id=True)
323_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nPicture 1: <|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?Picture 2: <|vision_start|><|image_pad|><|vision_end|>Picture 3: <|vision_start|><|image_pad|><|vision_end|>Video 1: <|vision_start|><|video_pad|><|vision_end|>These are
323_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
3: <|vision_start|><|image_pad|><|vision_end|>Video 1: <|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'
323_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
.md
```
323_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#flash-attention-2-to-speed-up-generation
.md
First, make sure to install the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ``` Also, you should have hardware that is compatible with FlashAttention-2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
323_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#flash-attention-2-to-speed-up-generation
.md
To load and run a model using Flash Attention 2, simply add `attn_implementation="flash_attention_2"` when loading the model as follows: ```python import torch from transformers import Qwen2VLForConditionalGeneration
323_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#flash-attention-2-to-speed-up-generation
.md
model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", ) ```
323_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
This is the configuration class to store the configuration of a [`Qwen2VLModel`]. It is used to instantiate a Qwen2-VL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of Qwen2-VL-7B-Instruct [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
323_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 152064): Vocabulary size of the Qwen2VL model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`Qwen2VLModel`] hidden_size (`int`, *optional*, defaults to 8192): Dimension of the hidden representations.
323_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
hidden_size (`int`, *optional*, defaults to 8192): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 29568): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 80): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 64): Number of attention heads for each attention layer in the Transformer encoder. num_key_value_heads (`int`, *optional*, defaults to 8):
323_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
num_key_value_heads (`int`, *optional*, defaults to 8): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
323_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `32`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 32768):
323_7_4
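To illustrate the mean-pooling conversion described above, here is a minimal sketch (not an official conversion script; the tensor layout is an assumption) of grouping and averaging key/value projection weights from a multi-head checkpoint into GQA heads:

```python
import torch

def mha_to_gqa(kv_weight: torch.Tensor, num_heads: int, num_key_value_heads: int) -> torch.Tensor:
    # kv_weight: (num_heads * head_dim, hidden_size) key or value projection weight.
    head_dim = kv_weight.shape[0] // num_heads
    grouped = kv_weight.view(num_key_value_heads, num_heads // num_key_value_heads, head_dim, -1)
    # Mean-pool all original heads within each group.
    return grouped.mean(dim=1).reshape(num_key_value_heads * head_dim, -1)

# Example: pool 64 attention heads down to 8 key/value heads.
w = torch.randn(64 * 128, 8192)
print(mha_to_gqa(w, num_heads=64, num_key_value_heads=8).shape)  # torch.Size([1024, 8192])
```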
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
max_position_embeddings (`int`, *optional*, defaults to 32768): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`):
323_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to 1000000.0): The base period of the RoPE embeddings.
323_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
rope_theta (`float`, *optional*, defaults to 1000000.0): The base period of the RoPE embeddings. use_sliding_window (`bool`, *optional*, defaults to `False`): Whether to use sliding window attention. sliding_window (`int`, *optional*, defaults to 4096): Sliding window attention (SWA) window size. If not specified, will default to `4096`. max_window_layers (`int`, *optional*, defaults to 80):
323_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
max_window_layers (`int`, *optional*, defaults to 80): The number of layers that use SWA (Sliding Window Attention). The bottom layers use SWA while the top use full attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. vision_config (`Dict`, *optional*): The config for the visual encoder initialization. rope_scaling (`Dict`, *optional*):
323_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
vision_config (`Dict`, *optional*): The config for the visual encoder initialization. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work with a longer `max_position_embeddings`, we recommend you update this value accordingly. Expected contents: `rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
323_7_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
`rope_type` (`str`): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation. `factor` (`float`, *optional*): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*):
323_7_10
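As a concrete illustration of the fields described above, a hedged sketch of what a `rope_scaling` dictionary could look like; the values are placeholders, and whether a given checkpoint actually benefits from a new rope type depends on how it was trained:

```python
# Illustrative only: a 'yarn'-style rope_scaling dictionary that would be passed as the
# `rope_scaling` argument of the configuration. Values here are placeholders.
rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,    # target length ~ 4 x the original pre-trained length
    "beta_fast": 32,  # extrapolation boundary (default noted above)
    "beta_slow": 1,   # interpolation boundary (default noted above)
}
```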
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
original maximum pre-trained length. `original_max_position_embeddings` (`int`, *optional*): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining. `attention_factor` (`float`, *optional*): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the `factor` field to infer the suggested value. `beta_fast` (`float`, *optional*):
323_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
`factor` field to infer the suggested value. `beta_fast` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32. `beta_slow` (`float`, *optional*): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*):
323_7_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
ramp function. If unspecified, it defaults to 1. `short_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (>
323_7_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
`long_factor` (`List[float]`, *optional*): Only used with 'longrope'. The scaling factor to be applied to long contexts (> `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2 `low_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE `high_freq_factor` (`float`, *optional*):
323_7_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
`high_freq_factor` (`float`, *optional*): Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE ```python >>> from transformers import Qwen2VLForConditionalGeneration, Qwen2VLConfig
323_7_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
.md
>>> # Initializing a Qwen2VL style configuration >>> configuration = Qwen2VLConfig() >>> # Initializing a model from the Qwen2-VL-7B style configuration >>> model = Qwen2VLForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
323_7_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlimageprocessor
.md
Constructs a Qwen2-VL image processor that dynamically resizes images based on the original image resolution. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): Resampling filter to use when resizing the image. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`.
323_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlimageprocessor
.md
do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
323_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlimageprocessor
.md
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`): Mean to use if normalizing the image. This is a float or list of floats for each channel in the image. image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`): Standard deviation to use if normalizing the image. This is a float or list of floats for each channel in the image. do_convert_rgb (`bool`, *optional*, defaults to `True`):
323_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlimageprocessor
.md
do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. min_pixels (`int`, *optional*, defaults to `56 * 56`): The minimum number of pixels an image is resized to. max_pixels (`int`, *optional*, defaults to `28 * 28 * 1280`): The maximum number of pixels an image is resized to. patch_size (`int`, *optional*, defaults to 14): The spatial patch size of the vision encoder. temporal_patch_size (`int`, *optional*, defaults to 2): The temporal patch size of the vision encoder.
323_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlimageprocessor
.md
temporal_patch_size (`int`, *optional*, defaults to 2): The temporal patch size of the vision encoder. merge_size (`int`, *optional*, defaults to 2): The merge size of the vision encoder to the LLM encoder. Methods: preprocess
323_8_4
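A brief, hedged usage sketch tying these arguments together (the `/path/to/image1.jpg` path is a placeholder, and the exact output keys may differ between versions):

```python
from PIL import Image
from transformers import Qwen2VLImageProcessor

# Build the image processor with an explicit pixel budget, as described above.
image_processor = Qwen2VLImageProcessor(min_pixels=256 * 28 * 28, max_pixels=1024 * 28 * 28)

image = Image.open("/path/to/image1.jpg")  # placeholder path
inputs = image_processor(images=image, return_tensors="pt")
print({k: v.shape for k, v in inputs.items()})  # e.g. pixel_values and image_grid_thw
```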
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlprocessor
.md
Constructs a Qwen2-VL processor which wraps a Qwen2-VL image processor and a Qwen2 tokenizer into a single processor. [`Qwen2VLProcessor`] offers all the functionalities of [`Qwen2VLImageProcessor`] and [`Qwen2TokenizerFast`]. See the [`~Qwen2VLProcessor.__call__`] and [`~Qwen2VLProcessor.decode`] for more information. Args: image_processor ([`Qwen2VLImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`Qwen2TokenizerFast`], *optional*): The tokenizer is a required input.
323_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlprocessor
.md
The image processor is a required input. tokenizer ([`Qwen2TokenizerFast`], *optional*): The tokenizer is a required input. chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
323_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlmodel
.md
The bare Qwen2VL Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
323_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Qwen2VLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
323_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
323_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlforconditionalgeneration
.md
No docstring available for Qwen2VLForConditionalGeneration Methods: forward
323_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the MIT License; you may not use this file except in compliance with the License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
324_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/
.md
specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
324_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#overview
.md
The SuperPoint model was proposed in [SuperPoint: Self-Supervised Interest Point Detection and Description](https://arxiv.org/abs/1712.07629) by Daniel DeTone, Tomasz Malisiewicz and Andrew Rabinovich. This model is the result of self-supervised training of a fully-convolutional network for interest point detection and description. The model is able to detect interest points that are repeatable under homographic transformations and
324_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#overview
.md
description. The model is able to detect interest points that are repeatable under homographic transformations and provide a descriptor for each point. The use of the model on its own is limited, but it can be used as a feature extractor for other tasks such as homography estimation, image matching, etc. The abstract from the paper is the following: *This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a
324_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#overview
.md
*This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography
324_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#overview
.md
associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other
324_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#overview
.md
to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/superpoint_architecture.png" alt="drawing" width="500"/>
324_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#overview
.md
alt="drawing" width="500"/> <small> SuperPoint overview. Taken from the <a href="https://arxiv.org/abs/1712.07629v4">original paper.</a> </small>
324_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
Here is a quick example of using the model to detect interest points in an image: ```python from transformers import AutoImageProcessor, SuperPointForKeypointDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint") model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")
324_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
inputs = processor(image, return_tensors="pt") outputs = model(**inputs) ``` The outputs contain the list of keypoint coordinates with their respective score and description (a 256-long vector). You can also feed multiple images to the model. Because SuperPoint outputs a dynamic number of keypoints, you will need to use the mask attribute to retrieve the respective information: ```python from transformers import AutoImageProcessor, SuperPointForKeypointDetection import torch
324_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
```python from transformers import AutoImageProcessor, SuperPointForKeypointDetection import torch from PIL import Image import requests
324_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
url_image_1 = "http://images.cocodataset.org/val2017/000000039769.jpg" image_1 = Image.open(requests.get(url_image_1, stream=True).raw) url_image_2 = "http://images.cocodataset.org/test-stuff2017/000000000568.jpg" image_2 = Image.open(requests.get(url_image_2, stream=True).raw) images = [image_1, image_2] processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint") model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")
324_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
inputs = processor(images, return_tensors="pt") outputs = model(**inputs) image_sizes = [(image.height, image.width) for image in images] outputs = processor.post_process_keypoint_detection(outputs, image_sizes)
324_2_4
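If you prefer to work with the raw, padded model outputs rather than the post-processing helper, here is a minimal sketch using the mask attribute mentioned earlier (assuming the forward output exposes keypoints, scores, descriptors and mask tensors; it reuses `model`, `inputs` and `images` from the example above):

```python
# Sketch only: retrieve the valid keypoints per image from the raw (padded) outputs.
raw_outputs = model(**inputs)
for i in range(len(images)):
    valid = raw_outputs.mask[i].bool()           # True where a real keypoint was detected
    keypoints = raw_outputs.keypoints[i][valid]  # relative (x, y) coordinates
    scores = raw_outputs.scores[i][valid]
    descriptors = raw_outputs.descriptors[i][valid]
    print(keypoints.shape, scores.shape, descriptors.shape)
```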
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
for output in outputs: for keypoints, scores, descriptors in zip(output["keypoints"], output["scores"], output["descriptors"]): print(f"Keypoints: {keypoints}") print(f"Scores: {scores}") print(f"Descriptors: {descriptors}") ``` You can then plot the keypoints on the image of your choice to visualize the result: ```python import matplotlib.pyplot as plt
324_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
plt.axis("off") plt.imshow(image_1) plt.scatter( outputs[0]["keypoints"][:, 0], outputs[0]["keypoints"][:, 1], c=outputs[0]["scores"] * 100, s=outputs[0]["scores"] * 50, alpha=0.8 ) plt.savefig(f"output_image.png") ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/ZtFmphEhx8tcbEQqOolyE.png) This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille).
324_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
.md
This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille). The original code can be found [here](https://github.com/magicleap/SuperPointPretrainedNetwork).
324_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SuperPoint. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
324_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#resources
.md
- A notebook showcasing inference and visualization with SuperPoint can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb). 🌎
324_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointconfig
.md
This is the configuration class to store the configuration of a [`SuperPointForKeypointDetection`]. It is used to instantiate a SuperPoint model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SuperPoint [magic-leap-community/superpoint](https://huggingface.co/magic-leap-community/superpoint) architecture.
324_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointconfig
.md
[magic-leap-community/superpoint](https://huggingface.co/magic-leap-community/superpoint) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: encoder_hidden_sizes (`List`, *optional*, defaults to `[64, 64, 128, 128]`): The number of channels in each convolutional layer in the encoder.
324_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointconfig
.md
The number of channels in each convolutional layer in the encoder. decoder_hidden_size (`int`, *optional*, defaults to 256): The hidden size of the decoder. keypoint_decoder_dim (`int`, *optional*, defaults to 65): The output dimension of the keypoint decoder. descriptor_decoder_dim (`int`, *optional*, defaults to 256): The output dimension of the descriptor decoder. keypoint_threshold (`float`, *optional*, defaults to 0.005): The threshold to use for extracting keypoints.
324_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointconfig
.md
keypoint_threshold (`float`, *optional*, defaults to 0.005): The threshold to use for extracting keypoints. max_keypoints (`int`, *optional*, defaults to -1): The maximum number of keypoints to extract. If `-1`, will extract all keypoints. nms_radius (`int`, *optional*, defaults to 4): The radius for non-maximum suppression. border_removal_distance (`int`, *optional*, defaults to 4): The distance from the border to remove keypoints. initializer_range (`float`, *optional*, defaults to 0.02):
324_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointconfig
.md
The distance from the border to remove keypoints. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers import SuperPointConfig, SuperPointForKeypointDetection
324_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointconfig
.md
>>> # Initializing a SuperPoint superpoint style configuration >>> configuration = SuperPointConfig() >>> # Initializing a model from the superpoint style configuration >>> model = SuperPointForKeypointDetection(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
324_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointimageprocessor
.md
Constructs a SuperPoint image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]` *optional*, defaults to `{"height": 480, "width": 640}`): Resolution of the output image after `resize` is applied. Only has an effect if `do_resize` is set to `True`. Can be overridden by `size` in the `preprocess` method.
324_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointimageprocessor
.md
`True`. Can be overridden by `size` in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method. Methods: preprocess - post_process_keypoint_detection
324_5_1
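A short, hedged sketch of overriding these defaults when constructing the image processor (argument names follow the list above):

```python
from transformers import SuperPointImageProcessor

# Illustrative only: resize to a custom resolution and keep the default rescaling.
image_processor = SuperPointImageProcessor(
    do_resize=True,
    size={"height": 240, "width": 320},
    do_rescale=True,
    rescale_factor=1 / 255,
)
print(image_processor.size)
```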
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointforkeypointdetection
.md
SuperPoint model outputting keypoints and descriptors. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SuperPointConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
324_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointforkeypointdetection
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. SuperPoint model. It consists of a SuperPointEncoder, a SuperPointInterestPointDecoder and a SuperPointDescriptorDecoder. SuperPoint was proposed in [SuperPoint: Self-Supervised Interest Point Detection and
324_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointforkeypointdetection
.md
SuperPointDescriptorDecoder. SuperPoint was proposed in [SuperPoint: Self-Supervised Interest Point Detection and Description](https://arxiv.org/abs/1712.07629) by Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. It is a fully convolutional neural network that extracts keypoints and descriptors from an image. It is trained in a self-supervised manner, using a combination of a photometric loss and a loss based on the homographic adaptation of
324_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointforkeypointdetection
.md
self-supervised manner, using a combination of a photometric loss and a loss based on the homographic adaptation of keypoints. It is made of a convolutional encoder and two decoders: one for keypoints and one for descriptors. Methods: forward
324_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
325_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
325_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/#overview
.md
FLAN-T5 was released in the paper [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) - it is an enhanced version of T5 that has been fine-tuned on a mixture of tasks. One can directly use FLAN-T5 weights without fine-tuning the model: ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
325_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/#overview
.md
>>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Pour a cup of bolognese into a large bowl and add the pasta'] ``` FLAN-T5 includes the same improvements as T5 version 1.1 (see [here](https://huggingface.co/docs/transformers/model_doc/t5v1.1) for the full details of the model's improvements.) Google has released the following variants:
325_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/#overview
.md
Google has released the following variants: - [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) - [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) - [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) - [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) - [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl).
325_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/#overview
.md
- [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl). The original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints). <Tip> Refer to [T5's documentation page](t5) for all API reference, code examples and notebooks. For more details regarding training and evaluation of FLAN-T5, refer to the model card. </Tip>
325_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
326_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
326_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrl
.md
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=ctrl"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-ctrl-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/tiny-ctrl"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
326_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#overview
.md
The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The abstract from the paper is the following:
326_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#overview
.md
The abstract from the paper is the following: *Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while
326_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#overview
.md
derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution.* This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found
326_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#overview
.md
This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found [here](https://github.com/salesforce/ctrl).
326_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#usage-tips
.md
- CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences or links in order to produce coherent text. Refer to the [original implementation](https://github.com/salesforce/ctrl) for more information. - CTRL is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
326_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#usage-tips
.md
the left. - CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text as can be observed in the *run_generation.py* example script. - The PyTorch models can take the `past_key_values` as input, which is the previously computed key/value attention pairs.
326_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#usage-tips
.md
- The PyTorch models can take the `past_key_values` as input, which is the previously computed key/value attention pairs. TensorFlow models accept `past` as input. Using the `past_key_values` value prevents the model from re-computing pre-computed values in the context of text generation. See the [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) method for more information on the usage of this argument.
326_3_2
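As a rough illustration of the `past_key_values` usage described above (a sketch only; the Salesforce/ctrl checkpoint is large, so running it requires substantial memory):

```python
import torch
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# Start the prompt with a control code (here "Links"), as the usage tips suggest.
inputs = tokenizer("Links Hello, my name is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, use_cache=True)
past_key_values = outputs.past_key_values

# Feed only the newly chosen token together with the cached key/value pairs,
# so previously computed attention values are not recomputed.
next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
with torch.no_grad():
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
```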
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling)
326_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
.md
This is the configuration class to store the configuration of a [`CTRLModel`] or a [`TFCTRLModel`]. It is used to instantiate a CTRL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [Salesforce/ctrl](https://huggingface.co/Salesforce/ctrl) architecture from Salesforce. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
326_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 246534): Vocabulary size of the CTRL model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`CTRLModel`] or [`TFCTRLModel`]. n_positions (`int`, *optional*, defaults to 256):
326_5_1
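Mirroring the configuration examples shown for the other models above, a minimal sketch of instantiating the configuration and a model from it:

```python
>>> from transformers import CTRLConfig, CTRLModel

>>> # Initializing a CTRL configuration
>>> configuration = CTRLConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = CTRLModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```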