source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#usage-example | .md | >>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # the Chinese names for Squirtle, Bulbasaur, Charmander and Pikachu
>>> texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
>>> # compute image feature
>>> inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**inputs)
>>> image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize | 105_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#usage-example | .md | >>> # compute text features
>>> inputs = processor(text=texts, padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
>>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize | 105_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#usage-example | .md | >>> # compute image-text similarity scores
>>> inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]
```
Currently, the following scales of pretrained Chinese-CLIP models are available on the 🤗 Hub: | 105_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#usage-example | .md | ```
Currently, the following scales of pretrained Chinese-CLIP models are available on the 🤗 Hub:
- [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16)
- [OFA-Sys/chinese-clip-vit-large-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14)
- [OFA-Sys/chinese-clip-vit-large-patch14-336px](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14-336px) | 105_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#usage-example | .md | - [OFA-Sys/chinese-clip-vit-large-patch14-336px](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14-336px)
- [OFA-Sys/chinese-clip-vit-huge-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-huge-patch14) | 105_2_5 |
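The snippets above assume that `model` and `processor` have already been created. A minimal end-to-end sketch, assuming the base-patch16 checkpoint listed above and network access to the Hub, could look like this:
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import ChineseCLIPModel, ChineseCLIPProcessor

>>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

>>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]

>>> # score each candidate text against the image and turn the logits into probabilities
>>> inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> probs = outputs.logits_per_image.softmax(dim=1)
```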
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipconfig | .md | [`ChineseCLIPConfig`] is the configuration class to store the configuration of a [`ChineseCLIPModel`]. It is used
to instantiate a Chinese-CLIP model according to the specified arguments, defining the text model and vision model
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the
Chinese-CLIP [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16)
architecture. | 105_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipconfig | .md | architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`ChineseCLIPTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`ChineseCLIPVisionConfig`].
projection_dim (`int`, *optional*, defaults to 512): | 105_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipconfig | .md | projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
The initial value of the *logit_scale* parameter. Default is used as per the original ChineseCLIP
implementation.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import ChineseCLIPConfig, ChineseCLIPModel | 105_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipconfig | .md | >>> # Initializing a ChineseCLIPConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
>>> configuration = ChineseCLIPConfig()
>>> # Initializing a ChineseCLIPModel (with random weights) from the OFA-Sys/chinese-clip-vit-base-patch16 style configuration
>>> model = ChineseCLIPModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a ChineseCLIPConfig from a ChineseCLIPTextConfig and a ChineseCLIPVisionConfig | 105_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipconfig | .md | >>> # We can also initialize a ChineseCLIPConfig from a ChineseCLIPTextConfig and a ChineseCLIPVisionConfig
>>> # Initializing a ChineseCLIPTextConfig and ChineseCLIPVisionConfig configuration
>>> config_text = ChineseCLIPTextConfig()
>>> config_vision = ChineseCLIPVisionConfig()
>>> config = ChineseCLIPConfig.from_text_vision_configs(config_text, config_vision)
```
Methods: from_text_vision_configs | 105_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | This is the configuration class to store the configuration of a [`ChineseCLIPModel`]. It is used to instantiate a
Chinese CLIP model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Chinese CLIP
[OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture. | 105_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | [OFA-Sys/chinese-clip-vit-base-patch16](https:
//huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the CHINESE_CLIP model. Defines the number of different tokens that can be represented | 105_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | Vocabulary size of the CHINESE_CLIP model. Defines the number of different tokens that can be represented
by the `inputs_ids` passed when calling [`ChineseCLIPModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12): | 105_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): | 105_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities. | 105_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`ChineseCLIPModel`]. | 105_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | The vocabulary size of the `token_type_ids` passed when calling [`ChineseCLIPModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (`float`, *optional*, defaults to 1e-12): | 105_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | testing).
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 0):
Padding token id.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to | 105_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
use_cache (`bool`, *optional*, defaults to `True`): | 105_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
Example:
```python
>>> from transformers import ChineseCLIPTextConfig, ChineseCLIPTextModel | 105_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextconfig | .md | >>> # Initializing a ChineseCLIPTextConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
>>> configuration = ChineseCLIPTextConfig()
>>> # Initializing a ChineseCLIPTextModel (with random weights) from the OFA-Sys/chinese-clip-vit-base-patch16 style configuration
>>> model = ChineseCLIPTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 105_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | This is the configuration class to store the configuration of a [`ChineseCLIPModel`]. It is used to instantiate an
ChineseCLIP model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the ChineseCLIP
[OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture. | 105_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072): | 105_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12): | 105_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 32):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): | 105_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities. | 105_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
Example:
```python | 105_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | testing).
Example:
```python
>>> from transformers import ChineseCLIPVisionConfig, ChineseCLIPVisionModel | 105_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionconfig | .md | >>> # Initializing a ChineseCLIPVisionConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
>>> configuration = ChineseCLIPVisionConfig()
>>> # Initializing a ChineseCLIPVisionModel (with random weights) from the OFA-Sys/chinese-clip-vit-base-patch16 style configuration
>>> model = ChineseCLIPVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 105_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipimageprocessor | .md | Constructs a Chinese-CLIP image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with | 105_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipimageprocessor | .md | Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`): | 105_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipimageprocessor | .md | do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
`preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to 224):
Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`): | 105_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipimageprocessor | .md | method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`): | 105_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipimageprocessor | .md | method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. | 105_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipimageprocessor | .md | channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. | 105_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipimageprocessor | .md | Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
Methods: preprocess | 105_6_6 |
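As a brief, hedged illustration of the defaults documented above (the random array below is only a stand-in for a real image):
```python
>>> import numpy as np
>>> from transformers import ChineseCLIPImageProcessor

>>> image_processor = ChineseCLIPImageProcessor()  # default resize/crop/rescale/normalize settings described above

>>> dummy_image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in RGB image
>>> encoding = image_processor(images=dummy_image, return_tensors="pt")
>>> print(encoding["pixel_values"].shape)  # torch.Size([1, 3, 224, 224]) with the default 224x224 center crop
```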
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipfeatureextractor | .md | No docstring available for ChineseCLIPFeatureExtractor | 105_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipprocessor | .md | Constructs a Chinese-CLIP processor which wraps a Chinese-CLIP image processor and a Chinese-CLIP tokenizer into a
single processor.
[`ChineseCLIPProcessor`] offers all the functionalities of [`ChineseCLIPImageProcessor`] and [`BertTokenizerFast`].
See the [`~ChineseCLIPProcessor.__call__`] and [`~ChineseCLIPProcessor.decode`] for more information.
Args:
image_processor ([`ChineseCLIPImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`BertTokenizerFast`], *optional*): | 105_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipprocessor | .md | The image processor is a required input.
tokenizer ([`BertTokenizerFast`], *optional*):
The tokenizer is a required input. | 105_8_1 |
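A short, hedged sketch of constructing the processor from its two components (in practice `ChineseCLIPProcessor.from_pretrained` is usually more convenient); the checkpoint name follows the usage example above:
```python
>>> from transformers import BertTokenizerFast, ChineseCLIPImageProcessor, ChineseCLIPProcessor

>>> # both components are required at runtime even though the signature marks them *optional*
>>> image_processor = ChineseCLIPImageProcessor()
>>> tokenizer = BertTokenizerFast.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> processor = ChineseCLIPProcessor(image_processor=image_processor, tokenizer=tokenizer)
```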
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipmodel | .md | This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ChineseCLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 105_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipmodel | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_image_features | 105_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextmodel | .md | The text model from CHINESE_CLIP without any head or projection on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ChineseCLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 105_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextmodel | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention is | 105_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextmodel | .md | cross-attention is added between the self-attention layers, following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set | 105_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chinesecliptextmodel | .md | To behave as an decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
to `True`. To be used in a Seq2Seq model, the model needs to initialized with both `is_decoder` argument and
`add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward | 105_10_3 |
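A hedged sketch of the decoder setup described above, using a randomly initialized model and illustrative tensor shapes:
```python
>>> import torch
>>> from transformers import ChineseCLIPTextConfig, ChineseCLIPTextModel

>>> config = ChineseCLIPTextConfig(is_decoder=True, add_cross_attention=True)
>>> model = ChineseCLIPTextModel(config)  # random weights

>>> input_ids = torch.tensor([[101, 102]])  # illustrative token ids
>>> encoder_hidden_states = torch.randn(1, 4, config.hidden_size)  # hidden states from some encoder
>>> outputs = model(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
```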
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionmodel | .md | The vision model from CHINESE_CLIP without any head or projection on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ChineseCLIPConfig`]): Model configuration class with all the parameters of the model. | 105_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/chinese_clip.md | https://huggingface.co/docs/transformers/en/model_doc/chinese_clip/#chineseclipvisionmodel | .md | behavior.
Parameters:
config ([`ChineseCLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 105_11_1 |
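A minimal sketch of extracting pooled vision features with this class, assuming the base-patch16 checkpoint from the usage example:
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import ChineseCLIPImageProcessor, ChineseCLIPVisionModel

>>> model = ChineseCLIPVisionModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> image_processor = ChineseCLIPImageProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

>>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> pooled_output = outputs.pooler_output  # pooled features of the vision encoder
```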
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 106_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 106_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#overview | .md | The PhoBERT model was proposed in [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92.pdf) by Dat Quoc Nguyen, Anh Tuan Nguyen.
The abstract from the paper is the following:
*We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual
language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent | 106_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#overview | .md | language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent
best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple
Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and
Natural language inference.* | 106_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#overview | .md | Natural language inference.*
This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/PhoBERT). | 106_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#usage-example | .md | ```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> phobert = AutoModel.from_pretrained("vinai/phobert-base")
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
>>> # INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
>>> line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
>>> input_ids = torch.tensor([tokenizer.encode(line)])
>>> with torch.no_grad():
... features = phobert(input_ids) # Models outputs are now tuples | 106_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#usage-example | .md | >>> with torch.no_grad():
... features = phobert(input_ids) # Models outputs are now tuples
>>> # With TensorFlow 2.0+:
>>> # from transformers import TFAutoModel
>>> # phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```
<Tip>
The PhoBERT implementation is the same as BERT's, except for tokenization. Refer to the [BERT documentation](bert) for information on
configuration classes and their parameters. The PhoBERT-specific tokenizer is documented below.
</Tip> | 106_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#phoberttokenizer | .md | Construct a PhoBERT tokenizer. Based on Byte-Pair-Encoding.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip> | 106_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#phoberttokenizer | .md | The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`. | 106_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#phoberttokenizer | .md | The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`): | 106_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#phoberttokenizer | .md | token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. | 106_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phobert.md | https://huggingface.co/docs/transformers/en/model_doc/phobert/#phoberttokenizer | .md | The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict. | 106_3_4 |
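A short, hedged sketch of the tokenizer in isolation, reusing the word-segmented sentence from the usage example above:
```python
>>> from transformers import PhobertTokenizer

>>> tokenizer = PhobertTokenizer.from_pretrained("vinai/phobert-base")

>>> # input text must already be word-segmented
>>> line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
>>> encoded = tokenizer(line)
>>> tokenizer.decode(encoded["input_ids"])  # the sequence is wrapped in <s> ... </s> special tokens
```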
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 107_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 107_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | The Fuyu model was created by [ADEPT](https://www.adept.ai/blog/fuyu-8b), and authored by Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar.
The authors introduced Fuyu-8B, a decoder-only multimodal model based on the classic transformers architecture, with query and key normalization. A linear encoder is added to create multimodal embeddings from image inputs. | 107_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | By treating image tokens like text tokens and using a special image-newline character, the model knows when an image line ends. Image positional embeddings are removed. This avoids the need for different training phases for various image resolutions. With 8 billion parameters and licensed under CC-BY-NC, Fuyu-8B is notable for its ability to handle both text and images, its impressive context size of 16K, and its overall performance.
<Tip warning={true}> | 107_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | <Tip warning={true}>
The `Fuyu` models were trained using `bfloat16`, but the original inference uses `float16`. The checkpoints uploaded on the Hub use `torch_dtype = 'float16'`, which will be
used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`. | 107_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | The `dtype` of the online weights is mostly irrelevant, unless you are using `torch_dtype="auto"` when initializing a model using `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto")`. The reason is that the model will first be downloaded ( using the `dtype` of the checkpoints online) then it will be cast to the default `dtype` of `torch` (becomes `torch.float32`). Users should specify the `torch_dtype` they want, and if they don't it will be `torch.float32`. | 107_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | Finetuning the model in `float16` is not recommended and known to produce `nan`, as such the model should be fine-tuned in `bfloat16`.
</Tip>
Tips:
- To convert the model, you need to clone the original repository using `git clone https://github.com/persimmon-ai-labs/adept-inference`, then get the checkpoints:
```bash
git clone https://github.com/persimmon-ai-labs/adept-inference
wget path/to/fuyu-8b-model-weights.tar
tar -xvf fuyu-8b-model-weights.tar | 107_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | wget path/to/fuyu-8b-model-weights.tar
tar -xvf fuyu-8b-model-weights.tar
python src/transformers/models/fuyu/convert_fuyu_weights_to_hf.py --input_dir /path/to/downloaded/fuyu/weights/ --output_dir /output/path \
--pt_model_path /path/to/fuyu_8b_release/iter_0001251/mp_rank_00/model_optim_rng.pt \
--ada_lib_path /path/to/adept-inference
```
For the chat model:
```bash
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar | 107_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | tar -xvf 8b_base_model_release.tar
```
Then, the model can be loaded via:
```py
from transformers import FuyuForCausalLM

model = FuyuForCausalLM.from_pretrained("/output/path")
```
Inputs need to be passed through a specific Processor to have the correct formats.
A processor requires an image_processor and a tokenizer. Hence, inputs can be loaded via:
```py
from PIL import Image
from transformers import AutoTokenizer | 107_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | ```py
from PIL import Image
from transformers import AutoTokenizer
from transformers.models.fuyu.processing_fuyu import FuyuProcessor
from transformers.models.fuyu.image_processing_fuyu import FuyuImageProcessor | 107_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | tokenizer = AutoTokenizer.from_pretrained('adept-hf-collab/fuyu-8b')
image_processor = FuyuImageProcessor()
processor = FuyuProcessor(image_processor=image_processor, tokenizer=tokenizer)
text_prompt = "Generate a coco-style caption.\\n"
bus_image_url = "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/bus.png"
bus_image_pil = Image.open(io.BytesIO(requests.get(bus_image_url).content))
inputs_to_model = processor(images=bus_image_pil, text=text_prompt) | 107_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | ```
This model was contributed by [Molbap](https://huggingface.co/Molbap).
The original code can be found [here](https://github.com/persimmon-ai-labs/adept-inference).
- Fuyu uses a `sentencepiece` based tokenizer, with a `Unigram` model. It supports bytefallback, which is only available in `tokenizers==0.14.0` for the fast tokenizer.
The `LlamaTokenizer` is used as it is a standard wrapper around sentencepiece. | 107_1_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#overview | .md | The `LlamaTokenizer` is used as it is a standard wrapper around sentencepiece.
- The authors suggest using the following prompt for image captioning: `f"Generate a coco-style caption.\\n"` | 107_1_10 |
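Building on the processor snippet above, a hedged end-to-end captioning sketch (the checkpoint name and generation settings here are illustrative, not prescriptive):
```py
import io

import requests
import torch
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor

processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b", torch_dtype=torch.bfloat16)

text_prompt = "Generate a coco-style caption.\n"
bus_image_url = "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/bus.png"
bus_image_pil = Image.open(io.BytesIO(requests.get(bus_image_url).content))

inputs = processor(images=bus_image_pil, text=text_prompt, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)

# only decode the newly generated tokens, not the prompt
new_tokens = generated_ids[:, inputs["input_ids"].shape[1]:]
caption = processor.batch_decode(new_tokens, skip_special_tokens=True)[0]
```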
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | This is the configuration class to store the configuration of a [`FuyuForCausalLM`]. It is used to instantiate an
Fuyu model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the
[adept/fuyu-8b](https://huggingface.co/adept/fuyu-8b).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 107_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 262144):
Vocabulary size of the Fuyu model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`FuyuForCausalLM`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations. | 107_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 16384):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 36):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 64):
Number of attention heads for each attention layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"relu2"`): | 107_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | hidden_act (`str` or `function`, *optional*, defaults to `"relu2"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 16384):
The maximum sequence length that this model might ever be used with.
image_size (`int`, *optional*, defaults to 300):
The input image size.
patch_size (`int`, *optional*, defaults to 30):
The input vision transformer encoding patch size.
num_channels (`int`, *optional*, defaults to 3): | 107_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | The input vision transformer encoding patch size.
num_channels (`int`, *optional*, defaults to 3):
The input image number of channels.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`): | 107_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie input and output embeddings.
rope_theta (`float`, *optional*, defaults to 25000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*): | 107_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how | 107_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | `max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalFuyu/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
qk_layernorm (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the Queries and Keys after projecting the hidden states | 107_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | Whether or not to normalize the Queries and Keys after projecting the hidden states
hidden_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio after applying the MLP to the hidden states.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio after computing the attention scores.
partial_rotary_factor (`float`, *optional*, defaults to 0.5):
Percentage of the query and keys which will have rotary embedding.
pad_token_id (`int`, *optional*):
The id of the *padding* token. | 107_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | pad_token_id (`int`, *optional*):
The id of the *padding* token.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the *beginning-of-sequence* token.
eos_token_id (`Union[int, List[int]]`, *optional*, defaults to 2):
The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens.
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize the underlying language model.
```python
>>> from transformers import FuyuConfig | 107_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuconfig | .md | >>> # Initializing a Fuyu fuyu-7b style configuration
>>> configuration = FuyuConfig()
``` | 107_2_10 |
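The defaults can also be overridden through keyword arguments, as with any configuration class; a hedged sketch using a few of the arguments documented above (the values are illustrative only):
```python
>>> from transformers import FuyuConfig

>>> # override a few of the arguments documented above; values are illustrative only
>>> configuration = FuyuConfig(
...     hidden_size=4096,
...     num_hidden_layers=36,
...     rope_theta=25000.0,
...     rope_scaling={"type": "linear", "factor": 2.0},
... )
```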
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuforcausallm | .md | Fuyu Model with a language modeling head on top for causal language model conditioned on image patches and text.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 107_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuforcausallm | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FuyuConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 107_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuforcausallm | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 107_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuimageprocessor | .md | This class should handle the image processing part before the main FuyuForCausalLM. In particular, it should
handle:
- Processing Images:
Taking a batch of images as input. If the images are variable-sized, it resizes them based on the desired patch
dimensions. The output image size is always (img_h, img_w) = (1080, 1920).
Then, it patches up these images using the patchify_image function.
- Creating Image Input IDs: | 107_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuimageprocessor | .md | Then, it patches up these images using the patchify_image function.
- Creating Image Input IDs:
For each patch, a placeholder ID is given to identify where these patches belong in a token sequence. For
variable-sized images, each line of patches is terminated with a newline ID.
- Image Patch Indices:
For each image patch, the code maintains an index where these patches should be inserted in a token stream.
Args:
do_resize (`bool`, *optional*, defaults to `True`): | 107_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuimageprocessor | .md | Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image to `size`.
size (`Dict[str, int]`, *optional*, defaults to `{"height": 1080, "width": 1920}`):
Dictionary in the format `{"height": int, "width": int}` specifying the size of the output image.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
`PILImageResampling` filter to use when resizing the image e.g. `PILImageResampling.BILINEAR`.
do_pad (`bool`, *optional*, defaults to `True`): | 107_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuimageprocessor | .md | do_pad (`bool`, *optional*, defaults to `True`):
Whether to pad the image to `size`.
padding_value (`float`, *optional*, defaults to 1.0):
The value to pad the image with.
padding_mode (`str`, *optional*, defaults to `"constant"`):
The padding mode to use when padding the image.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image.
image_mean (`float`, *optional*, defaults to 0.5):
The mean to use when normalizing the image.
image_std (`float`, *optional*, defaults to 0.5): | 107_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuimageprocessor | .md | The mean to use when normalizing the image.
image_std (`float`, *optional*, defaults to 0.5):
The standard deviation to use when normalizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image.
rescale_factor (`float`, *optional*, defaults to `1 / 255`):
The factor to use when rescaling the image.
patch_size (`Dict[str, int]`, *optional*, defaults to `{"height": 30, "width": 30}`): | 107_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuimageprocessor | .md | patch_size (`Dict[str, int]`, *optional*, defaults to `{"height": 30, "width": 30}`):
Dictionary in the format `{"height": int, "width": int}` specifying the size of the patches.
Methods: __call__ | 107_4_5 |
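A hedged sketch of running the image processor on its own (the random array below is only a stand-in for a real, variable-sized image):
```py
import numpy as np
from transformers import FuyuImageProcessor

image_processor = FuyuImageProcessor()

image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in RGB image

# resizes/pads to the 1080x1920 canvas described above, then rescales and normalizes
encoding = image_processor(image, return_tensors="pt")
print(list(encoding.keys()))
```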
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fuyu.md | https://huggingface.co/docs/transformers/en/model_doc/fuyu/#fuyuprocessor | .md | Constructs a Fuyu processor which wraps a Fuyu image processor and a Llama tokenizer into a single processor.
[`FuyuProcessor`] offers all the functionalities of [`FuyuImageProcessor`] and [`LlamaTokenizerFast`]. See the
[`~FuyuProcessor.__call__`] and [`~FuyuProcessor.decode`] for more information.
Args:
image_processor ([`FuyuImageProcessor`]):
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`]):
The tokenizer is a required input.
Methods: __call__ | 107_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md | https://huggingface.co/docs/transformers/en/model_doc/myt5/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 108_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md | https://huggingface.co/docs/transformers/en/model_doc/myt5/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 108_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md | https://huggingface.co/docs/transformers/en/model_doc/myt5/#overview | .md | The myt5 model was proposed in [MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual Language Modeling](https://arxiv.org/pdf/2403.10691.pdf) by Tomasz Limisiewicz, Terra Blevins, Hila Gonen, Orevaoghene Ahia, and Luke Zettlemoyer.
MyT5 (**My**te **T5**) is a multilingual language model based on the T5 architecture.
The model uses a **m**orphologically-driven **byte** (**MYTE**) representation described in our paper. | 108_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md | https://huggingface.co/docs/transformers/en/model_doc/myt5/#overview | .md | The model uses a **m**orphologically-driven **byte** (**MYTE**) representation described in our paper.
**MYTE** uses codepoints corresponding to morphemes in contrast to characters used in UTF-8 encoding.
As a pre-requisite, we used unsupervised morphological segmentation ([Morfessor](https://aclanthology.org/E14-2006.pdf)) to obtain morpheme inventories for 99 languages. | 108_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md | https://huggingface.co/docs/transformers/en/model_doc/myt5/#overview | .md | However, the morphological segmentation step is not needed when using the pre-defined morpheme inventory from the hub (see: [Tomli/myt5-base](https://huggingface.co/Tomlim/myt5-base)).
The abstract from the paper is the following: | 108_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md | https://huggingface.co/docs/transformers/en/model_doc/myt5/#overview | .md | *A major consideration in multilingual language modeling is how to best represent languages with diverse vocabularies and scripts. Although contemporary text encoding methods cover most of the world’s writing systems, they exhibit bias towards the high-resource languages of the Global West. As a result, texts of underrepresented languages tend to be segmented into long sequences of linguistically meaningless units. To address the disparities, we introduce a new paradigm that encodes the same information | 108_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/myt5.md | https://huggingface.co/docs/transformers/en/model_doc/myt5/#overview | .md | of linguistically meaningless units. To address the disparities, we introduce a new paradigm that encodes the same information with segments of consistent size across diverse languages. Our encoding convention (MYTE) is based on morphemes, as their inventories are more balanced across languages than characters, which are used in previous methods. We show that MYTE produces shorter encodings for all 99 analyzed languages, with the most notable improvements for non-European languages and non-Latin scripts. | 108_1_4 |