source | url | file_type | chunk | chunk_id
stringclasses (470 values) | stringlengths (49–167) | stringclasses (1 value) | stringlengths (1–512) | stringlengths (5–9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipmodel
|
.md
|
| 32 | 16 | 0.19 | 0.162 | 1.177 | 0.154 | 1.233 |
| 32 | 64 | 0.216 | 0.181 | 1.19 | 0.176 | 1.228 |
|
254_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP.
- [Fine tuning CLIP with Remote Sensing (Satellite) images and captions](https://huggingface.co/blog/fine-tune-clip-rsicd), a blog post about how to fine-tune CLIP with the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal) and a comparison of the performance changes due to data augmentation.
|
254_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#resources
|
.md
|
- This [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) shows how to train a CLIP-like vision-text dual encoder model with pre-trained vision and text encoders on the [COCO dataset](https://cocodataset.org/#home).
<PipelineTag pipeline="image-to-text"/>
|
254_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#resources
|
.md
|
<PipelineTag pipeline="image-to-text"/>
- A [notebook](https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing) on how to use a pretrained CLIP for inference with beam search for image captioning. 🌎
**Image retrieval**
- A [notebook](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing) on image retrieval using pretrained CLIP and computing the MRR (Mean Reciprocal Rank) score. 🌎
|
254_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#resources
|
.md
|
- A [notebook](https://colab.research.google.com/github/deep-diver/image_search_with_natural_language/blob/main/notebooks/Image_Search_CLIP.ipynb) on image retrieval that also shows the similarity scores. 🌎
- A [notebook](https://colab.research.google.com/drive/1xO-wC_m_GNzgjIBQ4a4znvQkvDoZJvH4?usp=sharing) on how to map images and texts to the same vector space using Multilingual CLIP. 🌎
|
254_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#resources
|
.md
|
- A [notebook](https://colab.research.google.com/github/vivien000/clip-demo/blob/master/clip.ipynb#scrollTo=uzdFhRGqiWkR) on how to run CLIP on semantic image search using [Unsplash](https://unsplash.com) and [TMDB](https://www.themoviedb.org/) datasets. 🌎
**Explainability**
- A [notebook](https://colab.research.google.com/github/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb) on how to visualize similarity between input token and image segment. 🌎
|
254_9_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#resources
|
.md
|
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
254_9_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipconfig
|
.md
|
[`CLIPConfig`] is the configuration class to store the configuration of a [`CLIPModel`]. It is used to instantiate
a CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating
a configuration with the defaults will yield a similar configuration to that of the CLIP
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
|
254_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipconfig
|
.md
|
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`CLIPTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`CLIPVisionConfig`].
|
254_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipconfig
|
.md
|
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`CLIPVisionConfig`].
projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
The initial value of the *logit_scale* parameter. Default is used as per the original CLIP implementation.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
|
254_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipconfig
|
.md
|
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import CLIPConfig, CLIPModel
|
254_10_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipconfig
|
.md
|
>>> # Initializing a CLIPConfig with openai/clip-vit-base-patch32 style configuration
>>> configuration = CLIPConfig()
>>> # Initializing a CLIPModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
>>> model = CLIPModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a CLIPConfig from a CLIPTextConfig and a CLIPVisionConfig
>>> from transformers import CLIPTextConfig, CLIPVisionConfig
|
254_10_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipconfig
|
.md
|
>>> # Initializing a CLIPText and CLIPVision configuration
>>> config_text = CLIPTextConfig()
>>> config_vision = CLIPVisionConfig()
>>> config = CLIPConfig.from_text_vision_configs(config_text, config_vision)
```
Methods: from_text_vision_configs
|
254_10_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
This is the configuration class to store the configuration of a [`CLIPTextModel`]. It is used to instantiate a CLIP
text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the text encoder of the CLIP
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
|
254_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 49408):
Vocabulary size of the CLIP text model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`CLIPModel`].
|
254_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
the `inputs_ids` passed when calling [`CLIPModel`].
hidden_size (`int`, *optional*, defaults to 512):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 2048):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
num_hidden_layers (`int`, *optional*, defaults to 12):
|
254_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
Dimensionality of text and vision projection layers.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (`int`, *optional*, defaults to 77):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
|
254_11_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
254_11_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
|
254_11_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
testing).
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 49406):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 49407):
End of stream token id.
Example:
```python
>>> from transformers import CLIPTextConfig, CLIPTextModel
|
254_11_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextconfig
|
.md
|
>>> # Initializing a CLIPTextConfig with openai/clip-vit-base-patch32 style configuration
>>> configuration = CLIPTextConfig()
>>> # Initializing a CLIPTextModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
>>> model = CLIPTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
254_11_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
This is the configuration class to store the configuration of a [`CLIPVisionModel`]. It is used to instantiate a
CLIP vision encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the vision encoder of the CLIP
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
|
254_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
|
254_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
|
254_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 32):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
|
254_12_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
254_12_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
Example:
```python
|
254_12_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
testing).
Example:
```python
>>> from transformers import CLIPVisionConfig, CLIPVisionModel
|
254_12_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionconfig
|
.md
|
>>> # Initializing a CLIPVisionConfig with openai/clip-vit-base-patch32 style configuration
>>> configuration = CLIPVisionConfig()
>>> # Initializing a CLIPVisionModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
>>> model = CLIPVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
254_12_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptokenizer
|
.md
|
Construct a CLIP tokenizer. Based on byte-level Byte-Pair-Encoding.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
|
254_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptokenizer
|
.md
|
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`):
The beginning of sequence token.
|
254_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptokenizer
|
.md
|
token instead.
bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The end of sequence token.
pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The token used for padding, for example when batching sequences of different lengths.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
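A minimal usage sketch (hedged: the example captions are illustrative, not part of the original docs; the checkpoint is the one referenced throughout this page):
```python
>>> from transformers import CLIPTokenizer

>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> # Tokenize a batch of captions; CLIP uses <|startoftext|> / <|endoftext|> as bos/eos tokens
>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> inputs["input_ids"].shape  # (batch_size, padded sequence length)
```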
|
254_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptokenizerfast
|
.md
|
Construct a "fast" CLIP tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`, *optional*):
Path to the vocabulary file.
merges_file (`str`, *optional*):
Path to the merges file.
tokenizer_file (`str`, *optional*):
|
254_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptokenizerfast
|
.md
|
Path to the vocabulary file.
merges_file (`str`, *optional*):
Path to the merges file.
tokenizer_file (`str`, *optional*):
The path to a tokenizer file to use instead of the vocab file.
unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`):
The beginning of sequence token.
|
254_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptokenizerfast
|
.md
|
token instead.
bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The end of sequence token.
pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The token used for padding, for example when batching sequences of different lengths.
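A short, hedged sketch of the fast tokenizer; the input sentence is illustrative. One thing the fast variant adds over the slow one is character-level offset mappings:
```python
>>> from transformers import CLIPTokenizerFast

>>> tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")

>>> # Fast tokenizers can return the character span each token covers
>>> encoding = tokenizer("a photo of a cat", return_offsets_mapping=True)
>>> encoding["input_ids"][:3], encoding["offset_mapping"][:3]
```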
|
254_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipimageprocessor
|
.md
|
Constructs a CLIP image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 224}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
|
254_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipimageprocessor
|
.md
|
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
|
254_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipimageprocessor
|
.md
|
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
`preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to 224):
Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`):
|
254_15_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipimageprocessor
|
.md
|
method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
|
254_15_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipimageprocessor
|
.md
|
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
|
254_15_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipimageprocessor
|
.md
|
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
|
254_15_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipimageprocessor
|
.md
|
Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
Methods: preprocess
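A hedged usage sketch (the sample COCO image URL is only illustrative; any RGB image works):
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPImageProcessor

>>> image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Resize, center crop, rescale and normalize, returning PyTorch tensors
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> inputs["pixel_values"].shape  # (1, 3, 224, 224)
```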
|
254_15_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipfeatureextractor
|
.md
|
No docstring available for CLIPFeatureExtractor
|
254_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipprocessor
|
.md
|
Constructs a CLIP processor which wraps a CLIP image processor and a CLIP tokenizer into a single processor.
[`CLIPProcessor`] offers all the functionalities of [`CLIPImageProcessor`] and [`CLIPTokenizerFast`]. See the
[`~CLIPProcessor.__call__`] and [`~CLIPProcessor.decode`] for more information.
Args:
image_processor ([`CLIPImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`CLIPTokenizerFast`], *optional*):
The tokenizer is a required input.
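A hedged sketch of typical usage (the candidate captions and the sample image URL are illustrative): one processor call prepares both modalities for [`CLIPModel`].
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPProcessor

>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

>>> # One call tokenizes the text and preprocesses the image
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> sorted(inputs.keys())  # ['attention_mask', 'input_ids', 'pixel_values']
```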
<frameworkcontent>
<pt>
|
254_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipmodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
254_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipmodel
|
.md
|
and behavior.
Parameters:
config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_image_features
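A hedged zero-shot classification sketch (the candidate captions and sample image URL are illustrative, not part of the original docs):
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPModel, CLIPProcessor

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Image-text similarity scores -> probabilities over the candidate captions
>>> probs = outputs.logits_per_image.softmax(dim=1)

>>> # The projected embeddings can also be computed separately
>>> text_embeds = model.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
>>> image_embeds = model.get_image_features(pixel_values=inputs["pixel_values"])
```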
|
254_18_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextmodel
|
.md
|
The text model from CLIP without any head or projection on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
254_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextmodel
|
.md
|
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
254_19_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
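A hedged forward-pass sketch (the caption is illustrative); the model returns per-token hidden states plus a pooled EOS-token representation:
```python
>>> import torch
>>> from transformers import CLIPTokenizer, CLIPTextModel

>>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_state = outputs.last_hidden_state  # per-token hidden states
>>> pooled_output = outputs.pooler_output  # EOS-token ("pooled") representation
```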
|
254_19_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextmodelwithprojection
|
.md
|
CLIP Text Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
254_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextmodelwithprojection
|
.md
|
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
254_20_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#cliptextmodelwithprojection
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
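A hedged sketch (the caption is illustrative); unlike [`CLIPTextModel`], the output here already carries the projected text embeddings:
```python
>>> import torch
>>> from transformers import CLIPTokenizer, CLIPTextModelWithProjection

>>> model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> text_embeds = outputs.text_embeds  # pooled output passed through the text projection layer
```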
|
254_20_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionmodelwithprojection
|
.md
|
CLIP Vision Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
254_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionmodelwithprojection
|
.md
|
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
254_21_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionmodelwithprojection
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
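A hedged sketch (the sample image URL is illustrative); the output carries the projected image embeddings:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

>>> model = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
>>> image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> image_embeds = outputs.image_embeds  # pooled output passed through the visual projection layer
```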
|
254_21_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionmodel
|
.md
|
The vision model from CLIP without any head or projection on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
254_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionmodel
|
.md
|
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
254_22_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipvisionmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
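A hedged forward-pass sketch (the sample image URL is illustrative); the model returns per-patch hidden states plus a pooled CLS representation:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPImageProcessor, CLIPVisionModel

>>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
>>> image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_state = outputs.last_hidden_state  # per-patch hidden states
>>> pooled_output = outputs.pooler_output  # CLS-token representation
```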
|
254_22_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipforimageclassification
|
.md
|
CLIP vision encoder with an image classification head on top (a linear layer on top of the pooled final hidden states of
the patch tokens), e.g. for ImageNet.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
254_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipforimageclassification
|
.md
|
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CLIPConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
254_23_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#clipforimageclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
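A hedged sketch (the sample image URL is illustrative). Note that the base CLIP checkpoint ships no classification head, so the head below is randomly initialized and would normally be fine-tuned on a labeled dataset first:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPImageProcessor, CLIPForImageClassification

>>> # Classification head is newly initialized for this checkpoint (expect a warning)
>>> model = CLIPForImageClassification.from_pretrained("openai/clip-vit-base-patch32")
>>> image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class = logits.argmax(-1).item()
```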
</pt>
<tf>
|
254_23_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#tfclipmodel
|
.md
|
No docstring available for TFCLIPModel
Methods: call
- get_text_features
- get_image_features
|
254_24_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#tfcliptextmodel
|
.md
|
No docstring available for TFCLIPTextModel
Methods: call
|
254_25_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#tfclipvisionmodel
|
.md
|
No docstring available for TFCLIPVisionModel
Methods: call
</tf>
<jax>
|
254_26_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#flaxclipmodel
|
.md
|
No docstring available for FlaxCLIPModel
Methods: __call__
- get_text_features
- get_image_features
|
254_27_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#flaxcliptextmodel
|
.md
|
No docstring available for FlaxCLIPTextModel
Methods: __call__
|
254_28_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#flaxcliptextmodelwithprojection
|
.md
|
No docstring available for FlaxCLIPTextModelWithProjection
Methods: __call__
|
254_29_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/clip.md
|
https://huggingface.co/docs/transformers/en/model_doc/clip/#flaxclipvisionmodel
|
.md
|
No docstring available for FlaxCLIPVisionModel
Methods: __call__
</jax>
</frameworkcontent>
|
254_30_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
255_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
255_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#overview
|
.md
|
The TextNet model was proposed in [FAST: Faster Arbitrarily-Shaped Text Detector with Minimalist Kernel Representation](https://arxiv.org/abs/2111.02394) by Zhe Chen, Jiahao Wang, Wenhai Wang, Guo Chen, Enze Xie, Ping Luo, Tong Lu. TextNet is a vision backbone useful for text detection tasks. It is the result of neural architecture search (NAS) over backbones, using text detection performance as the reward function, so that the backbone provides powerful features for text detection.
|
255_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/fast_architecture.png"
alt="drawing" width="600"/>
<small> TextNet backbone as part of FAST. Taken from the <a href="https://arxiv.org/abs/2111.02394">original paper.</a> </small>
This model was contributed by [Raghavan](https://huggingface.co/Raghavan), [jadechoghari](https://huggingface.co/jadechoghari) and [nielsr](https://huggingface.co/nielsr).
|
255_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#usage-tips
|
.md
|
TextNet is mainly used as a backbone network for the architecture search of text detection. Each stage of the backbone network is composed of a stride-2 convolution and searchable blocks.
Specifically, the layer-level candidate set is defined as {conv3×3, conv1×3, conv3×1, identity}. Because the 1×3 and 3×1 convolutions have asymmetric kernels and oriented structure priors, they help capture the features of extreme aspect-ratio and rotated text lines.
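As a hedged illustration of how the searched backbone is consumed downstream (the checkpoint name comes from the TextNetConfig section below; the sample image URL is only illustrative, and the checkpoint is assumed to ship a matching image processor config), [`TextNetBackbone`] returns one feature map per selected stage:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import TextNetImageProcessor, TextNetBackbone

>>> image_processor = TextNetImageProcessor.from_pretrained("czczup/textnet-base")
>>> backbone = TextNetBackbone.from_pretrained("czczup/textnet-base")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = backbone(**inputs)

>>> # One feature map per stage selected via `out_features` / `out_indices` in TextNetConfig
>>> for feature_map in outputs.feature_maps:
...     print(feature_map.shape)
```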
|
255_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#usage-tips
|
.md
|
TextNet is the backbone for FAST, but it can also be used for efficient text/image classification. We add a `TextNetForImageClassification` class, as it allows people to train an image classifier on top of the pre-trained TextNet weights.
|
255_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
This is the configuration class to store the configuration of a [`TextNetModel`]. It is used to instantiate a
TextNet model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the
[czczup/textnet-base](https://huggingface.co/czczup/textnet-base). Configuration objects inherit from
[`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`]
|
255_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
[`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`]
for more information.
Args:
stem_kernel_size (`int`, *optional*, defaults to 3):
The kernel size for the initial convolution layer.
stem_stride (`int`, *optional*, defaults to 2):
The stride for the initial convolution layer.
stem_num_channels (`int`, *optional*, defaults to 3):
The number of input channels for the initial convolution layer.
|
255_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
stem_num_channels (`int`, *optional*, defaults to 3):
The number of input channels for the initial convolution layer.
stem_out_channels (`int`, *optional*, defaults to 64):
The number of output channels for the initial convolution layer.
stem_act_func (`str`, *optional*, defaults to `"relu"`):
The activation function for the initial convolution layer.
image_size (`Tuple[int, int]`, *optional*, defaults to `[640, 640]`):
The size (resolution) of each image.
|
255_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
image_size (`Tuple[int, int]`, *optional*, defaults to `[640, 640]`):
The size (resolution) of each image.
conv_layer_kernel_sizes (`List[List[List[int]]]`, *optional*):
A list of stage-wise kernel sizes. If `None`, defaults to:
`[[[3, 3], [3, 3], [3, 3]], [[3, 3], [1, 3], [3, 3], [3, 1]], [[3, 3], [3, 3], [3, 1], [1, 3]], [[3, 3], [3, 1], [1, 3], [3, 3]]]`.
conv_layer_strides (`List[List[int]]`, *optional*):
A list of stage-wise strides. If `None`, defaults to:
|
255_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
conv_layer_strides (`List[List[int]]`, *optional*):
A list of stage-wise strides. If `None`, defaults to:
`[[1, 2, 1], [2, 1, 1, 1], [2, 1, 1, 1], [2, 1, 1, 1]]`.
hidden_sizes (`List[int]`, *optional*, defaults to `[64, 64, 128, 256, 512]`):
Dimensionality (hidden size) at each stage.
batch_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the batch normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
|
255_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
The epsilon used by the batch normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
out_features (`List[str]`, *optional*):
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
|
255_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the
corresponding stages. If unset and `out_indices` is unset, will default to the last stage.
out_indices (`List[int]`, *optional*):
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
|
255_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
If unset and `out_features` is unset, will default to the last stage.
Examples:
```python
>>> from transformers import TextNetConfig, TextNetBackbone
|
255_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetconfig
|
.md
|
>>> # Initializing a TextNetConfig
>>> configuration = TextNetConfig()
>>> # Initializing a model (with random weights)
>>> model = TextNetBackbone(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
255_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetimageprocessor
|
.md
|
Constructs a TextNet image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 640}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
|
255_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetimageprocessor
|
.md
|
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
size_divisor (`int`, *optional*, defaults to 32):
Ensures height and width are rounded to a multiple of this value after resizing.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
|
255_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetimageprocessor
|
.md
|
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `False`):
Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
`preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to 224):
|
255_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetimageprocessor
|
.md
|
`preprocess` method.
crop_size (`Dict[str, int]`, *optional*, defaults to 224):
Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
|
255_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetimageprocessor
|
.md
|
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`):
|
255_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetimageprocessor
|
.md
|
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
|
255_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetimageprocessor
|
.md
|
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
Methods: preprocess
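A hedged usage sketch (the sample image URL is illustrative, and the checkpoint is assumed to ship an image processor config):
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import TextNetImageProcessor

>>> image_processor = TextNetImageProcessor.from_pretrained("czczup/textnet-base")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

>>> # Resize so the shortest edge is 640 (rounded per `size_divisor`), then rescale and normalize
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> inputs["pixel_values"].shape
```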
|
255_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetmodel
|
.md
|
The bare TextNet model outputting raw features without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`TextNetConfig`]): Model configuration class with all the parameters of the model.
|
255_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetmodel
|
.md
|
behavior.
Parameters:
config ([`TextNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
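A hedged forward-pass sketch (the sample image URL is illustrative):
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import TextNetImageProcessor, TextNetModel

>>> image_processor = TextNetImageProcessor.from_pretrained("czczup/textnet-base")
>>> model = TextNetModel.from_pretrained("czczup/textnet-base")

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_state = outputs.last_hidden_state  # feature map from the last stage
```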
|
255_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetforimageclassification
|
.md
|
TextNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`TextNetConfig`]): Model configuration class with all the parameters of the model.
|
255_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/textnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/textnet/#textnetforimageclassification
|
.md
|
behavior.
Parameters:
config ([`TextNetConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
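A hedged sketch (the sample image URL and `num_labels=2` are illustrative). The base checkpoint ships without a trained classification head, so the head below is randomly initialized and would normally be fine-tuned first:
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import TextNetImageProcessor, TextNetForImageClassification

>>> image_processor = TextNetImageProcessor.from_pretrained("czczup/textnet-base")
>>> # Classification head is newly initialized for this checkpoint (expect a warning)
>>> model = TextNetForImageClassification.from_pretrained("czczup/textnet-base", num_labels=2)

>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = logits.argmax(-1).item()
```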
|
255_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
256_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
256_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
The RT-DETR model was proposed in [DETRs Beat YOLOs on Real-time Object Detection](https://arxiv.org/abs/2304.08069) by Wenyu Lv, Yian Zhao, Shangliang Xu, Jinman Wei, Guanzhong Wang, Cheng Cui, Yuning Du, Qingqing Dang, Yi Liu.
|
256_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
RT-DETR is an object detection model that stands for "Real-Time DEtection Transformer." This model is designed to perform object detection tasks with a focus on achieving real-time performance while maintaining high accuracy. Leveraging the transformer architecture, which has gained significant popularity in various fields of deep learning, RT-DETR processes images to identify and locate multiple objects within them.
The abstract from the paper is the following:
|
256_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
*Recently, end-to-end transformer-based detectors (DETRs) have achieved remarkable performance. However, the issue of the high computational cost of DETRs has not been effectively addressed, limiting their practical application and preventing them from fully exploiting the benefits of no post-processing, such as non-maximum suppression (NMS). In this paper, we first analyze the influence of NMS in modern real-time object detectors on inference speed, and establish an end-to-end speed benchmark. To avoid
|
256_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rt_detr.md
|
https://huggingface.co/docs/transformers/en/model_doc/rt_detr/#overview
|
.md
|
influence of NMS in modern real-time object detectors on inference speed, and establish an end-to-end speed benchmark. To avoid the inference delay caused by NMS, we propose a Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge. Specifically, we design an efficient hybrid encoder to efficiently process multi-scale features by decoupling the intra-scale interaction and cross-scale fusion, and propose IoU-aware query selection to improve the
|
256_1_3
|