# ViTPose

## Overview
*Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and
transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking the advantages of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitpose-architecture.png"
alt="drawing" width="600"/>
<small> ViTPose architecture. Taken from the <a href="https://arxiv.org/abs/2204.12484">original paper.</a> </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr) and [sangbumchoi](https://github.com/SangbumChoi).
The original code can be found [here](https://github.com/ViTAE-Transformer/ViTPose).
## Usage tips
ViTPose is a so-called top-down keypoint detection model. This means that one first uses an object detector, like [RT-DETR](rt_detr.md), to detect people (or other instances) in an image. Next, ViTPose takes the cropped images as input and predicts the keypoints for each of them.
```py
import torch
import requests
import numpy as np
from PIL import Image
from transformers import AutoProcessor, RTDetrForObjectDetection, VitPoseForPoseEstimation
device = "cuda" if torch.cuda.is_available() else "cpu"
url = "http://images.cocodataset.org/val2017/000000000139.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# ------------------------------------------------------------------------
# Stage 1. Detect humans on the image
# ------------------------------------------------------------------------
# You can choose any detector of your choice
person_image_processor = AutoProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365", device_map=device)
inputs = person_image_processor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = person_model(**inputs)
results = person_image_processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3
)
result = results[0]  # take first image results
# The "person" label corresponds to index 0 in the COCO dataset
person_boxes = result["boxes"][result["labels"] == 0]
person_boxes = person_boxes.cpu().numpy()
# Convert boxes from VOC (x1, y1, x2, y2) to COCO (x1, y1, w, h) format
person_boxes[:, 2] = person_boxes[:, 2] - person_boxes[:, 0]
person_boxes[:, 3] = person_boxes[:, 3] - person_boxes[:, 1]
# ------------------------------------------------------------------------
# Stage 2. Detect keypoints for each person found
# ------------------------------------------------------------------------
image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-base-simple")
model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-base-simple", device_map=device)
inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)
pose_results = image_processor.post_process_pose_estimation(outputs, boxes=[person_boxes])
image_pose_result = pose_results[0] # results for first image
```
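The post-processed output is a list with one entry per input image; each entry is in turn a list of dictionaries, one per detected person, whose `"keypoints"` and `"scores"` fields are the ones used by the visualization code further below. A minimal sketch to inspect it (the 17-keypoint count is an assumption based on the COCO body skeleton used by this checkpoint):

```py
# Minimal sketch: inspect the pose results (one dict per detected person).
for i, person in enumerate(image_pose_result):
    keypoints = person["keypoints"]  # (num_keypoints, 2) tensor of (x, y) image coordinates
    scores = person["scores"]        # (num_keypoints,) tensor of per-keypoint confidences
    print(f"person {i}: {keypoints.shape[0]} keypoints, mean confidence {scores.mean():.2f}")
```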
## ViTPose+ models
The best [checkpoints](https://huggingface.co/collections/usyd-community/vitpose-677fcfd0a0b2b5c8f79c4335) are those of the [ViTPose++ paper](https://arxiv.org/abs/2212.04246). ViTPose++ models employ a so-called [Mixture-of-Experts (MoE)](https://huggingface.co/blog/moe) architecture for the ViT backbone, resulting in better performance.
The ViTPose+ checkpoints use 6 experts, hence 6 different dataset indices can be passed.
An overview of the various dataset indices is provided below:
- 0: [COCO validation 2017](https://cocodataset.org/#overview) dataset, using an object detector that gets 56 AP on the "person" class
- 1: [AiC](https://github.com/fabbrimatteo/AiC-Dataset) dataset
- 2: [MPII](https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/software-and-datasets/mpii-human-pose-dataset) dataset
- 3: [AP-10K](https://github.com/AlexTheBad/AP-10K) dataset
- 4: [APT-36K](https://github.com/pandorgan/APT-36K) dataset
- 5: [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody) dataset
Pass the `dataset_index` argument in the forward of the model to indicate which experts to use for each example in the batch. Example usage is shown below:
```python
image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-plus-base")
model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-plus-base", device_map=device)
inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)
dataset_index = torch.tensor([0], device=device) # must be a tensor of shape (batch_size,)
with torch.no_grad():
    outputs = model(**inputs, dataset_index=dataset_index)
```
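Note that `dataset_index` needs one entry per crop in the batch. Assuming the processor creates one crop per bounding box (consistent with the per-person results in the usage example above), a batch with several detected people can be handled with a minimal sketch like this:

```python
# Minimal sketch: one expert index per person crop in the batch.
# Index 0 selects the COCO experts; see the dataset index list above.
num_crops = len(person_boxes)
dataset_index = torch.full((num_crops,), 0, dtype=torch.int64, device=device)

with torch.no_grad():
    outputs = model(**inputs, dataset_index=dataset_index)
```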
## Visualization
To visualize the various keypoints, one can leverage the [`supervision` library](https://github.com/roboflow/supervision) (requires `pip install supervision`):
```python
import supervision as sv
xy = torch.stack([pose_result['keypoints'] for pose_result in image_pose_result]).cpu().numpy()
scores = torch.stack([pose_result['scores'] for pose_result in image_pose_result]).cpu().numpy()
key_points = sv.KeyPoints(
    xy=xy, confidence=scores
)
edge_annotator = sv.EdgeAnnotator(
    color=sv.Color.GREEN,
    thickness=1
)
vertex_annotator = sv.VertexAnnotator(
    color=sv.Color.RED,
    radius=2
)
annotated_frame = edge_annotator.annotate(
    scene=image.copy(),
    key_points=key_points
)
annotated_frame = vertex_annotator.annotate(
    scene=annotated_frame,
    key_points=key_points
)
```
Alternatively, one can also visualize the keypoints using [OpenCV](https://opencv.org/) (requires `pip install opencv-python`):
```python
import math
import cv2
def draw_points(image, keypoints, scores, pose_keypoint_color, keypoint_score_threshold, radius, show_keypoint_weight):
    if pose_keypoint_color is not None:
        assert len(pose_keypoint_color) == len(keypoints)
    for kid, (kpt, kpt_score) in enumerate(zip(keypoints, scores)):
        x_coord, y_coord = int(kpt[0]), int(kpt[1])
        if kpt_score > keypoint_score_threshold:
            color = tuple(int(c) for c in pose_keypoint_color[kid])
            if show_keypoint_weight:
                cv2.circle(image, (int(x_coord), int(y_coord)), radius, color, -1)
                transparency = max(0, min(1, kpt_score))
                cv2.addWeighted(image, transparency, image, 1 - transparency, 0, dst=image)
            else:
                cv2.circle(image, (int(x_coord), int(y_coord)), radius, color, -1)

def draw_links(image, keypoints, scores, keypoint_edges, link_colors, keypoint_score_threshold, thickness, show_keypoint_weight, stick_width=2):
    height, width, _ = image.shape
    if keypoint_edges is not None and link_colors is not None:
        assert len(link_colors) == len(keypoint_edges)
        for sk_id, sk in enumerate(keypoint_edges):
            x1, y1, score1 = (int(keypoints[sk[0], 0]), int(keypoints[sk[0], 1]), scores[sk[0]])
            x2, y2, score2 = (int(keypoints[sk[1], 0]), int(keypoints[sk[1], 1]), scores[sk[1]])
            if (
                x1 > 0
                and x1 < width
                and y1 > 0
                and y1 < height
                and x2 > 0
                and x2 < width
                and y2 > 0
                and y2 < height
                and score1 > keypoint_score_threshold
                and score2 > keypoint_score_threshold
            ):
                color = tuple(int(c) for c in link_colors[sk_id])
                if show_keypoint_weight:
                    X = (x1, x2)
                    Y = (y1, y2)
                    mean_x = np.mean(X)
                    mean_y = np.mean(Y)
                    length = ((Y[0] - Y[1]) ** 2 + (X[0] - X[1]) ** 2) ** 0.5
                    angle = math.degrees(math.atan2(Y[0] - Y[1], X[0] - X[1]))
                    polygon = cv2.ellipse2Poly(
                        (int(mean_x), int(mean_y)), (int(length / 2), int(stick_width)), int(angle), 0, 360, 1
                    )
                    cv2.fillConvexPoly(image, polygon, color)
                    # weight the link transparency by the two keypoint scores
                    transparency = max(0, min(1, 0.5 * (scores[sk[0]] + scores[sk[1]])))
                    cv2.addWeighted(image, transparency, image, 1 - transparency, 0, dst=image)
                else:
                    cv2.line(image, (x1, y1), (x2, y2), color, thickness=thickness)
# Note: keypoint_edges and color palette are dataset-specific
keypoint_edges = model.config.edges
palette = np.array(
    [
        [255, 128, 0],
        [255, 153, 51],
        [255, 178, 102],
        [230, 230, 0],
        [255, 153, 255],
        [153, 204, 255],
        [255, 102, 255],
        [255, 51, 255],
        [102, 178, 255],
        [51, 153, 255],
        [255, 153, 153],
        [255, 102, 102],
        [255, 51, 51],
        [153, 255, 153],
        [102, 255, 102],
        [51, 255, 51],
        [0, 255, 0],
        [0, 0, 255],
        [255, 0, 0],
        [255, 255, 255],
    ]
)
link_colors = palette[[0, 0, 0, 0, 7, 7, 7, 9, 9, 9, 9, 9, 16, 16, 16, 16, 16, 16, 16]]
keypoint_colors = palette[[16, 16, 16, 16, 16, 9, 9, 9, 9, 9, 9, 0, 0, 0, 0, 0, 0]]
numpy_image = np.array(image)
for pose_result in image_pose_result:
    scores = np.array(pose_result["scores"])
    keypoints = np.array(pose_result["keypoints"])
    # draw each point on image
    draw_points(numpy_image, keypoints, scores, keypoint_colors, keypoint_score_threshold=0.3, radius=4, show_keypoint_weight=False)
    # draw links
    draw_links(numpy_image, keypoints, scores, keypoint_edges, link_colors, keypoint_score_threshold=0.3, thickness=1, show_keypoint_weight=False)

pose_image = Image.fromarray(numpy_image)
pose_image
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitpose-coco.jpg" alt="drawing" width="600"/>
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTPose. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- A demo of ViTPose on images and video can be found [here](https://huggingface.co/spaces/hysts/ViTPose-transformers).
- A notebook illustrating inference and visualization can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViTPose/Inference_with_ViTPose_for_human_pose_estimation.ipynb).
## VitPoseImageProcessor
Constructs a VitPose image processor.
Args:
do_affine_transform (`bool`, *optional*, defaults to `True`):
Whether to apply an affine transformation to the input images.
size (`Dict[str, int]`, *optional*, defaults to `{"height": 256, "width": 192}`):
Resolution of the image after `affine_transform` is applied. Only has an effect if `do_affine_transform` is set to `True`. Can
be overridden by `size` in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.).
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the input with mean and standard deviation.
image_mean (`List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`):
The sequence of means for each channel, to be used when normalizing images.
image_std (`List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`):
The sequence of standard deviations for each channel, to be used when normalizing images.
- preprocess
- post_process_pose_estimation
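As a quick illustration of the parameters above, the sketch below instantiates the image processor directly and overrides `size` for a single call. It reuses `image` and `person_boxes` from the usage example; the printed shape is an assumption following from the requested resolution.

```python
from transformers import VitPoseImageProcessor

image_processor = VitPoseImageProcessor.from_pretrained("usyd-community/vitpose-base-simple")
# Override the default resolution for this call only (see the `size` argument above).
inputs = image_processor(
    image,
    boxes=[person_boxes],
    size={"height": 256, "width": 192},
    return_tensors="pt",
)
print(inputs["pixel_values"].shape)  # expected: (num_person_boxes, 3, 256, 192)
```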
## VitPoseConfig
This is the configuration class to store the configuration of a [`VitPoseForPoseEstimation`]. It is used to instantiate a
VitPose model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the VitPose
[usyd-community/vitpose-base-simple](https://huggingface.co/usyd-community/vitpose-base-simple) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `VitPoseBackboneConfig()`):
The configuration of the backbone model. Currently, only `backbone_config` with `vitpose_backbone` as `model_type` is supported.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_factor (`int`, *optional*, defaults to 4):
Factor to upscale the feature maps coming from the ViT backbone.
use_simple_decoder (`bool`, *optional*, defaults to `True`):
Whether to use a `VitPoseSimpleDecoder` to decode the feature maps from the backbone into heatmaps. Otherwise it uses `VitPoseClassicDecoder`.
Example:
```python
>>> from transformers import VitPoseConfig, VitPoseForPoseEstimation
>>> # Initializing a VitPose configuration
>>> configuration = VitPoseConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = VitPoseForPoseEstimation(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
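The `use_simple_decoder` flag documented above selects between the two decoder heads; a minimal sketch of configuring the classic decoder instead:

```python
>>> # Initializing a configuration that uses VitPoseClassicDecoder instead of the simple decoder
>>> configuration = VitPoseConfig(use_simple_decoder=False)
>>> model = VitPoseForPoseEstimation(configuration)
```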
## VitPoseForPoseEstimation
The VitPose model with a pose estimation head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
Parameters:
config ([`VitPoseConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
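The forward pass returns the raw per-keypoint heatmaps that `VitPoseImageProcessor.post_process_pose_estimation` turns into coordinates. A minimal sketch, reusing `model` and `inputs` from the usage tips above (the `heatmaps` field name and the printed shape are assumptions about the output class):

```python
with torch.no_grad():
    outputs = model(**inputs)

# One heatmap per keypoint and per person crop; the post-processing step
# converts argmax locations back to coordinates in the original image.
print(outputs.heatmaps.shape)  # expected: (num_person_boxes, num_keypoints, heatmap_height, heatmap_width)
```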
# BertJapanese

## Overview
The BERT models trained on Japanese text.
There are models with two different tokenization methods:
- Tokenize with MeCab and WordPiece. This requires some extra dependencies: [fugashi](https://github.com/polm/fugashi), which is a wrapper around [MeCab](https://taku910.github.io/mecab/).
- Tokenize into characters.
To use *MecabTokenizer*, you should `pip install transformers["ja"]` (or `pip install -e .["ja"]` if you install
from source) to install dependencies.
See [details on cl-tohoku repository](https://github.com/cl-tohoku/bert-japanese).
Example of using a model with MeCab and WordPiece tokenization:
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> ## Input Japanese Text
>>> line = "吾輩は猫である。"
>>> inputs = tokenizer(line, return_tensors="pt")
>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾輩 は 猫 で ある 。 [SEP]
>>> outputs = bertjapanese(**inputs)
```
Example of using a model with Character tokenization:
```python
>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> ## Input Japanese Text
>>> line = "吾輩は猫である。"
>>> inputs = tokenizer(line, return_tensors="pt")
>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾 輩 は 猫 で あ る 。 [SEP]
>>> outputs = bertjapanese(**inputs)
```
This model was contributed by [cl-tohoku](https://huggingface.co/cl-tohoku).
<Tip>
This implementation is the same as BERT, except for the tokenization method. Refer to [BERT documentation](bert) for
API reference information.
</Tip>
## BertJapaneseTokenizer
Construct a BERT tokenizer for Japanese text.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer
to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to a one-wordpiece-per-line vocabulary file.
spm_file (`str`, *optional*):
Path to [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm or .model
extension) that contains the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether to lower case the input. Only has an effect when do_basic_tokenize=True.
do_word_tokenize (`bool`, *optional*, defaults to `True`):
Whether to do word tokenization.
do_subword_tokenize (`bool`, *optional*, defaults to `True`):
Whether to do subword tokenization.
word_tokenizer_type (`str`, *optional*, defaults to `"basic"`):
Type of word tokenizer. Choose from ["basic", "mecab", "sudachi", "jumanpp"].
subword_tokenizer_type (`str`, *optional*, defaults to `"wordpiece"`):
Type of subword tokenizer. Choose from ["wordpiece", "character", "sentencepiece"].
mecab_kwargs (`dict`, *optional*):
Dictionary passed to the `MecabTokenizer` constructor.
sudachi_kwargs (`dict`, *optional*):
Dictionary passed to the `SudachiTokenizer` constructor.
jumanpp_kwargs (`dict`, *optional*):
Dictionary passed to the `JumanppTokenizer` constructor.
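The `word_tokenizer_type` and `subword_tokenizer_type` arguments above can also be set explicitly when loading a checkpoint; a minimal sketch (the values shown are just two of the documented options):

```python
from transformers import BertJapaneseTokenizer

# Minimal sketch: pick the word/subword tokenizers explicitly.
tokenizer = BertJapaneseTokenizer.from_pretrained(
    "cl-tohoku/bert-base-japanese",
    word_tokenizer_type="mecab",         # "basic", "mecab", "sudachi" or "jumanpp"
    subword_tokenizer_type="wordpiece",  # "wordpiece", "character" or "sentencepiece"
)
print(tokenizer.tokenize("吾輩は猫である。"))
```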
# ALIGN

## Overview
The ALIGN model was proposed in [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with [EfficientNet](efficientnet) as its vision encoder
and [BERT](bert) as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN leverages a massive noisy dataset and shows that the scale of the corpus can be used to achieve SOTA representations with a simple recipe.
The abstract from the paper is the following:
*Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like
Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions
dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to
classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image
queries.*
This model was contributed by [Alara Dirik](https://huggingface.co/adirik).
The original code is not released; this implementation is based on the Kakao Brain implementation of the original paper.
## Usage example
ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.
[`AlignProcessor`] wraps [`EfficientNetImageProcessor`] and [`BertTokenizer`] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [`AlignProcessor`] and [`AlignModel`].
```python
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(images=image, text=candidate_labels, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# this is the image-text similarity score
logits_per_image = outputs.logits_per_image
# we can take the softmax to get the label probabilities
probs = logits_per_image.softmax(dim=1)
print(probs)
```
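For retrieval-style use, the projected embeddings can also be pulled out directly with the `get_text_features` and `get_image_features` methods listed under [`AlignModel`] below; a minimal sketch reusing `inputs` from the example above:

```python
# Minimal sketch: compute the text and image embeddings separately.
with torch.no_grad():
    text_embeds = model.get_text_features(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        token_type_ids=inputs["token_type_ids"],
    )
    image_embeds = model.get_image_features(pixel_values=inputs["pixel_values"])

# Cosine similarity between the L2-normalized embeddings can be used as a retrieval score.
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
print(image_embeds @ text_embeds.T)
```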
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN.
- A blog post on [ALIGN and the COYO-700M dataset](https://huggingface.co/blog/vit-align).
- A zero-shot image classification [demo](https://huggingface.co/spaces/adirik/ALIGN-zero-shot-image-classification).
- [Model card](https://huggingface.co/kakaobrain/align-base) of `kakaobrain/align-base` model.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
## AlignConfig
[`AlignConfig`] is the configuration class to store the configuration of an [`AlignModel`]. It is used to
instantiate an ALIGN model according to the specified arguments, defining the text model and vision model configs.
Instantiating a configuration with the defaults will yield a similar configuration to that of the ALIGN
[kakaobrain/align-base](https://huggingface.co/kakaobrain/align-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`AlignTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`AlignVisionConfig`].
projection_dim (`int`, *optional*, defaults to 640):
Dimensionality of text and vision projection layers.
temperature_init_value (`float`, *optional*, defaults to 1.0):
The initial value of the *temperature* parameter. Default is used as per the original ALIGN implementation.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import AlignConfig, AlignModel
>>> # Initializing an AlignConfig with kakaobrain/align-base style configuration
>>> configuration = AlignConfig()
>>> # Initializing an AlignModel (with random weights) from the kakaobrain/align-base style configuration
>>> model = AlignModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a AlignConfig from a AlignTextConfig and a AlignVisionConfig
>>> from transformers import AlignTextConfig, AlignVisionConfig
>>> # Initializing ALIGN Text and Vision configurations
>>> config_text = AlignTextConfig()
>>> config_vision = AlignVisionConfig()
>>> config = AlignConfig.from_text_vision_configs(config_text, config_vision)
```
Methods: from_text_vision_configs
## AlignTextConfig
This is the configuration class to store the configuration of an [`AlignTextModel`]. It is used to instantiate an
ALIGN text encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the text encoder of the ALIGN
[kakaobrain/align-base](https://huggingface.co/kakaobrain/align-base) architecture. The default values here are
copied from BERT.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the Align Text model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`AlignTextModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`AlignTextModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 0):
Padding token id.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
Example:
```python
>>> from transformers import AlignTextConfig, AlignTextModel
>>> # Initializing an AlignTextConfig with kakaobrain/align-base style configuration
>>> configuration = AlignTextConfig()
>>> # Initializing an AlignTextModel (with random weights) from the kakaobrain/align-base style configuration
>>> model = AlignTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
## AlignVisionConfig
This is the configuration class to store the configuration of an [`AlignVisionModel`]. It is used to instantiate an
ALIGN vision encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the vision encoder of the ALIGN
[kakaobrain/align-base](https://huggingface.co/kakaobrain/align-base) architecture. The default values are copied
from EfficientNet (efficientnet-b7).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 600):
The input image size.
width_coefficient (`float`, *optional*, defaults to 2.0):
Scaling coefficient for network width at each stage.
depth_coefficient (`float`, *optional*, defaults to 3.1):
Scaling coefficient for network depth at each stage.
depth_divisor (`int`, *optional*, defaults to 8):
A unit of network width.
kernel_sizes (`List[int]`, *optional*, defaults to `[3, 3, 5, 3, 5, 5, 3]`):
List of kernel sizes to be used in each block.
in_channels (`List[int]`, *optional*, defaults to `[32, 16, 24, 40, 80, 112, 192]`):
List of input channel sizes to be used in each block for convolutional layers.
out_channels (`List[int]`, *optional*, defaults to `[16, 24, 40, 80, 112, 192, 320]`):
List of output channel sizes to be used in each block for convolutional layers.
depthwise_padding (`List[int]`, *optional*, defaults to `[]`):
List of block indices with square padding.
strides (`List[int]`, *optional*, defaults to `[1, 2, 2, 2, 1, 2, 1]`):
List of stride sizes to be used in each block for convolutional layers.
num_block_repeats (`List[int]`, *optional*, defaults to `[1, 2, 2, 3, 3, 4, 1]`):
List of the number of times each block is to be repeated.
expand_ratios (`List[int]`, *optional*, defaults to `[1, 6, 6, 6, 6, 6, 6]`):
List of scaling coefficients for each block.
squeeze_expansion_ratio (`float`, *optional*, defaults to 0.25):
Squeeze expansion ratio.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in each block. If string, `"gelu"`, `"relu"`,
`"selu", `"gelu_new"`, `"silu"` and `"mish"` are supported.
hidden_dim (`int`, *optional*, defaults to 1280):
The hidden dimension of the layer before the classification head.
pooling_type (`str` or `function`, *optional*, defaults to `"mean"`):
Type of final pooling to be applied before the dense classification head. Available options are [`"mean"`,
`"max"`]
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
batch_norm_eps (`float`, *optional*, defaults to 1e-3):
The epsilon used by the batch normalization layers.
batch_norm_momentum (`float`, *optional*, defaults to 0.99):
The momentum used by the batch normalization layers.
drop_connect_rate (`float`, *optional*, defaults to 0.2):
The drop rate for skip connections.
Example:
```python
>>> from transformers import AlignVisionConfig, AlignVisionModel
>>> # Initializing an AlignVisionConfig with kakaobrain/align-base style configuration
>>> configuration = AlignVisionConfig()
>>> # Initializing an AlignVisionModel (with random weights) from the kakaobrain/align-base style configuration
>>> model = AlignVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
## AlignProcessor
Constructs an ALIGN processor which wraps [`EfficientNetImageProcessor`] and
[`BertTokenizer`]/[`BertTokenizerFast`] into a single processor that inherits both the image processor and
tokenizer functionalities. See the [`~AlignProcessor.__call__`] and [`~AlignProcessor.decode`] for more
information.
The preferred way of passing kwargs is as a dictionary per modality, see usage example below.
```python
from transformers import AlignProcessor
from PIL import Image
model_id = "kakaobrain/align-base"
processor = AlignProcessor.from_pretrained(model_id)
processor(
    images=your_pil_image,
    text=["What is that?"],
    images_kwargs={"crop_size": {"height": 224, "width": 224}},
    text_kwargs={"padding": "do_not_pad"},
    common_kwargs={"return_tensors": "pt"},
)
```
Args:
image_processor ([`EfficientNetImageProcessor`]):
The image processor is a required input.
tokenizer ([`BertTokenizer`, `BertTokenizerFast`]):
The tokenizer is a required input.
## AlignModel
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`AlignConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_image_features