source | url | file_type | chunk | chunk_id
---|---|---|---|---
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
image_size (`int`, *optional*, defaults to 640):
The size (resolution) of each image.
disable_custom_kernels (`bool`, *optional*, defaults to `False`):
Whether to disable custom kernels.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon value for layer normalization.
batch_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon value for batch normalization.
init_std (`float`, *optional*, defaults to 0.02):
|
317_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
The epsilon value for batch normalization.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
text_projection_in_dim (`int`, *optional*, defaults to 512):
The input dimension for the text projection.
text_projection_out_dim (`int`, *optional*, defaults to 512):
The output dimension for the text projection.
task_encoder_hidden_dim (`int`, *optional*, defaults to 1024):
|
317_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
The output dimension for the text projection.
task_encoder_hidden_dim (`int`, *optional*, defaults to 1024):
The feedforward dimension for the task encoder.
class_embed_dim (`int`, *optional*, defaults to 512):
The dimension of the class embeddings.
class_distance_type (`str`, *optional*, defaults to `"cosine"`):
The type of distance used to compare predicted classes to the projected class embeddings.
Can be `"cosine"` or `"dot"`.
num_queries (`int`, *optional*, defaults to 900):
The number of queries.
|
317_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
Can be `"cosine"` or `"dot"`.
num_queries (`int`, *optional*, defaults to 900):
The number of queries.
csp_activation (`str`, *optional*, defaults to `"silu"`):
The activation function of the Cross Stage Partial (CSP) networks of the encoder.
conv_norm_activation (`str`, *optional*, defaults to `"gelu"`):
The activation function of the ConvNormLayer layers of the encoder.
encoder_feedforward_activation (`str`, *optional*, defaults to `"relu"`):
|
317_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
encoder_feedforward_activation (`str`, *optional*, defaults to `"relu"`):
The activation function for the feedforward network of the encoder.
encoder_feedforward_dropout (`float`, *optional*, defaults to 0.0):
The dropout rate following the activation of the encoder feedforward network.
encoder_dropout (`float`, *optional*, defaults to 0.0):
The dropout rate of the encoder multi-head attention module.
hidden_expansion (`int`, *optional*, defaults to 1):
|
317_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
The dropout rate of the encoder multi-head attention module.
hidden_expansion (`int`, *optional*, defaults to 1):
The hidden expansion of the CSP networks in the encoder.
vision_features_channels (`tuple(int)`, *optional*, defaults to `[256, 256, 256]`):
The projected vision features channels used as inputs for the decoder.
encoder_hidden_dim (`int`, *optional*, defaults to 256):
The hidden dimension of the encoder.
encoder_in_channels (`List(int)`, *optional*, defaults to `[192, 384, 768]`):
|
317_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
The hidden dimension of the encoder.
encoder_in_channels (`List(int)`, *optional*, defaults to `[192, 384, 768]`):
The input channels for the encoder.
encoder_projection_indices (`List(int)`, *optional*, defaults to `[2]`):
The indices of the input features projected by each layer.
encoder_attention_heads (`int`, *optional*, defaults to 8):
The number of attention heads for the encoder.
encoder_dim_feedforward (`int`, *optional*, defaults to 2048):
The feedforward dimension for the encoder.
|
317_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
encoder_dim_feedforward (`int`, *optional*, defaults to 2048):
The feedforward dimension for the encoder.
encoder_layers (`int`, *optional*, defaults to 1):
The number of layers in the encoder.
positional_encoding_temperature (`int`, *optional*, defaults to 10000):
The positional encoding temperature in the encoder.
num_feature_levels (`int`, *optional*, defaults to 3):
The number of feature levels for the multi-scale deformable attention module of the decoder.
|
317_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
The number of feature levels for the multi-scale deformable attention module of the decoder.
decoder_hidden_dim (`int`, *optional*, defaults to 256):
The hidden dimension of the decoder.
decoder_num_heads (`int`, *optional*, defaults to 8):
The number of heads for the decoder.
decoder_num_layers (`int`, *optional*, defaults to 6):
The number of layers for the decoder.
decoder_activation (`str`, *optional*, defaults to `"relu"`):
The activation function for the decoder.
|
317_5_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
decoder_activation (`str`, *optional*, defaults to `"relu"`):
The activation function for the decoder.
decoder_dim_feedforward (`int`, *optional*, defaults to 2048):
The feedforward dimension for the decoder.
decoder_num_points (`int`, *optional*, defaults to 4):
The number of points sampled in the decoder multi-scale deformable attention module.
decoder_dropout (`float`, *optional*, defaults to 0.0):
The dropout rate for the decoder.
eval_size (`Tuple[int, int]`, *optional*):
|
317_5_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
The dropout rate for the decoder.
eval_size (`Tuple[int, int]`, *optional*):
Height and width used to compute the effective height and width of the position embeddings after taking
into account the stride (see RTDetr).
learn_initial_query (`bool`, *optional*, defaults to `False`):
Whether to learn the initial query.
cache_size (`int`, *optional*, defaults to 100):
The cache size for the classes and prompts caches.
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
|
317_5_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
The cache size for the classes and prompts caches.
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
Whether the model is used as an encoder-decoder model or not.
kwargs (`Dict[str, Any]`, *optional*):
Additional parameters from the architecture. The values in kwargs will be saved as part of the configuration
and can be used to control the model outputs.
Examples:
```python
>>> from transformers import OmDetTurboConfig, OmDetTurboForObjectDetection
|
317_5_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
|
.md
|
>>> # Initializing an OmDet-Turbo omlab/omdet-turbo-swin-tiny-hf style configuration
>>> configuration = OmDetTurboConfig()
>>> # Initializing a model (with random weights) from the omlab/omdet-turbo-swin-tiny-hf style configuration
>>> model = OmDetTurboForObjectDetection(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
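Since every argument listed above maps directly to a keyword of `OmDetTurboConfig`, a custom configuration can be built by overriding only the fields of interest. The following is a minimal sketch with purely illustrative values, not a recommended recipe:
```python
from transformers import OmDetTurboConfig, OmDetTurboForObjectDetection

# Override a few of the documented defaults (illustrative values only)
custom_config = OmDetTurboConfig(
    num_queries=300,            # fewer object queries than the default 900
    decoder_num_layers=4,       # shallower decoder than the default 6
    class_distance_type="dot",  # dot-product instead of cosine similarity
)

# A model built from this config has random weights and the customized architecture
model = OmDetTurboForObjectDetection(custom_config)
```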
|
317_5_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboprocessor
|
.md
|
Constructs an OmDet-Turbo processor which wraps a Deformable DETR image processor and an AutoTokenizer into a
single processor.
[`OmDetTurboProcessor`] offers all the functionalities of [`DetrImageProcessor`] and
[`AutoTokenizer`]. See the docstring of [`~OmDetTurboProcessor.__call__`] and [`~OmDetTurboProcessor.decode`]
for more information.
Args:
image_processor (`DetrImageProcessor`):
An instance of [`DetrImageProcessor`]. The image processor is a required input.
tokenizer (`AutoTokenizer`):
|
317_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboprocessor
|
.md
|
An instance of [`DetrImageProcessor`]. The image processor is a required input.
tokenizer (`AutoTokenizer`):
An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input.
Methods: post_process_grounded_object_detection
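As a rough usage sketch (the checkpoint name is the one used in the configuration example above; the exact preprocessing keywords are an assumption and may differ between releases), the processor prepares the image and the text classes in one call, and `post_process_grounded_object_detection` can then turn the raw outputs into boxes and scores:
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, OmDetTurboForObjectDetection

# Load the processor and model from the same checkpoint
processor = AutoProcessor.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")
model = OmDetTurboForObjectDetection.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
classes = ["cat", "remote"]

# The processor tokenizes the class prompts and preprocesses the image together
inputs = processor(image, text=classes, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
# `processor.post_process_grounded_object_detection` converts `outputs` into
# per-image boxes, scores and labels; see its docstring for the exact arguments.
```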
|
317_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboforobjectdetection
|
.md
|
OmDetTurbo Model (consisting of a vision and a text backbone, and an encoder-decoder architecture) outputting
bounding boxes and class scores for tasks such as COCO detection.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
317_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboforobjectdetection
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`OmDetTurboConfig`]):
|
317_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
|
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboforobjectdetection
|
.md
|
and behavior.
Parameters:
config ([`OmDetTurboConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
317_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
318_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
318_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
|
.md
|
The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text)
|
318_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
|
.md
|
Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text.
|
318_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
|
.md
|
The abstract from the paper is the following:
|
318_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
|
.md
|
*Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive
|
318_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
|
.md
|
to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot
|
318_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
|
.md
|
adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.*
|
318_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/owlvit_architecture.jpg"
alt="drawing" width="600"/>
<small> OWL-ViT architecture. Taken from the <a href="https://arxiv.org/abs/2205.06230">original paper</a>. </small>
This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit).
|
318_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses [CLIP](clip) as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the
|
318_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
|
318_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
[`OwlViTImageProcessor`] can be used to resize (or rescale) and normalize images for the model and [`CLIPTokenizer`] is used to encode the text. [`OwlViTProcessor`] wraps [`OwlViTImageProcessor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [`OwlViTProcessor`] and [`OwlViTForObjectDetection`].
```python
>>> import requests
>>> from PIL import Image
>>> import torch
|
318_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
>>> from transformers import OwlViTProcessor, OwlViTForObjectDetection
>>> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
>>> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
|
318_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> text_labels = [["a photo of a cat", "a photo of a dog"]]
>>> inputs = processor(text=text_labels, images=image, return_tensors="pt")
>>> outputs = model(**inputs)
|
318_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
>>> # Target image sizes (height, width) to rescale box predictions [batch_size, 2]
>>> target_sizes = torch.tensor([(image.height, image.width)])
>>> # Convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax)
>>> results = processor.post_process_grounded_object_detection(
... outputs=outputs, target_sizes=target_sizes, threshold=0.1, text_labels=text_labels
... )
>>> # Retrieve predictions for the first image for the corresponding text queries
|
318_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
... )
>>> # Retrieve predictions for the first image for the corresponding text queries
>>> result = results[0]
>>> boxes, scores, text_labels = result["boxes"], result["scores"], result["text_labels"]
>>> for box, score, text_label in zip(boxes, scores, text_labels):
... box = [round(i, 2) for i in box.tolist()]
... print(f"Detected {text_label} with confidence {round(score.item(), 3)} at location {box}")
Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29]
|
318_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
|
.md
|
Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29]
Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17]
```
|
318_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#resources
|
.md
|
A demo notebook on using OWL-ViT for zero- and one-shot (image-guided) object detection can be found [here](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb).
|
318_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitconfig
|
.md
|
[`OwlViTConfig`] is the configuration class to store the configuration of an [`OwlViTModel`]. It is used to
instantiate an OWL-ViT model according to the specified arguments, defining the text model and vision model
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the OWL-ViT
[google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture.
|
318_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitconfig
|
.md
|
[google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`OwlViTTextConfig`].
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`OwlViTVisionConfig`].
|
318_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitconfig
|
.md
|
vision_config (`dict`, *optional*):
Dictionary of configuration options used to initialize [`OwlViTVisionConfig`].
projection_dim (`int`, *optional*, defaults to 512):
Dimensionality of text and vision projection layers.
logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
The initial value of the *logit_scale* parameter. Default is used as per the original OWL-ViT
implementation.
return_dict (`bool`, *optional*, defaults to `True`):
|
318_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitconfig
|
.md
|
implementation.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not the model should return a dictionary. If `False`, returns a tuple.
kwargs (*optional*):
Dictionary of keyword arguments.
Methods: from_text_vision_configs
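A minimal sketch of `from_text_vision_configs`, which builds a full [`OwlViTConfig`] from separately customized text and vision configurations (the overridden values are illustrative and correspond to the documented defaults):
```python
from transformers import OwlViTConfig, OwlViTTextConfig, OwlViTVisionConfig

# Customize the sub-configs independently (illustrative overrides)
text_config = OwlViTTextConfig(max_position_embeddings=16)
vision_config = OwlViTVisionConfig(image_size=768, patch_size=32)

# Combine them into a single OWL-ViT configuration
config = OwlViTConfig.from_text_vision_configs(text_config, vision_config)
print(config.projection_dim)  # 512 by default
```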
|
318_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
This is the configuration class to store the configuration of an [`OwlViTTextModel`]. It is used to instantiate an
OwlViT text encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the OwlViT
[google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture.
|
318_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
[google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 49408):
Vocabulary size of the OWL-ViT text model. Defines the number of different tokens that can be represented
by the `input_ids` passed when calling [`OwlViTTextModel`].
|
318_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
by the `input_ids` passed when calling [`OwlViTTextModel`].
hidden_size (`int`, *optional*, defaults to 512):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 2048):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 8):
|
318_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (`int`, *optional*, defaults to 16):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
|
318_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
318_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
pad_token_id (`int`, *optional*, defaults to 0):
|
318_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
testing).
pad_token_id (`int`, *optional*, defaults to 0):
The id of the padding token in the input sequences.
bos_token_id (`int`, *optional*, defaults to 49406):
The id of the beginning-of-sequence token in the input sequences.
eos_token_id (`int`, *optional*, defaults to 49407):
The id of the end-of-sequence token in the input sequences.
Example:
```python
>>> from transformers import OwlViTTextConfig, OwlViTTextModel
|
318_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
|
.md
|
>>> # Initializing an OwlViTTextModel with google/owlvit-base-patch32 style configuration
>>> configuration = OwlViTTextConfig()
>>> # Initializing an OwlViTTextModel (with random weights) from the google/owlvit-base-patch32 style configuration
>>> model = OwlViTTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
318_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
This is the configuration class to store the configuration of an [`OwlViTVisionModel`]. It is used to instantiate
an OWL-ViT image encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the OWL-ViT
[google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture.
|
318_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
[google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
|
318_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
|
318_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
Number of channels in the input images.
image_size (`int`, *optional*, defaults to 768):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 32):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
|
318_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
318_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
Example:
```python
|
318_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
testing).
Example:
```python
>>> from transformers import OwlViTVisionConfig, OwlViTVisionModel
|
318_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
|
.md
|
>>> # Initializing an OwlViTVisionModel with google/owlvit-base-patch32 style configuration
>>> configuration = OwlViTVisionConfig()
>>> # Initializing an OwlViTVisionModel (with random weights) from the google/owlvit-base-patch32 style configuration
>>> model = OwlViTVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
318_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitimageprocessor
|
.md
|
Constructs an OWL-ViT image processor.
This image processor inherits from [`ImageProcessingMixin`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the shorter edge of the input to a certain `size`.
size (`Dict[str, int]`, *optional*, defaults to {"height": 768, "width": 768}):
|
318_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitimageprocessor
|
.md
|
size (`Dict[str, int]`, *optional*, defaults to {"height": 768, "width": 768}):
The size to use for resizing the image. Only has an effect if `do_resize` is set to `True`. If `size` is a
sequence like (h, w), the output size will be matched to this. If `size` is an int, the image will be resized
to (size, size).
resample (`int`, *optional*, defaults to `Resampling.BICUBIC`):
An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`,
|
318_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitimageprocessor
|
.md
|
An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`,
`PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`,
`PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set
to `True`.
do_center_crop (`bool`, *optional*, defaults to `False`):
Whether to crop the input at the center. If the input size is smaller than `crop_size` along any edge, the
image is padded with 0's and then center cropped.
|
318_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitimageprocessor
|
.md
|
image is padded with 0's and then center cropped.
crop_size (`Dict[str, int]`, *optional*, defaults to {"height": 768, "width": 768}):
The size to use for center cropping the image. Only has an effect if `do_center_crop` is set to `True`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the input by a certain factor.
rescale_factor (`float`, *optional*, defaults to `1/255`):
The factor to use for rescaling the image. Only has an effect if `do_rescale` is set to `True`.
|
318_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitimageprocessor
|
.md
|
The factor to use for rescaling the image. Only has an effect if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the input with `image_mean` and `image_std`.
image_mean (`List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
The sequence of means for each channel, to be used when normalizing images.
|
318_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitimageprocessor
|
.md
|
The sequence of means for each channel, to be used when normalizing images.
image_std (`List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
The sequence of standard deviations for each channel, to be used when normalizing images.
Methods: preprocess
- post_process_object_detection
- post_process_image_guided_detection
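As a small sketch, the image processor can be instantiated and applied to a PIL image to obtain `pixel_values`; the documented defaults are spelled out explicitly here only for clarity, so omitting them gives the same result:
```python
import requests
from PIL import Image
from transformers import OwlViTImageProcessor

# Explicitly pass the documented defaults (equivalent to OwlViTImageProcessor())
image_processor = OwlViTImageProcessor(
    do_resize=True,
    size={"height": 768, "width": 768},
    do_rescale=True,
    rescale_factor=1 / 255,
    do_normalize=True,
    image_mean=[0.48145466, 0.4578275, 0.40821073],
    image_std=[0.26862954, 0.26130258, 0.27577711],
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Calling the image processor dispatches to `preprocess` and returns a BatchFeature
batch = image_processor(images=image, return_tensors="pt")
print(batch["pixel_values"].shape)  # (1, 3, 768, 768)
```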
|
318_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitprocessor
|
.md
|
OwlViTProcessor
Constructs an OWL-ViT processor which wraps [`OwlViTImageProcessor`] and [`CLIPTokenizer`]/[`CLIPTokenizerFast`]
into a single processor that inherits both the image processor and tokenizer functionalities. See the
[`~OwlViTProcessor.__call__`] and [`~OwlViTProcessor.decode`] for more information.
Args:
image_processor ([`OwlViTImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`CLIPTokenizer`, `CLIPTokenizerFast`], *optional*):
|
318_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitprocessor
|
.md
|
The image processor is a required input.
tokenizer ([`CLIPTokenizer`, `CLIPTokenizerFast`], *optional*):
The tokenizer is a required input.
- __call__
- post_process_grounded_object_detection
- post_process_image_guided_detection
|
318_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitmodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
318_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitmodel
|
.md
|
and behavior.
Parameters:
config ([`OwlViTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_image_features
|
318_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextmodel
|
.md
|
No docstring available for OwlViTTextModel
Methods: forward
|
318_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionmodel
|
.md
|
No docstring available for OwlViTVisionModel
Methods: forward
|
318_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
|
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitforobjectdetection
|
.md
|
No docstring available for OwlViTForObjectDetection
Methods: forward
- image_guided_detection
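Since no docstring is reproduced here, the following is only a sketch of the image-guided (one-shot) detection path named above. It reuses the COCO image from the earlier example as both target and query purely to stay self-contained, and the post-processing keyword names are assumptions that may differ between versions:
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, OwlViTForObjectDetection

processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# In practice the query image would show the object you want to find;
# the same image is reused here only to keep the sketch self-contained.
query_image = image

inputs = processor(images=image, query_images=query_image, return_tensors="pt")
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)

target_sizes = torch.tensor([(image.height, image.width)])
results = processor.post_process_image_guided_detection(
    outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes
)
print(results[0]["boxes"].shape, results[0]["scores"].shape)
```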
|
318_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
319_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
319_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
The GraniteMoe model was proposed in [Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler](https://arxiv.org/abs/2408.13359) by Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox and Rameswar Panda.
|
319_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
PowerMoE-3B is a 3B sparse Mixture-of-Experts (sMoE) language model trained with the Power learning rate scheduler. It sparsely activates 800M parameters for each token. It is trained on a mix of open-source and proprietary datasets. PowerMoE-3B has shown promising results compared to other dense models with 2x active parameters across various benchmarks, including natural language multiple-choice, code generation, and math reasoning.
The abstract from the paper is the following:
|
319_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
The abstract from the paper is the following:
*Finding the optimal learning rate for language model pretraining is a challenging task.
|
319_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
This is not only because there is a complicated correlation between learning rate, batch size, number of training tokens, model size, and other hyperparameters but also because it is prohibitively expensive to perform a hyperparameter search for large language models with Billions or Trillions of parameters. Recent studies propose using small proxy models and small corpus to perform hyperparameter searches and transposing the optimal parameters to large models and large corpus. While the zero-shot
|
319_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
to perform hyperparameter searches and transposing the optimal parameters to large models and large corpus. While the zero-shot transferability is theoretically and empirically proven for model size related hyperparameters, like depth and width, the zero-shot transfer from small corpus to large corpus is underexplored.
|
319_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
In this paper, we study the correlation between optimal learning rate, batch size, and number of training tokens for the recently proposed WSD scheduler. After thousands of small experiments, we found a power-law relationship between variables and demonstrated its transferability across model sizes. Based on the observation, we propose a new learning rate scheduler, Power scheduler, that is agnostic about the number of training tokens and batch size. The experiment shows that combining the Power scheduler
|
319_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
that is agnostic about the number of training tokens and batch size. The experiment shows that combining the Power scheduler with Maximum Update Parameterization (μP) can consistently achieve impressive performance with one set of hyperparameters regardless of the number of training tokens, batch size, model size, and even model architecture. Our 3B dense and MoE models trained with the Power scheduler achieve comparable performance as state-of-the-art small language models.
|
319_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
We [open source](https://huggingface.co/collections/ibm/power-lm-66be64ae647ddf11b9808000) these pretrained models.*
Tips:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
|
319_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
model_path = "ibm/PowerMoE-3b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
model.eval()
# change input text as desired
prompt = "Write a code to find the maximum value in a list of numbers."
|
319_1_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
|
.md
|
# tokenize the text
input_tokens = tokenizer(prompt, return_tensors="pt")
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
This model was contributed by [mayank-mishra](https://huggingface.co/mayank-mishra).
|
319_1_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
This is the configuration class to store the configuration of a [`GraniteMoeModel`]. It is used to instantiate a GraniteMoe
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the GraniteMoe-3B.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
|
319_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the GraniteMoe model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`GraniteMoeModel`].
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
|
319_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
|
319_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
|
319_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If not specified, it will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with.
|
319_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
|
319_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 1):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
|
319_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
|
319_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
attention_bias (`bool`, *optional*, defaults to `False`):
|
319_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
embedding_multiplier (`float`, *optional*, defaults to 1.0): embedding multiplier
logits_scaling (`float`, *optional*, defaults to 1.0): divisor for output logits
residual_multiplier (`float`, *optional*, defaults to 1.0): residual multiplier
|
319_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
residual_multiplier (`float`, *optional*, defaults to 1.0): residual multiplier
attention_multiplier (`float`, *optional*, defaults to 1.0): attention multiplier
num_local_experts (`int`, *optional*, defaults to 8): total number of experts
num_experts_per_tok (`int`, *optional*, defaults to 2): number of experts per token
output_router_logits (`bool`, *optional*, defaults to `False`):
Whether or not the router logits should be returned by the model. Enabling this will also
|
319_2_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
Whether or not the router logits should be returned by the model. Enabling this will also
allow the model to output the auxiliary loss.
router_aux_loss_coef (`float`, *optional*, defaults to 0.001): router auxiliary loss coefficient
```python
>>> from transformers import GraniteMoeModel, GraniteMoeConfig
|
319_2_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
|
.md
|
>>> # Initializing a GraniteMoe granitemoe-3b style configuration
>>> configuration = GraniteMoeConfig()
>>> # Initializing a model (with random weights) from the granitemoe-3b style configuration
>>> model = GraniteMoeModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
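Because the mixture-of-experts behaviour is controlled entirely by the arguments documented above, a customized variant can be sketched by overriding them. The values below are illustrative only:
```python
from transformers import GraniteMoeConfig, GraniteMoeForCausalLM

# Illustrative overrides of the documented MoE arguments
config = GraniteMoeConfig(
    num_local_experts=8,        # total number of experts
    num_experts_per_tok=2,      # experts activated per token
    output_router_logits=True,  # also return router logits / auxiliary loss
    router_aux_loss_coef=0.001,
)

# Randomly initialized model with the customized MoE layout
model = GraniteMoeForCausalLM(config)
```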
|
319_2_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoemodel
|
.md
|
The bare GraniteMoe Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
319_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoemodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`GraniteMoeConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
319_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoemodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`GraniteMoeDecoderLayer`]
Args:
config: GraniteMoeConfig
Methods: forward
|
319_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
|
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeforcausallm
|
.md
|
No docstring available for GraniteMoeForCausalLM
Methods: forward
|
319_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
|
https://huggingface.co/docs/transformers/en/model_doc/rwkv/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
320_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
|
https://huggingface.co/docs/transformers/en/model_doc/rwkv/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
320_0_1
|