source (string, 470 classes) | url (string, 49-167 chars) | file_type (string, 1 class) | chunk (string, 1-512 chars) | chunk_id (string, 5-9 chars)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipvisionconfig
|
.md
|
This is the configuration class to store the configuration of a [`BlipVisionModel`]. It is used to instantiate a
BLIP vision model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Blip-base
[Salesforce/blip-vqa-base](https://huggingface.co/Salesforce/blip-vqa-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
151_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipvisionconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
|
151_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipvisionconfig
|
.md
|
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
image_size (`int`, *optional*, defaults to 384):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
|
151_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipvisionconfig
|
.md
|
The size (resolution) of each patch.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` `"gelu"` are supported.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
|
151_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipvisionconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 1e-10):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
Example:
```python
>>> from transformers import BlipVisionConfig, BlipVisionModel
|
151_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipvisionconfig
|
.md
|
>>> # Initializing a BlipVisionConfig with Salesforce/blip-vqa-base style configuration
>>> configuration = BlipVisionConfig()
>>> # Initializing a BlipVisionModel (with random weights) from the Salesforce/blip-vqa-base style configuration
>>> model = BlipVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
151_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipprocessor
|
.md
|
Constructs a BLIP processor which wraps a BERT tokenizer and BLIP image processor into a single processor.
[`BlipProcessor`] offers all the functionalities of [`BlipImageProcessor`] and [`BertTokenizerFast`]. See the
docstring of [`~BlipProcessor.__call__`] and [`~BlipProcessor.decode`] for more information.
Args:
image_processor (`BlipImageProcessor`):
An instance of [`BlipImageProcessor`]. The image processor is a required input.
tokenizer (`BertTokenizerFast`):
|
151_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipprocessor
|
.md
|
An instance of [`BlipImageProcessor`]. The image processor is a required input.
tokenizer (`BertTokenizerFast`):
An instance of [`BertTokenizerFast`]. The tokenizer is a required input.
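A minimal usage sketch (it reuses the `Salesforce/blip-vqa-base` checkpoint referenced earlier on this page; the image URL and question are arbitrary examples):
```python
import requests
from PIL import Image
from transformers import BlipProcessor

# A single processor wrapping BlipImageProcessor and BertTokenizerFast
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Returns pixel_values from the image processor and input_ids/attention_mask from the tokenizer
inputs = processor(images=image, text="how many cats are there?", return_tensors="pt")
print(inputs.keys())
```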
|
151_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipimageprocessor
|
.md
|
Constructs a BLIP image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the
`do_resize` parameter in the `preprocess` method.
size (`dict`, *optional*, defaults to `{"height": 384, "width": 384}`):
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess`
method.
|
151_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipimageprocessor
|
.md
|
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Only has an effect if `do_resize` is set to `True`. Can be
overridden by the `resample` parameter in the `preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
|
151_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipimageprocessor
|
.md
|
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Only has an effect if `do_rescale` is set to `True`. Can be
overridden by the `rescale_factor` parameter in the `preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
|
151_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipimageprocessor
|
.md
|
overridden by the `rescale_factor` parameter in the `preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
|
151_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipimageprocessor
|
.md
|
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
|
151_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipimageprocessor
|
.md
|
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
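For illustration, a hedged sketch showing how these defaults can be overridden at preprocessing time (the override values below are arbitrary):
```python
import numpy as np
from PIL import Image
from transformers import BlipImageProcessor

# Instantiated with the documented defaults: 384x384 bicubic resize, 1/255 rescale, normalization, RGB conversion
image_processor = BlipImageProcessor()

image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

# Any constructor default can be overridden per call to `preprocess` (invoked here via __call__)
outputs = image_processor(image, size={"height": 224, "width": 224}, do_normalize=False, return_tensors="pt")
print(outputs["pixel_values"].shape)  # (1, 3, 224, 224)
```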
Methods: preprocess
<frameworkcontent>
<pt>
|
151_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipmodel
|
.md
|
`BlipModel` is going to be deprecated in future versions. Please use `BlipForConditionalGeneration`, `BlipForImageTextRetrieval` or `BlipForQuestionAnswering` depending on your use case.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
151_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipmodel
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
151_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipmodel
|
.md
|
and behavior.
Parameters:
config ([`BlipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- get_text_features
- get_image_features
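Since `BlipModel` is slated for deprecation, the sketch below is illustrative only; it assumes the `Salesforce/blip-vqa-base` checkpoint from this page (weights not used by the bare dual-encoder are simply ignored):
```python
import torch
import requests
from PIL import Image
from transformers import BlipModel, BlipProcessor

model = BlipModel.from_pretrained("Salesforce/blip-vqa-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(images=image, text="two cats sleeping on a couch", return_tensors="pt")

with torch.no_grad():
    # Projected text and image embeddings, as exposed by the methods listed above
    text_features = model.get_text_features(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)
    image_features = model.get_image_features(pixel_values=inputs.pixel_values)
print(text_features.shape, image_features.shape)
```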
|
151_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextmodel
|
.md
|
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in [Attention is
all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To be used as a decoder, the model needs to be initialized with the `is_decoder` argument set to `True`; an
|
151_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#bliptextmodel
|
.md
|
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To be used as a decoder, the model needs to be initialized with the `is_decoder` argument set to `True`; an
`encoder_hidden_states` is then expected as an input to the forward pass.
Methods: forward
|
151_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipvisionmodel
|
.md
|
No docstring available for BlipVisionModel
Methods: forward
|
151_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforconditionalgeneration
|
.md
|
BLIP Model for image captioning. The model consists of a vision encoder and a text decoder. One can optionally pass
`input_ids` to the model, which serve as a text prompt, to make the text decoder continue the prompt; in that case the
decoder will start generating the caption from the text input. If no text input is provided, the decoder will start
with the [BOS] (beginning-of-sequence) token only.
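A hedged captioning sketch along these lines; the `Salesforce/blip-image-captioning-base` checkpoint name is an assumption (any BLIP captioning checkpoint would do), and the prompt string is arbitrary:
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# Unconditional captioning: generation starts from the [BOS] token
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))

# Conditional captioning: `input_ids` act as a prompt that the decoder continues
inputs = processor(images=image, text="a photography of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```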
|
151_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforconditionalgeneration
|
.md
|
from the text input. If no text input is provided, the decoder will start with the [BOS] token only.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
151_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforconditionalgeneration
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BlipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
151_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforconditionalgeneration
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
151_11_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforimagetextretrieval
|
.md
|
BLIP Model with a vision and text projector, and a classification head on top. The model is used in the context of
image-text retrieval. Given an image and a text, the model returns the probability of the text being relevant to
the image.
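A hedged sketch of scoring image-text relevance with this head; the `Salesforce/blip-itm-base-coco` checkpoint name is an assumption, and `itm_score` refers to the (no-match, match) logits returned by the forward pass:
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(images=image, text="two cats sleeping on a couch", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the image-text matching logits gives the probability that the text matches the image
probs = torch.softmax(outputs.itm_score, dim=-1)
print(f"probability of a match: {probs[0, 1]:.3f}")
```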
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
151_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforimagetextretrieval
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BlipConfig`]): Model configuration class with all the parameters of the model.
|
151_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforimagetextretrieval
|
.md
|
and behavior.
Parameters:
config ([`BlipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
151_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforquestionanswering
|
.md
|
BLIP Model for visual question answering. The model consists of a vision encoder, a text encoder as well as a text
decoder. The vision encoder will encode the input image, the text encoder will encode the input question together
with the encoding of the image, and the text decoder will output the answer to the question.
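A hedged VQA sketch matching this description, using the `Salesforce/blip-vqa-base` checkpoint referenced earlier on this page:
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# The question is encoded together with the image; the text decoder then generates the answer
inputs = processor(images=image, text="how many cats are there?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=10)
print(processor.decode(out[0], skip_special_tokens=True))
```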
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
151_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforquestionanswering
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
151_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#blipforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`BlipConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf>
|
151_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#tfblipmodel
|
.md
|
No docstring available for TFBlipModel
Methods: call
- get_text_features
- get_image_features
|
151_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#tfbliptextmodel
|
.md
|
No docstring available for TFBlipTextModel
Methods: call
|
151_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#tfblipvisionmodel
|
.md
|
No docstring available for TFBlipVisionModel
Methods: call
|
151_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#tfblipforconditionalgeneration
|
.md
|
No docstring available for TFBlipForConditionalGeneration
Methods: call
|
151_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#tfblipforimagetextretrieval
|
.md
|
No docstring available for TFBlipForImageTextRetrieval
Methods: call
|
151_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip.md
|
https://huggingface.co/docs/transformers/en/model_doc/blip/#tfblipforquestionanswering
|
.md
|
No docstring available for TFBlipForQuestionAnswering
Methods: call
</tf>
</frameworkcontent>
|
151_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
152_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
152_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#overview
|
.md
|
The Persimmon model was created by [ADEPT](https://www.adept.ai/blog/persimmon-8b), and authored by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.
|
152_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#overview
|
.md
|
The authors introduced Persimmon-8B, a decoder model based on the classic transformers architecture, with query and key normalization. Persimmon-8B is a fully permissively-licensed model with approximately 8 billion parameters, released under the Apache license. Some of the key attributes of Persimmon-8B are long context size (16K), performance, and capabilities for multimodal extensions.
|
152_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#overview
|
.md
|
The authors showcase their approach to model evaluation, focusing on practical text generation, mirroring how users interact with language models. The work also includes a comparative analysis, pitting Persimmon-8B against other prominent models (MPT 7B Instruct and Llama 2 Base 7B 1-Shot), across various evaluation tasks. The results demonstrate Persimmon-8B's competitive performance, even with limited training data.
|
152_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#overview
|
.md
|
In terms of model details, the work outlines the architecture and training methodology of Persimmon-8B, providing insights into its design choices, sequence length, and dataset composition. The authors present a fast inference code that outperforms traditional implementations through operator fusion and CUDA graph utilization while maintaining code coherence. They express their anticipation of how the community will leverage this contribution to drive innovation, hinting at further upcoming releases as
|
152_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#overview
|
.md
|
anticipation of how the community will leverage this contribution to drive innovation, hinting at further upcoming releases as part of an ongoing series of developments.
|
152_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#overview
|
.md
|
This model was contributed by [ArthurZ](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/persimmon-ai-labs/adept-inference).
|
152_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#usage-tips
|
.md
|
<Tip warning={true}>
The `Persimmon` models were trained using `bfloat16`, but the original inference uses `float16`. The checkpoints uploaded on the hub use `torch_dtype = 'float16'`, which will be
used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`.
|
152_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#usage-tips
|
.md
|
The `dtype` of the online weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model with `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online) and then cast to the default `dtype` of `torch` (which is `torch.float32`). Users should specify the `torch_dtype` they want; if they don't, it will be `torch.float32`.
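A minimal sketch of the two behaviours described above, assuming the `adept/persimmon-8b-base` checkpoint mentioned later on this page:
```python
import torch
from transformers import AutoModelForCausalLM

# torch_dtype="auto": keep the dtype stored with the checkpoint (float16 for the hub Persimmon checkpoints)
model = AutoModelForCausalLM.from_pretrained("adept/persimmon-8b-base", torch_dtype="auto")

# Explicit dtype: recommended before fine-tuning, since float16 fine-tuning is known to produce nan
model = AutoModelForCausalLM.from_pretrained("adept/persimmon-8b-base", torch_dtype=torch.bfloat16)
```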
|
152_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#usage-tips
|
.md
|
Fine-tuning the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be fine-tuned in `bfloat16`.
</Tip>
Tips:
- To convert the model, you need to clone the original repository using `git clone https://github.com/persimmon-ai-labs/adept-inference`, then get the checkpoints:
```bash
git clone https://github.com/persimmon-ai-labs/adept-inference
|
152_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#usage-tips
|
.md
|
```bash
git clone https://github.com/persimmon-ai-labs/adept-inference
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_base_model_release.tar
tar -xvf 8b_base_model_release.tar
python src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py --input_dir /path/to/downloaded/persimmon/weights/ --output_dir /output/path \
--pt_model_path /path/to/8b_chat_model_release/iter_0001251/mp_rank_00/model_optim_rng.pt
|
152_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#usage-tips
|
.md
|
--pt_model_path /path/to/8b_chat_model_release/iter_0001251/mp_rank_00/model_optim_rng.pt \
--ada_lib_path /path/to/adept-inference
```
For the chat model:
```bash
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar
tar -xvf 8b_chat_model_release.tar
```
Thereafter, models can be loaded via:
```py
from transformers import PersimmonForCausalLM, PersimmonTokenizer
|
152_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#usage-tips
|
.md
|
model = PersimmonForCausalLM.from_pretrained("/output/path")
tokenizer = PersimmonTokenizer.from_pretrained("/output/path")
```
- Persimmon uses a `sentencepiece` based tokenizer, with a `Unigram` model. It supports bytefallback, which is only available in `tokenizers==0.14.0` for the fast tokenizer.
The `LlamaTokenizer` is used as it is a standard wrapper around sentencepiece. The `chat` template will be updated with the templating functions in a follow up PR!
|
152_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#usage-tips
|
.md
|
- The authors suggest using the following prompt format for chat mode: `f"human: {prompt}\n\nadept:"`
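A hedged sketch applying that prompt format; the `adept/persimmon-8b-chat` checkpoint name is an assumption (mirroring the base checkpoint naming), and the generation settings are arbitrary:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("adept/persimmon-8b-chat", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("adept/persimmon-8b-chat")

prompt = "What is the capital of France?"
# Prompt format suggested by the authors for chat mode
text = f"human: {prompt}\n\nadept:"

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```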
|
152_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
This is the configuration class to store the configuration of a [`PersimmonModel`]. It is used to instantiate a
Persimmon model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
[adept/persimmon-8b-base](https://huggingface.co/adept/persimmon-8b-base).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
152_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 262144):
Vocabulary size of the Persimmon model. Defines the number of different tokens that can be represented by
the `input_ids` passed when calling [`PersimmonModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
|
152_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 16384):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 36):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 64):
Number of attention heads for each attention layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"relu2"`):
|
152_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
hidden_act (`str` or `function`, *optional*, defaults to `"relu2"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 16384):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
|
152_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
tie_word_embeddings(`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 25000.0):
The base period of the RoPE embeddings.
|
152_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 25000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and you expect the model to work on longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
|
152_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
|
152_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to the value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
|
152_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
|
152_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
|
152_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
|
152_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
qk_layernorm (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the Queries and Keys after projecting the hidden states
hidden_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio after applying the MLP to the hidden states.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio after computing the attention scores.
|
152_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio after computing the attention scores.
partial_rotary_factor (`float`, *optional*, defaults to 0.5):
Percentage of the query and keys which will have rotary embedding.
Example:
```python
>>> from transformers import PersimmonModel, PersimmonConfig
|
152_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonconfig
|
.md
|
>>> # Initializing a Persimmon persimmon-8b style configuration
>>> configuration = PersimmonConfig()
```
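The snippet above stops at the configuration; a hedged continuation that builds a randomly-initialized model from it would look like this:
```python
>>> from transformers import PersimmonModel, PersimmonConfig

>>> # Default configuration (uses the argument defaults documented above)
>>> configuration = PersimmonConfig()

>>> # Randomly-initialized model built from that configuration (no pretrained weights are loaded)
>>> model = PersimmonModel(configuration)

>>> # The configuration can be read back from the model
>>> configuration = model.config
```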
|
152_3_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonmodel
|
.md
|
The bare Persimmon Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
152_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PersimmonConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
152_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`PersimmonDecoderLayer`]
Args:
config: PersimmonConfig
Methods: forward
|
152_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonforcausallm
|
.md
|
No docstring available for PersimmonForCausalLM
Methods: forward
|
152_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonforsequenceclassification
|
.md
|
The Persimmon transformer with a sequence classification head on top (linear layer).
[`PersimmonForSequenceClassification`] uses the last token in order to do the classification, as other causal
models (e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
152_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
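A hedged sketch of how this plays out with padded batches; `adept/persimmon-8b-base` plus a freshly-initialized two-label head is used as a placeholder, and the padding-token handling is an assumption about the tokenizer:
```python
import torch
from transformers import AutoTokenizer, PersimmonForSequenceClassification

# Base weights plus a randomly-initialized classification head (num_labels is arbitrary here)
model = PersimmonForSequenceClassification.from_pretrained("adept/persimmon-8b-base", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("adept/persimmon-8b-base")

# Make sure a padding token is defined so the model can locate the last non-padding token in each row
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(["a short example", "a slightly longer example sentence"], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # logits taken at each sequence's last real (non-padding) token
print(logits.shape)  # (2, 2)
```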
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
152_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
152_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`PersimmonConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
152_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonfortokenclassification
|
.md
|
The Persimmon Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
152_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonfortokenclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PersimmonConfig`]):
|
152_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/persimmon.md
|
https://huggingface.co/docs/transformers/en/model_doc/persimmon/#persimmonfortokenclassification
|
.md
|
and behavior.
Parameters:
config ([`PersimmonConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
152_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/
|
.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
153_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
153_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#overview
|
.md
|
DePlot was proposed in the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) from Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
The abstract of the paper states the following:
|
153_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#overview
|
.md
|
*Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text
|
153_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#overview
|
.md
|
solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we
|
153_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#overview
|
.md
|
to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on
|
153_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#overview
|
.md
|
on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.*
|
153_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#overview
|
.md
|
DePlot is a model that is trained using `Pix2Struct` architecture. You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct).
DePlot is a Visual Question Answering subset of `Pix2Struct` architecture. It renders the input question on the image and predicts the answer.
|
153_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#usage-example
|
.md
|
Currently one checkpoint is available for DePlot:
- `google/deplot`: DePlot fine-tuned on ChartQA dataset
```python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
|
153_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#usage-example
|
.md
|
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")
processor = AutoProcessor.from_pretrained("google/deplot")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)
|
153_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#usage-example
|
.md
|
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```
|
153_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#fine-tuning
|
.md
|
To fine-tune DePlot, refer to the pix2struct [fine-tuning notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb). For `Pix2Struct` models, we have found out that fine-tuning the model with Adafactor and cosine learning rate scheduler leads to faster convergence:
```python
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
|
153_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deplot.md
|
https://huggingface.co/docs/transformers/en/model_doc/deplot/#fine-tuning
|
.md
|
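# Note: `self.parameters()` assumes this snippet lives inside a training module (e.g. a LightningModule) wrapping the model; in a standalone script, `model.parameters()` would be used instead.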
optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
```
<Tip>
DePlot is a model trained using `Pix2Struct` architecture. For API reference, see [`Pix2Struct` documentation](pix2struct).
</Tip>
|
153_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
154_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
154_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#overview
|
.md
|
The Gemma2 model was proposed in [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by Gemma2 Team, Google.
Two Gemma2 models are released, with parameter sizes of 9 billion (9B) and 27 billion (27B).
The abstract from the blog post is the following:
|
154_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#overview
|
.md
|
*Now we’re officially releasing Gemma 2 to researchers and developers globally. Available in both 9 billion (9B) and 27 billion (27B) parameter sizes, Gemma 2 is higher-performing and more efficient at inference than the first generation, with significant safety advancements built in. In fact, at 27B, it offers competitive alternatives to models more than twice its size, delivering the kind of performance that was only possible with proprietary models as recently as December.*
Tips:
|
154_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#overview
|
.md
|
Tips:
- The original checkpoints can be converted using the conversion script `src/transformers/models/gemma2/convert_gemma2_weights_to_hf.py`
<Tip warning={true}>
|
154_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#overview
|
.md
|
<Tip warning={true}>
- Gemma2 uses sliding window attention every second layer, which makes it unsuitable for typical kv caching with [`~DynamicCache`] or tuples of tensors. To enable caching in the Gemma2 forward call, you must initialize a [`~HybridCache`] instance and pass it as `past_key_values` to the forward call. Note that you also have to prepare `cache_position` if the `past_key_values` already contains previous keys and values.
</Tip>
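A hedged sketch of that workflow; the `google/gemma-2-9b` checkpoint name and the exact `HybridCache` constructor arguments are assumptions and may differ across `transformers` versions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HybridCache

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")

inputs = tokenizer("The capital of France is", return_tensors="pt")

# Pre-allocate a HybridCache (alternating sliding-window / global layers) instead of a DynamicCache
max_cache_len = inputs.input_ids.shape[1] + 32
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=max_cache_len,
    device=model.device,
    dtype=model.dtype,
)

outputs = model.generate(**inputs, past_key_values=past_key_values, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```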
|
154_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#overview
|
.md
|
</Tip>
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Pedro Cuenca](https://huggingface.co/pcuenq) and [Tom Arsen]().
|
154_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
|
.md
|
This is the configuration class to store the configuration of a [`Gemma2Model`]. It is used to instantiate a Gemma2
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Gemma2-7B.
e.g. [google/gemma2-7b](https://huggingface.co/google/gemma2-7b)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
154_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 256000):
Vocabulary size of the Gemma2 model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`Gemma2Model`]
hidden_size (`int`, *optional*, defaults to 2304):
Dimension of the hidden representations.
|
154_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
|
.md
|
hidden_size (`int`, *optional*, defaults to 2304):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 9216):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 26):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*, defaults to 4):
|
154_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma2.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma2/#gemma2config
|
.md
|
num_key_value_heads (`int`, *optional*, defaults to 4):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
|
154_2_3
|