# Pix2Struct

## Overview

The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. The abstract from the paper is the following:

> Visually-situated language is ubiquitous -- sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
Tips:

Pix2Struct has been fine-tuned on a variety of tasks and datasets, ranging from image captioning and visual question answering (VQA) over different inputs (books, charts, science diagrams) to captioning UI components. The full list can be found in Table 1 of the paper.
We therefore advise you to use these models for the tasks they have been fine-tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine-tuned on the UI dataset. If you want to use Pix2Struct for image captioning, you should use the model fine-tuned on the natural image captioning dataset, and so on.

If you want to use the model to perform conditional text captioning, make sure to use the processor with `add_special_tokens=False`, as shown in the sketch below.
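A minimal sketch of conditional captioning, assuming the `google/pix2struct-textcaps-base` checkpoint and an illustrative image URL:

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Pass the start of the caption as text; add_special_tokens=False keeps the prompt
# open-ended so the model continues it rather than treating it as a complete sequence.
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```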
This model was contributed by [ybelkada](https://huggingface.co/ybelkada).
The original code can be found [here](https://github.com/google-research/pix2struct).
## Resources

- [Fine-tuning Notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb)
- [All models](https://huggingface.co/models?search=pix2struct)
## Pix2StructConfig

[`Pix2StructConfig`] is the configuration class to store the configuration of a
[`Pix2StructForConditionalGeneration`]. It is used to instantiate a Pix2Struct model according to the specified
arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will
yield a similar configuration to that of the Pix2Struct-base
[google/pix2struct-base](https://huggingface.co/google/pix2struct-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.

Args:
    text_config (`dict`, *optional*):
        Dictionary of configuration options used to initialize [`Pix2StructTextConfig`].
    vision_config (`dict`, *optional*):
        Dictionary of configuration options used to initialize [`Pix2StructVisionConfig`].
    initializer_factor (`float`, *optional*, defaults to 1.0):
        Factor to multiply the initialization range with.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    is_vqa (`bool`, *optional*, defaults to `False`):
        Whether the model has been fine-tuned for VQA or not.
    kwargs (*optional*):
        Dictionary of keyword arguments.

Example:

```python
>>> from transformers import Pix2StructConfig, Pix2StructForConditionalGeneration, Pix2StructTextConfig, Pix2StructVisionConfig
>>> # Initializing a Pix2StructConfig with google/pix2struct-base style configuration
>>> configuration = Pix2StructConfig()

>>> # Initializing a Pix2StructForConditionalGeneration (with random weights) from the google/pix2struct-base style configuration
>>> model = Pix2StructForConditionalGeneration(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

>>> # We can also initialize a Pix2StructConfig from a Pix2StructTextConfig and a Pix2StructVisionConfig
>>> # Initializing a Pix2Struct text and Pix2Struct vision configuration
>>> config_text = Pix2StructTextConfig()
>>> config_vision = Pix2StructVisionConfig()
>>> config = Pix2StructConfig.from_text_vision_configs(config_text, config_vision)
```

Methods: from_text_vision_configs
## Pix2StructTextConfig

This is the configuration class to store the configuration of a [`Pix2StructTextModel`]. It is used to instantiate
a Pix2Struct text model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Pix2Struct text decoder used by
the [google/pix2struct-base](https://huggingface.co/google/pix2struct-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.

Args:
    vocab_size (`int`, *optional*, defaults to 50244):
        Vocabulary size of the `Pix2Struct` text model. Defines the number of different tokens that can be
        represented by the `input_ids` passed when calling [`Pix2StructTextModel`].
    hidden_size (`int`, *optional*, defaults to 768):
        Dimensionality of the encoder layers and the pooler layer.
    d_kv (`int`, *optional*, defaults to 64):
        Dimensionality of the key, query, value projections in each attention head.
    d_ff (`int`, *optional*, defaults to 2048):
        Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
    num_layers (`int`, *optional*, defaults to 12):
        Number of hidden layers in the Transformer encoder.
    num_heads (`int`, *optional*, defaults to 12):
        Number of attention heads for each attention layer in the Transformer encoder.
    relative_attention_num_buckets (`int`, *optional*, defaults to 32):
        The number of buckets to use for each attention layer.
    relative_attention_max_distance (`int`, *optional*, defaults to 128):
        The maximum distance of the longer sequences for the bucket separation.
    dropout_rate (`float`, *optional*, defaults to 0.1):
        The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
    layer_norm_epsilon (`float`, *optional*, defaults to 1e-6):
        The epsilon used by the layer normalization layers.
    initializer_factor (`float`, *optional*, defaults to 1.0):
        A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
        testing).
    dense_act_fn (`Union[Callable, str]`, *optional*, defaults to `"gelu_new"`):
        The non-linear activation function (function or string).
    decoder_start_token_id (`int`, *optional*, defaults to 0):
        The id of the `decoder_start_token_id` token.
    use_cache (`bool`, *optional*, defaults to `False`):
        Whether or not the model should return the last key/values attentions (not used by all models).
    pad_token_id (`int`, *optional*, defaults to 0):
        The id of the `padding` token.
    eos_token_id (`int`, *optional*, defaults to 1):
        The id of the `end-of-sequence` token.

Example:

```python
>>> from transformers import Pix2StructTextConfig, Pix2StructTextModel
>>> # Initializing a Pix2StructTextConfig with google/pix2struct-base style configuration
>>> configuration = Pix2StructTextConfig()

>>> # Initializing a Pix2StructTextModel (with random weights) from the google/pix2struct-base style configuration
>>> model = Pix2StructTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
## Pix2StructVisionConfig

This is the configuration class to store the configuration of a [`Pix2StructVisionModel`]. It is used to
instantiate a Pix2Struct vision model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Pix2Struct-base
[google/pix2struct-base](https://huggingface.co/google/pix2struct-base) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.

Args:
    hidden_size (`int`, *optional*, defaults to 768):
        Dimensionality of the encoder layers and the pooler layer.
    patch_embed_hidden_size (`int`, *optional*, defaults to 768):
        Dimensionality of the input patch_embedding layer in the Transformer encoder.
    d_ff (`int`, *optional*, defaults to 2048):
        Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
    d_kv (`int`, *optional*, defaults to 64):
        Dimensionality of the key, query, value projections per attention head.
    num_hidden_layers (`int`, *optional*, defaults to 12):
        Number of hidden layers in the Transformer encoder.
    num_attention_heads (`int`, *optional*, defaults to 12):
        Number of attention heads for each attention layer in the Transformer encoder.
    dense_act_fn (`str` or `function`, *optional*, defaults to `"gelu_new"`):
        The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
        `"relu"`, `"selu"` and `"gelu_new"` are supported.
    layer_norm_eps (`float`, *optional*, defaults to 1e-06):
        The epsilon used by the layer normalization layers.
    dropout_rate (`float`, *optional*, defaults to 0.0):
        The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
    attention_dropout (`float`, *optional*, defaults to 0.0):
        The dropout ratio for the attention probabilities.
    initializer_range (`float`, *optional*, defaults to 1e-10):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    initializer_factor (`float`, *optional*, defaults to 1.0):
        A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
        testing).
    seq_len (`int`, *optional*, defaults to 4096):
        Maximum sequence length (here the number of patches) supported by the model.
    relative_attention_num_buckets (`int`, *optional*, defaults to 32):
        The number of buckets to use for each attention layer.
    relative_attention_max_distance (`int`, *optional*, defaults to 128):
        The maximum distance (in tokens) to use for each attention layer.

Example:

```python
>>> from transformers import Pix2StructVisionConfig, Pix2StructVisionModel
>>> # Initializing a Pix2StructVisionConfig with google/pix2struct-base style configuration
>>> configuration = Pix2StructVisionConfig()

>>> # Initializing a Pix2StructVisionModel (with random weights) from the google/pix2struct-base style configuration
>>> model = Pix2StructVisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
## Pix2StructProcessor

Constructs a Pix2Struct processor which wraps a T5 tokenizer and a Pix2Struct image processor into a single
processor.

[`Pix2StructProcessor`] offers all the functionalities of [`Pix2StructImageProcessor`] and [`T5TokenizerFast`]. See
the docstring of [`~Pix2StructProcessor.__call__`] and [`~Pix2StructProcessor.decode`] for more information.

Args:
    image_processor (`Pix2StructImageProcessor`):
        An instance of [`Pix2StructImageProcessor`]. The image processor is a required input.
    tokenizer (Union[`T5TokenizerFast`, `T5Tokenizer`]):
        An instance of [`T5TokenizerFast`] or [`T5Tokenizer`]. The tokenizer is a required input.
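A sketch of how the two required components fit together; in practice you would usually load everything at once with `Pix2StructProcessor.from_pretrained`, and the checkpoint name here is illustrative:

```python
from transformers import Pix2StructImageProcessor, Pix2StructProcessor, T5TokenizerFast

# Assemble a processor from its two required parts
image_processor = Pix2StructImageProcessor(max_patches=2048)
tokenizer = T5TokenizerFast.from_pretrained("google/pix2struct-base")
processor = Pix2StructProcessor(image_processor=image_processor, tokenizer=tokenizer)

# Equivalent one-liner for pretrained checkpoints:
# processor = Pix2StructProcessor.from_pretrained("google/pix2struct-base")
```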
## Pix2StructImageProcessor

Constructs a Pix2Struct image processor.

Args:
    do_convert_rgb (`bool`, *optional*, defaults to `True`):
        Whether to convert the image to RGB.
    do_normalize (`bool`, *optional*, defaults to `True`):
        Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
        method. According to the Pix2Struct paper and code, the image is normalized with its own mean and standard
        deviation.
    patch_size (`Dict[str, int]`, *optional*, defaults to `{"height": 16, "width": 16}`):
        The patch size to use for the image. According to the Pix2Struct paper and code, the patch size is 16x16.
    max_patches (`int`, *optional*, defaults to 2048):
        The maximum number of patches to extract from the image as per the [Pix2Struct
        paper](https://arxiv.org/pdf/2210.03347.pdf).
    is_vqa (`bool`, *optional*, defaults to `False`):
        Whether or not the image processor is for the VQA task. If `True` and `header_text` is passed in, text is
        rendered onto the input images.

Methods: preprocess
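A minimal sketch of what preprocessing produces; the image URL is an illustrative assumption, and the last dimension packs a row id, a column id, and the flattened 16x16x3 patch pixels:

```python
import requests
from PIL import Image
from transformers import Pix2StructImageProcessor

image_processor = Pix2StructImageProcessor(max_patches=1024)

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
print(inputs.flattened_patches.shape)  # (1, 1024, 770): 2 position ids + 16 * 16 * 3 pixel values
print(inputs.attention_mask.shape)     # (1, 1024): 1 for real patches, 0 for padding
```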
## Pix2StructTextModel

The standalone text decoder of Pix2Struct.

The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language
Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu,
Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. It is an encoder-decoder
transformer pre-trained in an image-to-text setting.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.

Parameters:
    config (Union[`Pix2StructConfig`, `Pix2StructTextConfig`]):
        Model configuration class with all the parameters of the model. Initializing with a config file does not
        load the weights associated with the model, only the configuration. Check out the
        [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
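A minimal sketch of the decoder's forward signature on a tiny randomly initialized configuration; the sizes are arbitrary assumptions for illustration:

```python
import torch
from transformers import Pix2StructTextConfig, Pix2StructTextModel

# Tiny random decoder, just to show inputs and outputs
config = Pix2StructTextConfig(vocab_size=100, hidden_size=64, d_kv=16, d_ff=128, num_layers=2, num_heads=4)
model = Pix2StructTextModel(config)

input_ids = torch.randint(0, config.vocab_size, (1, 8))
outputs = model(input_ids=input_ids)
print(outputs.logits.shape)  # (1, 8, 100): the standalone decoder already includes the LM head
```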
## Pix2StructVisionModel

The bare Pix2Struct vision model transformer outputting raw hidden-states without any specific head on top.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.

Parameters:
    config ([`Pix2StructConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
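A minimal sketch of running the bare vision encoder on the flattened patches produced by the processor; the checkpoint and image URL are illustrative assumptions:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, Pix2StructVisionModel

processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructVisionModel.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, num_patches, hidden_size)
```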
## Pix2StructForConditionalGeneration

A conditional generation model with a language modeling head. Can be used for sequence generation tasks.

The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language
Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu,
Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. It is an encoder-decoder
transformer pre-trained in an image-to-text setting.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.

Parameters:
    config (Union[`Pix2StructConfig`, `Pix2StructTextConfig`]):
        Model configuration class with all the parameters of the model. Initializing with a config file does not
        load the weights associated with the model, only the configuration. Check out the
        [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
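A minimal sketch of a training-style forward pass with `labels`; the checkpoint, image URL, and target caption are illustrative assumptions:

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
labels = processor(text="A stop sign is on the street", return_tensors="pt").input_ids

outputs = model(flattened_patches=inputs.flattened_patches,
                attention_mask=inputs.attention_mask,
                labels=labels)
print(outputs.loss)  # cross-entropy loss used for fine-tuning
```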
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Mamba 2
## Overview

The Mamba2 model was proposed in [Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality](https://arxiv.org/abs/2405.21060) by Tri Dao and Albert Gu. It is a State Space Model similar to Mamba 1, with better performance in a simplified architecture.

The abstract from the paper is the following:

*While Transformers have been the main architecture behind deep learning's success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is a refinement of Mamba's selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling.*
Tips:

This version should support all implementations of Mamba 2, and in particular [Mamba-2 codestral](https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1) from Mistral AI. In particular, Mamba-2 codestral was released with a number of `groups` equal to 8, which can intuitively be thought of as similar to the number of kv heads in an attention-based model.

This model has two different forward passes, `torch_forward` and `cuda_kernels_forward`. The latter uses the original CUDA kernels if they are found in your environment, and is slower on the prefill, i.e. it requires a "warmup run" due to high CPU overhead, see [here](https://github.com/state-spaces/mamba/issues/389#issuecomment-2171755306) and [also here](https://github.com/state-spaces/mamba/issues/355#issuecomment-2147597457). Without compilation, the `torch_forward` implementation is faster by a factor of 3 to 4. Further, there are no positional embeddings in this model, but there is an `attention_mask` and specific logic to mask out hidden states in two places in the case of batched generation, see [here](https://github.com/state-spaces/mamba/issues/66#issuecomment-1863563829) as well. Due to this, in addition to the reimplementation of the Mamba2 kernels, batched generation and cached generation are expected to show slight discrepancies. Further, the results given by the CUDA kernels and the torch forward are expected to differ slightly. The SSM algorithm relies heavily on tensor contractions, which have matmul equivalents but whose order of operations is slightly different, making the difference greater at smaller precisions.

Note also that the shutdown of hidden states corresponding to padding tokens is done in two places and has mostly been tested with left-padding. Right-padding will propagate noise down the line and is not guaranteed to yield satisfactory results. `tokenizer.padding_side = "left"` ensures you are using the correct padding side; a batched-generation sketch follows below.
This model was contributed by [Molbap](https://huggingface.co/Molbap), with tremendous help from [Anton Vlasjuk](https://github.com/vasqu).
The original code can be found [here](https://github.com/state-spaces/mamba).
## A simple generation example

```python
from transformers import Mamba2Config, Mamba2ForCausalLM, AutoTokenizer
import torch

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```
Here's a draft script for finetuning:

```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, Mamba2ForCausalLM, TrainingArguments

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # enforce padding side left

model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
dataset = load_dataset("Abirate/english_quotes", split="train")
# Without CUDA kernels, batch size of 2 occupies one 80GB device
# but precision can be reduced.
# Experiments and trials welcome!
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3
)
lora_config = LoraConfig(
    r=8,
    target_modules=["embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none"
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```
## Mamba2Config

This is the configuration class to store the configuration of a [`Mamba2Model`]. It is used to instantiate a MAMBA2
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the MAMBA2
[state-spaces/mamba2-2.8b](https://huggingface.co/state-spaces/mamba2-2.8b) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.

Args:
    num_heads (`int`, *optional*, defaults to 128):
        Number of heads for the evolution matrices of mamba 2.
    head_dim (`int`, *optional*, defaults to 64):
        Dimension of each head.
    vocab_size (`int`, *optional*, defaults to 32768):
        Vocabulary size of the MAMBA2 model. Defines the number of different tokens that can be represented by the
        `input_ids` passed when calling [`Mamba2Model`].
    hidden_size (`int`, *optional*, defaults to 4096):
        Dimensionality of the embeddings and hidden states.
    state_size (`int`, *optional*, defaults to 128):
        Shape of the state space latents.
    num_hidden_layers (`int`, *optional*, defaults to 64):
        Number of hidden layers in the model.
    layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
        The epsilon to use in the layer normalization layers.
    pad_token_id (`int`, *optional*, defaults to 1):
        Padding token id.
    bos_token_id (`int`, *optional*, defaults to 0):
        The id of the beginning of sentence token in the vocabulary.
    eos_token_id (`int`, *optional*, defaults to 2):
        The id of the end of sentence token in the vocabulary.
    expand (`int`, *optional*, defaults to 2):
        Expanding factor used to determine the intermediate size.
    conv_kernel (`int`, *optional*, defaults to 4):
        Size of the convolution kernel.
    n_groups (`int`, *optional*, defaults to 8):
        Number of groups for the evolution matrices of mamba 2.
    use_bias (`bool`, *optional*, defaults to `False`):
        Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
    use_conv_bias (`bool`, *optional*, defaults to `True`):
        Whether or not to use bias in the convolution layer of the mixer block.
    hidden_act (`str`, *optional*, defaults to `"silu"`):
        The non-linear activation function (function or string) in the decoder.
    initializer_range (`float`, *optional*, defaults to 0.1):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    residual_in_fp32 (`bool`, *optional*, defaults to `True`):
        Whether or not residuals should be in `float32`. If set to `False`, residuals will keep the same `dtype` as the rest of the model.
    time_step_rank (`Union[int, str]`, *optional*, defaults to `"auto"`):
        Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`.
    time_step_min (`float`, *optional*, defaults to 0.001):
        Minimum `time_step` used to bound `dt_proj.bias`.
    time_step_max (`float`, *optional*, defaults to 0.1):
        Maximum `time_step` used to bound `dt_proj.bias`.
    time_step_floor (`float`, *optional*, defaults to 0.0001):
        Minimum clamping value of the `dt_proj.bias` layer initialization.
    time_step_limit (`tuple`, *optional*, defaults to `(0.0, inf)`):
        Accepted range of time step values.
    rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
        Whether or not to rescale `out_proj` weights when initializing.
    use_cache (`bool`, *optional*, defaults to `True`):
        Whether or not the cache should be used.
    rms_norm (`bool`, *optional*, defaults to `True`):
        Whether to use RMS norm or not.
    chunk_size (`int`, *optional*, defaults to 256):
        Size of the chunks that will comprise the sequence.
    tie_word_embeddings (`bool`, *optional*, defaults to `False`):
        Whether to tie word embeddings or not.

Example:

```python
>>> from transformers import Mamba2Config, Mamba2Model
>>> # Initializing a Mamba2 configuration
>>> configuration = Mamba2Config()

>>> # Initializing a model (with random weights) from the configuration
>>> model = Mamba2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
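As a sketch of how the default dimensions above relate to each other (an assumed relation inferred from the defaults, not an API guarantee): the intermediate size is `expand * hidden_size`, and dividing it by `head_dim` recovers `num_heads`:

```python
from transformers import Mamba2Config

config = Mamba2Config()
intermediate_size = config.expand * config.hidden_size  # 2 * 4096 = 8192
print(intermediate_size // config.head_dim)             # 8192 // 64 = 128
print(config.num_heads)                                 # 128
```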
## Mamba2Model

The bare MAMBA2 Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.

Parameters:
    config ([`Mamba2Config`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
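A minimal sketch of the forward signature on a tiny randomly initialized configuration; the sizes are arbitrary assumptions, chosen so that `expand * hidden_size == num_heads * head_dim`:

```python
import torch
from transformers import Mamba2Config, Mamba2Model

config = Mamba2Config(num_hidden_layers=2, hidden_size=256, num_heads=8, head_dim=64, vocab_size=1000)
model = Mamba2Model(config)

input_ids = torch.randint(0, config.vocab_size, (1, 16))
outputs = model(input_ids=input_ids)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 16, 256])
```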
## Mamba2LMHeadModel

The MAMBA2 Model transformer with a language modeling head on top (linear layer with weights not tied to the input
embeddings).

This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.

Parameters:
    config ([`Mamba2Config`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# Custom Layers and Utilities

This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.

Most of these are only useful if you are studying the code of the models in the library.
## PyTorch custom modules

### pytorch_utils.Conv1D

1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).
Basically works like a linear layer but the weights are transposed.

Args:
    nf (`int`): The number of output features.
    nx (`int`): The number of input features.
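A small sketch of the transposed-weight convention:

```python
import torch
from transformers.pytorch_utils import Conv1D

layer = Conv1D(nf=12, nx=768)  # like nn.Linear(768, 12) but with transposed weights
x = torch.randn(2, 5, 768)     # (batch, seq_len, nx)
print(layer(x).shape)          # torch.Size([2, 5, 12])
print(layer.weight.shape)      # torch.Size([768, 12]), i.e. (nx, nf) -- transposed vs. nn.Linear
```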
### modeling_utils.PoolerStartLogits

Compute SQuAD start logits from sequence hidden states.

Args:
    config ([`PretrainedConfig`]):
        The config used by the model, will be used to grab the `hidden_size` of the model.

- forward
### modeling_utils.PoolerEndLogits

Compute SQuAD end logits from sequence hidden states.

Args:
    config ([`PretrainedConfig`]):
        The config used by the model, will be used to grab the `hidden_size` of the model and the `layer_norm_eps`
        to use.

- forward
### modeling_utils.PoolerAnswerClass

Compute SQuAD 2.0 answer class from classification and start tokens hidden states.

Args:
    config ([`PretrainedConfig`]):
        The config used by the model, will be used to grab the `hidden_size` of the model.

- forward

### modeling_utils.SquadHeadOutput

Base class for outputs of question answering models using a [`~modeling_utils.SQuADHead`].

Args:
    loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned if both `start_positions` and `end_positions` are provided):
        Classification loss as the sum of start token, end token (and is_impossible if provided) classification
        losses.
    start_top_log_probs (`torch.FloatTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided):
        Log probabilities for the top `config.start_n_top` start token possibilities (beam-search).
    start_top_index (`torch.LongTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided):
        Indices for the top `config.start_n_top` start token possibilities (beam-search).
    end_top_log_probs (`torch.FloatTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided):
        Log probabilities for the top `config.start_n_top * config.end_n_top` end token possibilities
        (beam-search).
    end_top_index (`torch.LongTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided):
        Indices for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search).
    cls_logits (`torch.FloatTensor` of shape `(batch_size,)`, *optional*, returned if `start_positions` or `end_positions` is not provided):
        Log probabilities for the `is_impossible` label of the answers.
### modeling_utils.SQuADHead

A SQuAD head inspired by XLNet.

Args:
    config ([`PretrainedConfig`]):
        The config used by the model, will be used to grab the `hidden_size` of the model and the `layer_norm_eps`
        to use.

- forward
### modeling_utils.SequenceSummary

Compute a single vector summary of a sequence hidden states.

Args:
    config ([`PretrainedConfig`]):
        The config used by the model. Relevant arguments in the config class of the model are (refer to the actual
        config class of your model for the default values it uses):

        - **summary_type** (`str`) -- The method to use to make this summary. Accepted values are:

            - `"last"` -- Take the last token hidden state (like XLNet)
            - `"first"` -- Take the first token hidden state (like BERT)
            - `"mean"` -- Take the mean of all tokens hidden states
            - `"cls_index"` -- Supply a Tensor of classification token position (GPT/GPT-2)
            - `"attn"` -- Not implemented for now, use multi-head attention

        - **summary_use_proj** (`bool`) -- Add a projection after the vector extraction.
        - **summary_proj_to_labels** (`bool`) -- If `True`, the projection outputs to `config.num_labels` classes
          (otherwise to `config.hidden_size`).
        - **summary_activation** (`Optional[str]`) -- Set to `"tanh"` to add a tanh activation to the output;
          any other string or `None` will add no activation.
        - **summary_first_dropout** (`float`) -- Optional dropout probability before the projection and activation.
        - **summary_last_dropout** (`float`) -- Optional dropout probability after the projection and activation.

- forward
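A minimal sketch, assuming a bare [`PretrainedConfig`] carrying the summary attributes listed above:

```python
import torch
from transformers import PretrainedConfig
from transformers.modeling_utils import SequenceSummary

config = PretrainedConfig(
    hidden_size=32,
    summary_type="first",          # take the first token hidden state
    summary_use_proj=True,
    summary_proj_to_labels=False,  # project to hidden_size, not num_labels
    summary_activation="tanh",
    summary_first_dropout=0.0,
    summary_last_dropout=0.0,
)
summary = SequenceSummary(config)

hidden_states = torch.randn(2, 7, 32)  # (batch, seq_len, hidden)
print(summary(hidden_states).shape)    # torch.Size([2, 32])
```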
## PyTorch helper functions

### pytorch_utils.apply_chunking_to_forward

This function chunks the `input_tensors` into smaller input tensor parts of size `chunk_size` over the dimension
`chunk_dim`. It then applies a layer `forward_fn` to each chunk independently to save memory.

If the `forward_fn` is independent across the `chunk_dim`, this function will yield the same result as directly
applying `forward_fn` to `input_tensors`.

Args:
    forward_fn (`Callable[..., torch.Tensor]`):
        The forward function of the model.
    chunk_size (`int`):
        The chunk size of a chunked tensor: `num_chunks = len(input_tensors[0]) / chunk_size`.
    chunk_dim (`int`):
        The dimension over which the `input_tensors` should be chunked.
    input_tensors (`Tuple[torch.Tensor]`):
        The input tensors of `forward_fn` which will be chunked.

Returns:
    `torch.Tensor`: A tensor of the same shape as `forward_fn` would have produced if applied directly.

Examples:

```python
# rename the usual forward() fn to forward_chunk()
def forward_chunk(self, hidden_states):
    hidden_states = self.decoder(hidden_states)
    return hidden_states

# implement a chunked forward function
def forward(self, hidden_states):
    return apply_chunking_to_forward(self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states)
```
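A self-contained sketch showing the equivalence when `forward_fn` is independent across the chunked dimension:

```python
import torch
from transformers.pytorch_utils import apply_chunking_to_forward

ff = torch.nn.Linear(8, 8)

def forward_fn(hidden_states):
    return ff(hidden_states)

hidden_states = torch.randn(4, 16, 8)  # (batch, seq_len, hidden)
# Chunk the sequence dimension (dim 1) into pieces of size 4
out = apply_chunking_to_forward(forward_fn, 4, 1, hidden_states)
assert torch.allclose(out, forward_fn(hidden_states), atol=1e-6)
```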
### pytorch_utils.find_pruneable_heads_and_indices

Finds the heads and their indices taking `already_pruned_heads` into account.

Args:
    heads (`List[int]`): List of the indices of heads to prune.
    n_heads (`int`): The number of heads in the model.
    head_size (`int`): The size of each head.
    already_pruned_heads (`Set[int]`): A set of already pruned heads.

Returns:
    `Tuple[Set[int], torch.LongTensor]`: A tuple with the indices of heads to prune taking `already_pruned_heads`
    into account and the indices of rows/columns to keep in the layer weight.
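A small sketch of its output:

```python
from transformers.pytorch_utils import find_pruneable_heads_and_indices

heads, index = find_pruneable_heads_and_indices(
    heads=[0, 2], n_heads=12, head_size=64, already_pruned_heads=set()
)
print(heads)        # {0, 2}
print(index.shape)  # torch.Size([640]): (12 - 2) * 64 row/column indices to keep
```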
### pytorch_utils.prune_layer

Prune a Conv1D or linear layer to keep only entries in index.

Used to remove heads.

Args:
    layer (`Union[torch.nn.Linear, Conv1D]`): The layer to prune.
    index (`torch.LongTensor`): The indices to keep in the layer.
    dim (`int`, *optional*): The dimension on which to keep the indices.

Returns:
    `torch.nn.Linear` or [`~pytorch_utils.Conv1D`]: The pruned layer as a new layer with `requires_grad=True`.
### pytorch_utils.prune_conv1d_layer

Prune a Conv1D layer to keep only entries in index. A Conv1D layer works like a linear layer (see e.g. BERT) but the
weights are transposed.

Used to remove heads.

Args:
    layer ([`~pytorch_utils.Conv1D`]): The layer to prune.
    index (`torch.LongTensor`): The indices to keep in the layer.
    dim (`int`, *optional*, defaults to 1): The dimension on which to keep the indices.

Returns:
    [`~pytorch_utils.Conv1D`]: The pruned layer as a new layer with `requires_grad=True`.
### pytorch_utils.prune_linear_layer

Prune a linear layer to keep only entries in index.

Used to remove heads.

Args:
    layer (`torch.nn.Linear`): The layer to prune.
    index (`torch.LongTensor`): The indices to keep in the layer.
    dim (`int`, *optional*, defaults to 0): The dimension on which to keep the indices.

Returns:
    `torch.nn.Linear`: The pruned layer as a new layer with `requires_grad=True`.
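A small sketch of pruning along the output dimension:

```python
import torch
from transformers.pytorch_utils import prune_linear_layer

layer = torch.nn.Linear(16, 8)
index = torch.tensor([0, 2, 4, 6])  # keep 4 of the 8 output rows
pruned = prune_linear_layer(layer, index, dim=0)
print(pruned.weight.shape)          # torch.Size([4, 16])
print(pruned.weight.requires_grad)  # True
```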