source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodefortokenclassification
|
.md
|
GPT_BIGCODE Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
228_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodefortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`GPTBigCodeConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
228_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_bigcode.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode/#gptbigcodefortokenclassification
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
228_9_2
|
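As a minimal usage sketch (not part of the original docs): [`GPTBigCodeForTokenClassification`] can be driven like any other token classification model. The checkpoint name below is only an illustrative assumption, and the classification head on top is randomly initialized until it is fine-tuned.
```python
>>> import torch
>>> from transformers import AutoTokenizer, GPTBigCodeForTokenClassification

>>> # Hypothetical checkpoint name; the token classification head is freshly initialized.
>>> checkpoint = "bigcode/gpt_bigcode-santacoder"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = GPTBigCodeForTokenClassification.from_pretrained(checkpoint, num_labels=5)

>>> inputs = tokenizer("def hello_world():", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_labels = logits.argmax(dim=-1)  # one label id per input token
```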
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
229_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
229_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#overview
|
.md
|
The Nyströmformer model was proposed in [*Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention*](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn
Fung, Yin Li, and Vikas Singh.
The abstract from the paper is the following:
*Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component
|
229_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#overview
|
.md
|
*Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component
that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or
dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the
input sequence length has limited its application to longer sequences -- a topic being actively studied in the
|
229_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#overview
|
.md
|
input sequence length has limited its application to longer sequences -- a topic being actively studied in the
community. To address this limitation, we propose Nyströmformer -- a model that exhibits favorable scalability as a
function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention
with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of
|
229_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#overview
|
.md
|
with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of
tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard
sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than
standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs
|
229_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#overview
|
.md
|
standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs
favorably relative to other efficient self-attention methods. Our code is available at this https URL.*
This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/Nystromformer).
|
229_1_4
|
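To make the idea in the abstract concrete, here is a rough, self-contained sketch of landmark-based (Nyström) attention for a single head. It is a simplification, not the library implementation: landmarks are segment means of the queries and keys, and `torch.linalg.pinv` stands in for the iterative Moore-Penrose inverse used in the actual model.
```python
import torch

def nystrom_attention(q, k, v, num_landmarks=64):
    """Approximate softmax attention with Nyström landmarks (single head, no masking)."""
    n, d = q.shape
    scale = d ** -0.5

    # Landmarks: segment means over the sequence (assumes n is divisible by num_landmarks).
    q_land = q.reshape(num_landmarks, n // num_landmarks, d).mean(dim=1)
    k_land = k.reshape(num_landmarks, n // num_landmarks, d).mean(dim=1)

    kernel_1 = torch.softmax(q @ k_land.T * scale, dim=-1)       # (n, m)
    kernel_2 = torch.softmax(q_land @ k_land.T * scale, dim=-1)  # (m, m)
    kernel_3 = torch.softmax(q_land @ k.T * scale, dim=-1)       # (m, n)

    # The full n x n attention matrix is never materialized, hence the O(n) scaling.
    return kernel_1 @ torch.linalg.pinv(kernel_2) @ (kernel_3 @ v)

out = nystrom_attention(torch.randn(512, 64), torch.randn(512, 64), torch.randn(512, 64))
print(out.shape)  # torch.Size([512, 64])
```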
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
229_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
This is the configuration class to store the configuration of a [`NystromformerModel`]. It is used to instantiate
a Nystromformer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Nystromformer
[uw-madison/nystromformer-512](https://huggingface.co/uw-madison/nystromformer-512) architecture.
|
229_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
[uw-madison/nystromformer-512](https://huggingface.co/uw-madison/nystromformer-512) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30000):
Vocabulary size of the Nystromformer model. Defines the number of different tokens that can be represented
by the `input_ids` passed when calling [`NystromformerModel`].
|
229_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
by the `input_ids` passed when calling [`NystromformerModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
|
229_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 3072):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
|
229_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
229_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`NystromformerModel`].
segment_means_seq_len (`int`, *optional*, defaults to 64):
Sequence length used in segment-means.
num_landmarks (`int`, *optional*, defaults to 64):
|
229_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
Sequence length used in segment-means.
num_landmarks (`int`, *optional*, defaults to 64):
The number of landmark (or Nystrom) points to use in Nystrom approximation of the softmax self-attention
matrix.
conv_kernel_size (`int`, *optional*, defaults to 65):
The kernel size of depthwise convolution used in Nystrom approximation.
inv_coeff_init_option (`bool`, *optional*, defaults to `False`):
Whether or not to use exact coefficient computation for the initial values for the iterative method of
|
229_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
Whether or not to use exact coefficient computation for the initial values for the iterative method of
calculating the Moore-Penrose inverse of a matrix.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
Example:
```python
>>> from transformers import NystromformerModel, NystromformerConfig
|
229_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerconfig
|
.md
|
>>> # Initializing a Nystromformer uw-madison/nystromformer-512 style configuration
>>> configuration = NystromformerConfig()
>>> # Initializing a model from the uw-madison/nystromformer-512 style configuration
>>> model = NystromformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
229_3_8
|
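Beyond the default example above, the approximation-specific arguments can also be overridden when building the configuration. The values below are arbitrary illustrations, not recommended settings:
```python
>>> from transformers import NystromformerConfig, NystromformerModel

>>> # Illustrative values only: fewer landmarks, a smaller depthwise convolution kernel,
>>> # and a longer maximum sequence length.
>>> configuration = NystromformerConfig(num_landmarks=32, conv_kernel_size=33, max_position_embeddings=1024)
>>> model = NystromformerModel(configuration)
>>> model.config.num_landmarks
32
```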
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformermodel
|
.md
|
The bare Nyströmformer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
|
229_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformermodel
|
.md
|
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
229_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerformaskedlm
|
.md
|
Nyströmformer Model with a `language modeling` head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
229_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerformaskedlm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
229_5_1
|
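A short masked language modeling sketch (an assumption about typical usage, not taken from the original docs). It assumes the `uw-madison/nystromformer-512` checkpoint referenced above and a tokenizer that uses a `[MASK]` token:
```python
>>> import torch
>>> from transformers import AutoTokenizer, NystromformerForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
>>> model = NystromformerForMaskedLM.from_pretrained("uw-madison/nystromformer-512")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Take the most likely token at the masked position.
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_id)
```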
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerforsequenceclassification
|
.md
|
Nyströmformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
|
229_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerforsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
229_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerformultiplechoice
|
.md
|
Nyströmformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output
and a softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
|
229_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerformultiplechoice
|
.md
|
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
229_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerfortokenclassification
|
.md
|
Nyströmformer Model with a token classification head on top (a linear layer on top of the hidden-states output)
e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
|
229_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerfortokenclassification
|
.md
|
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
229_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerforquestionanswering
|
.md
|
Nyströmformer Model with a span classification head on top for extractive question-answering tasks like SQuAD
(linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
229_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nystromformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/nystromformer/#nystromformerforquestionanswering
|
.md
|
behavior.
Parameters:
config ([`NystromformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
229_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
230_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
230_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpn
|
.md
|
<Tip>
This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes to fix in the future. If you see something strange, file a [GitHub Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title).
</Tip>
|
230_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#overview
|
.md
|
The GLPN model was proposed in [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
GLPN combines [SegFormer](segformer)'s hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably
less computational complexity.
|
230_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#overview
|
.md
|
less computational complexity.
The abstract from the paper is the following:
|
230_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#overview
|
.md
|
*Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated
|
230_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#overview
|
.md
|
encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details. In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less
|
230_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#overview
|
.md
|
In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our
|
230_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#overview
|
.md
|
V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models.*
|
230_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"
alt="drawing" width="600"/>
<small> Summary of the approach. Taken from the <a href="https://arxiv.org/abs/2201.07436" target="_blank">original paper</a>. </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/vinvino02/GLPDepth).
|
230_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GLPN.
- Demo notebooks for [`GLPNForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/GLPN).
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
|
230_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
This is the configuration class to store the configuration of a [`GLPNModel`]. It is used to instantiate a GLPN
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the GLPN
[vinvino02/glpn-kitti](https://huggingface.co/vinvino02/glpn-kitti) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
230_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
num_encoder_blocks (`int`, *optional*, defaults to 4):
The number of encoder blocks (i.e. stages in the Mix Transformer encoder).
depths (`List[int]`, *optional*, defaults to `[2, 2, 2, 2]`):
The number of layers in each encoder block.
|
230_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
depths (`List[int]`, *optional*, defaults to `[2, 2, 2, 2]`):
The number of layers in each encoder block.
sr_ratios (`List[int]`, *optional*, defaults to `[8, 4, 2, 1]`):
Sequence reduction ratios in each encoder block.
hidden_sizes (`List[int]`, *optional*, defaults to `[32, 64, 160, 256]`):
Dimension of each of the encoder blocks.
patch_sizes (`List[int]`, *optional*, defaults to `[7, 3, 3, 3]`):
Patch size before each encoder block.
strides (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`):
|
230_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
Patch size before each encoder block.
strides (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`):
Stride before each encoder block.
num_attention_heads (`List[int]`, *optional*, defaults to `[1, 2, 5, 8]`):
Number of attention heads for each attention layer in each block of the Transformer encoder.
mlp_ratios (`List[int]`, *optional*, defaults to `[4, 4, 4, 4]`):
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
|
230_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
230_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
drop_path_rate (`float`, *optional*, defaults to 0.1):
|
230_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
drop_path_rate (`float`, *optional*, defaults to 0.1):
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
decoder_hidden_size (`int`, *optional*, defaults to 64):
The dimension of the decoder.
max_depth (`int`, *optional*, defaults to 10):
The maximum depth of the decoder.
head_in_index (`int`, *optional*, defaults to -1):
|
230_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
The maximum depth of the decoder.
head_in_index (`int`, *optional*, defaults to -1):
The index of the features to use in the head.
Example:
```python
>>> from transformers import GLPNModel, GLPNConfig
|
230_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnconfig
|
.md
|
>>> # Initializing a GLPN vinvino02/glpn-kitti style configuration
>>> configuration = GLPNConfig()
>>> # Initializing a model from the vinvino02/glpn-kitti style configuration
>>> model = GLPNModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
230_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnfeatureextractor
|
.md
|
No docstring available for GLPNFeatureExtractor
Methods: __call__
|
230_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnimageprocessor
|
.md
|
Constructs a GLPN image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions, rounding them down to the closest multiple of
`size_divisor`. Can be overridden by `do_resize` in `preprocess`.
size_divisor (`int`, *optional*, defaults to 32):
When `do_resize` is `True`, images are resized so their height and width are rounded down to the closest
multiple of `size_divisor`. Can be overridden by `size_divisor` in `preprocess`.
|
230_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnimageprocessor
|
.md
|
multiple of `size_divisor`. Can be overridden by `size_divisor` in `preprocess`.
resample (`PIL.Image` resampling filter, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.). Can be
overridden by `do_rescale` in `preprocess`.
Methods: preprocess
|
230_6_1
|
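A small sketch of the resizing behaviour described above (not from the original docs): with the defaults, both spatial dimensions are rounded down to the nearest multiple of `size_divisor` and pixel values are rescaled to floats.
```python
>>> import numpy as np
>>> from PIL import Image
>>> from transformers import GLPNImageProcessor

>>> processor = GLPNImageProcessor()  # do_resize=True, size_divisor=32, do_rescale=True by default

>>> image = Image.fromarray(np.random.randint(0, 256, (481, 641, 3), dtype=np.uint8))
>>> pixel_values = processor(images=image, return_tensors="pt").pixel_values
>>> pixel_values.shape  # 481 x 641 rounded down to the nearest multiples of 32
torch.Size([1, 3, 480, 640])
```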
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnmodel
|
.md
|
The bare GLPN encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`GLPNConfig`]): Model configuration class with all the parameters of the model.
|
230_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnmodel
|
.md
|
behavior.
Parameters:
config ([`GLPNConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
230_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnfordepthestimation
|
.md
|
GLPN Model transformer with a lightweight depth estimation head on top e.g. for KITTI, NYUv2.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`GLPNConfig`]): Model configuration class with all the parameters of the model.
|
230_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/glpn.md
|
https://huggingface.co/docs/transformers/en/model_doc/glpn/#glpnfordepthestimation
|
.md
|
behavior.
Parameters:
config ([`GLPNConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
230_8_1
|
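A minimal inference sketch for [`GLPNForDepthEstimation`], assuming the `vinvino02/glpn-kitti` checkpoint mentioned in the configuration section (the image URL is just an example):
```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import GLPNImageProcessor, GLPNForDepthEstimation

>>> processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-kitti")
>>> model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     predicted_depth = model(**inputs).predicted_depth  # (batch, height, width), one depth value per pixel
```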
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
231_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
231_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#overview
|
.md
|
The Gemma model was proposed in [Gemma: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/gemma-open-models/) by Gemma Team, Google.
Gemma models are trained on 6T tokens and released in two versions, 2B and 7B.
The abstract from the paper is the following:
|
231_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#overview
|
.md
|
*This work introduces Gemma, a new family of open language models demonstrating strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of
|
231_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#overview
|
.md
|
we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of our model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations*
|
231_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#overview
|
.md
|
Tips:
- The original checkpoints can be converted using the conversion script `src/transformers/models/gemma/convert_gemma_weights_to_hf.py`
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Younes Belkada](https://huggingface.co/ybelkada), [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi), [Pedro Cuenca](https://huggingface.co/pcuenq).
|
231_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
This is the configuration class to store the configuration of a [`GemmaModel`]. It is used to instantiate a Gemma
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Gemma-7B.
e.g. [google/gemma-7b](https://huggingface.co/google/gemma-7b)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
231_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 256000):
Vocabulary size of the Gemma model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`GemmaModel`].
hidden_size (`int`, *optional*, defaults to 3072):
Dimension of the hidden representations.
|
231_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 3072):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 24576):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 28):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*, defaults to 16):
|
231_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
num_key_value_heads (`int`, *optional*, defaults to 16):
This is the number of key_value heads that should be used to implement Grouped Query Attention (GQA). If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
|
231_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by mean-pooling all the original heads within that group (see the sketch after the configuration example below).
For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will
default to `num_attention_heads`.
head_dim (`int`, *optional*, defaults to 256):
The attention head dimension.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
|
231_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
The attention head dimension.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
The legacy activation function. It is overwritten by the `hidden_activation`.
hidden_activation (`str` or `function`, *optional*):
The non-linear activation function (function or string) in the decoder. Will default to `"gelu_pytorch_tanh"`
if not specified. `"gelu_pytorch_tanh"` uses an approximation of the `"gelu"` activation function.
|
231_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
if not specified. `"gelu_pytorch_tanh"` uses an approximation of the `"gelu"` activation function.
max_position_embeddings (`int`, *optional*, defaults to 8192):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
|
231_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*, defaults to 0):
Padding token id.
eos_token_id (`int`, *optional*, defaults to 1):
End of stream token id.
bos_token_id (`int`, *optional*, defaults to 2):
|
231_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
eos_token_id (`int`, *optional*, defaults to 1):
End of stream token id.
bos_token_id (`int`, *optional*, defaults to 2):
Beginning of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `True`):
Whether to tie the input and output word embeddings.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
attention_bias (`bool`, *optional*, defaults to `False`):
|
231_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
The base period of the RoPE embeddings.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
```python
>>> from transformers import GemmaModel, GemmaConfig
>>> # Initializing a Gemma gemma-7b style configuration
>>> configuration = GemmaConfig()
|
231_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaconfig
|
.md
|
>>> # Initializing a Gemma gemma-7b style configuration
>>> configuration = GemmaConfig()
>>> # Initializing a model from the gemma-7b style configuration
>>> model = GemmaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
231_2_10
|
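The mean-pooling conversion mentioned under `num_key_value_heads` above can be sketched as follows. This is an illustrative stand-alone snippet, assuming the key/value projection weight has shape `(num_heads * head_dim, hidden_size)` with heads laid out contiguously; it is not the conversion script shipped with the library.
```python
import torch

def meanpool_kv_heads(weight, num_heads, num_kv_heads, head_dim):
    """Turn an MHA key/value projection into a GQA one by mean-pooling each group of heads."""
    hidden_size = weight.shape[1]
    group_size = num_heads // num_kv_heads
    grouped = weight.view(num_kv_heads, group_size, head_dim, hidden_size)
    return grouped.mean(dim=1).reshape(num_kv_heads * head_dim, hidden_size)

# Example: 16 key/value heads of dimension 256 pooled down to 4 grouped heads.
k_proj = torch.randn(16 * 256, 3072)
k_proj_gqa = meanpool_kv_heads(k_proj, num_heads=16, num_kv_heads=4, head_dim=256)
print(k_proj_gqa.shape)  # torch.Size([1024, 3072])
```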
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizer
|
.md
|
Construct a Gemma tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is
no padding token in the original model.
Args:
vocab_file (`str`):
Path to the vocabulary file.
unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<bos>"`):
|
231_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizer
|
.md
|
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<bos>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<eos>"`):
The end of sequence token.
pad_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<pad>"`):
A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by
|
231_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizer
|
.md
|
A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by
attention mechanisms or loss computation.
sp_model_kwargs (`Dict[str, Any]`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
|
231_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizer
|
.md
|
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
|
231_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizer
|
.md
|
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
|
231_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizer
|
.md
|
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Gemma should be used.
spaces_between_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to add spaces between special tokens.
|
231_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizerfast
|
.md
|
Construct a fast Gemma tokenizer. Based on byte-level Byte-Pair-Encoding.
This notably uses ByteFallback and no prefix space. Normalization is applied to replace `" "` with `"▁"`.
```python
>>> from transformers import GemmaTokenizerFast
|
231_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizerfast
|
.md
|
>>> tokenizer = GemmaTokenizerFast.from_pretrained("hf-internal-testing/dummy-gemma")
>>> tokenizer.encode("Hello this is a test")
[2, 4521, 736, 603, 476, 2121]
```
If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the tokenizer, or
call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the
values of the first token and final token of an encoded sequence will not be correct). For more details, check out
|
231_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizerfast
|
.md
|
values of the first token and final token of an encoded sequence will not be correct). For more details, check out
the [post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`, *optional*):
|
231_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizerfast
|
.md
|
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`, *optional*):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that
contains the vocabulary necessary to instantiate a tokenizer.
tokenizer_file (`str`, *optional*):
[tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that
contains everything needed to load the tokenizer.
|
231_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizerfast
|
.md
|
contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
231_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizerfast
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<bos>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<eos>"`):
The end of sequence token.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The padding token
|
231_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmatokenizerfast
|
.md
|
The end of sequence token.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The padding token
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
|
231_4_6
|
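As a small sketch of the `update_post_processor()` note above (using the same `hf-internal-testing/dummy-gemma` checkpoint as the encoding example): after toggling `add_eos_token`, refreshing the post-processor makes sure every encoded sequence ends with the end-of-sequence id.
```python
>>> from transformers import GemmaTokenizerFast

>>> tokenizer = GemmaTokenizerFast.from_pretrained("hf-internal-testing/dummy-gemma")
>>> tokenizer.add_eos_token = True       # also append <eos> to every encoded sequence
>>> tokenizer.update_post_processor()    # rebuild the post-processing template
>>> ids = tokenizer.encode("Hello this is a test")
>>> ids[0] == tokenizer.bos_token_id and ids[-1] == tokenizer.eos_token_id
True
```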
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmamodel
|
.md
|
The bare Gemma Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
231_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`GemmaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
231_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmamodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`GemmaDecoderLayer`]
Args:
config: GemmaConfig
Methods: forward
|
231_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaforcausallm
|
.md
|
No docstring available for GemmaForCausalLM
Methods: forward
|
231_6_0
|
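Since no docstring is rendered for [`GemmaForCausalLM`], here is a minimal generation sketch. It assumes access to the gated `google/gemma-7b` checkpoint mentioned in the configuration section; any other Gemma checkpoint works the same way.
```python
>>> from transformers import AutoTokenizer, GemmaForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
>>> model = GemmaForCausalLM.from_pretrained("google/gemma-7b")

>>> inputs = tokenizer("The capital of France is", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=10)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```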
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaforsequenceclassification
|
.md
|
The Gemma Model transformer with a sequence classification head on top (linear layer).
[`GemmaForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
231_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
231_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
231_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmaforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`GemmaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
231_7_3
|
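To illustrate the pooling rule described above (find the last token that is not a padding token in each row), here is a tiny stand-alone sketch; it mirrors the idea rather than the exact library code.
```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[5, 8, 3, 0, 0],
                          [7, 2, 9, 4, 6]])

# Index of the last non-padding token in each row; its hidden state feeds the classification head.
last_token_index = (input_ids != pad_token_id).int().cumsum(dim=-1).argmax(dim=-1)
print(last_token_index)  # tensor([2, 4])
```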
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmafortokenclassification
|
.md
|
The Gemma Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
231_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmafortokenclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`GemmaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
|
231_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gemma.md
|
https://huggingface.co/docs/transformers/en/model_doc/gemma/#gemmafortokenclassification
|
.md
|
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
231_8_2
|