source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
[transfo-xl/transfo-xl-wt103](https://huggingface.co/transfo-xl/transfo-xl-wt103) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 267735):
Vocabulary size of the Transformer-XL model. Defines the number of different tokens that can be represented by the
|
350_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
Vocabulary size of the Transformer-XL model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`TransfoXLModel`] or [`TFTransfoXLModel`].
cutoffs (`List[int]`, *optional*, defaults to `[20000, 40000, 200000]`):
Cutoffs for the adaptive softmax.
d_model (`int`, *optional*, defaults to 1024):
Dimensionality of the model's hidden states.
d_embed (`int`, *optional*, defaults to 1024):
Dimensionality of the embeddings.
n_head (`int`, *optional*, defaults to 16):
|
350_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
d_embed (`int`, *optional*, defaults to 1024):
Dimensionality of the embeddings.
n_head (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
d_head (`int`, *optional*, defaults to 64):
Dimensionality of the model's heads.
d_inner (`int`, *optional*, defaults to 4096):
Inner dimension of the feed-forward (FF) layer.
div_val (`int`, *optional*, defaults to 4):
Divisor value for the adaptive input and softmax.
pre_lnorm (`boolean`, *optional*, defaults to `False`):
|
350_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
Divisor value for the adaptive input and softmax.
pre_lnorm (`boolean`, *optional*, defaults to `False`):
Whether or not to apply LayerNorm to the input instead of the output in the blocks.
n_layer (`int`, *optional*, defaults to 18):
Number of hidden layers in the Transformer encoder.
mem_len (`int`, *optional*, defaults to 1600):
Length of the retained previous hidden states (the memory).
clamp_len (`int`, *optional*, defaults to 1000):
Use the same positional embeddings after `clamp_len`.
|
350_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
clamp_len (`int`, *optional*, defaults to 1000):
Use the same positional embeddings after `clamp_len`.
same_length (`boolean`, *optional*, defaults to `True`):
Whether or not to use the same attention length for all tokens.
proj_share_all_but_first (`boolean`, *optional*, defaults to `True`):
`True` to share all but the first projection layers, `False` not to share.
attn_type (`int`, *optional*, defaults to 0):
Attention type. 0 for Transformer-XL, 1 for Shaw et al., 2 for Vaswani et al., 3 for Al Rfou et al.
|
350_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
Attention type. 0 for Transformer-XL, 1 for Shaw et al., 2 for Vaswani et al., 3 for Al Rfou et al.
sample_softmax (`int`, *optional*, defaults to -1):
Number of samples in the sampled softmax.
adaptive (`boolean`, *optional*, defaults to `True`):
Whether or not to use adaptive softmax.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
dropatt (`float`, *optional*, defaults to 0.0):
|
350_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
dropatt (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
untie_r (`boolean`, *optional*, defaults to `True`):
Whether or not to untie relative position biases.
init (`str`, *optional*, defaults to `"normal"`):
Parameter initializer to use.
init_range (`float`, *optional*, defaults to 0.01):
Parameters initialized by U(-init_range, init_range).
proj_init_std (`float`, *optional*, defaults to 0.01):
Parameters initialized by N(0, proj_init_std).
|
350_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
proj_init_std (`float`, *optional*, defaults to 0.01):
Parameters initialized by N(0, proj_init_std).
init_std (`float`, *optional*, defaults to 0.02):
Parameters initialized by N(0, init_std).
layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
The epsilon to use in the layer normalization layers.
eos_token_id (`int`, *optional*, defaults to 0):
End of stream token id.
Examples:
```python
>>> from transformers import TransfoXLConfig, TransfoXLModel
|
350_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlconfig
|
.md
|
>>> # Initializing a Transformer XL configuration
>>> configuration = TransfoXLConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = TransfoXLModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
350_5_9
|
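The example above builds a configuration with all defaults. As a minimal sketch (assuming a transformers version that still ships the deprecated Transformer-XL classes), the documented arguments can also be passed directly when instantiating the config; the values below are illustrative, not the pretrained `transfo-xl/transfo-xl-wt103` defaults:
```python
from transformers import TransfoXLConfig, TransfoXLModel

# Hypothetical smaller configuration, purely for illustration.
configuration = TransfoXLConfig(
    d_model=512,   # dimensionality of the hidden states
    d_embed=512,   # dimensionality of the embeddings
    n_head=8,      # attention heads per layer
    n_layer=6,     # number of hidden layers
    mem_len=256,   # length of the retained memory
)

model = TransfoXLModel(configuration)  # randomly initialized weights
print(model.config.d_model)            # 512
```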
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxltokenizer
|
.md
|
Construct a Transformer-XL tokenizer adapted from Vocab class in [the original
code](https://github.com/kimiyoung/transformer-xl). The Transformer-XL tokenizer is a word-level tokenizer (no
sub-word tokenization).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
special (`List[str]`, *optional*):
|
350_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxltokenizer
|
.md
|
this superclass for more information regarding those methods.
Args:
special (`List[str]`, *optional*):
A list of special tokens (to be treated by the original implementation of this tokenizer).
min_freq (`int`, *optional*, defaults to 0):
The minimum number of times a token has to be present in order to be kept in the vocabulary (otherwise it
will be mapped to `unk_token`).
max_size (`int`, *optional*):
The maximum size of the vocabulary. If left unset, it will default to the size of the vocabulary found
|
350_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxltokenizer
|
.md
|
The maximum size of the vocabulary. If left unset, it will default to the size of the vocabulary found
after excluding the tokens according to the `min_freq` rule.
lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to lowercase the input when tokenizing.
delimiter (`str`, *optional*):
The delimiter used between tokens.
vocab_file (`str`, *optional*):
File containing the vocabulary (from the original implementation).
pretrained_vocab_file (`str`, *optional*):
|
350_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxltokenizer
|
.md
|
File containing the vocabulary (from the original implementation).
pretrained_vocab_file (`str`, *optional*):
File containing the vocabulary as saved with the `save_pretrained()` method.
never_split (`List[str]`, *optional*):
List of tokens that should never be split. If no list is specified, will simply use the existing special
tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
|
350_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxltokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
eos_token (`str`, *optional*, defaults to `"<eos>"`):
The end of sequence token.
additional_special_tokens (`List[str]`, *optional*, defaults to `['<formula>']`):
A list of additional special tokens (for the HuggingFace functionality).
language (`str`, *optional*, defaults to `"en"`):
The language of this tokenizer (used for Moses preprocessing).
Methods: save_vocabulary
|
350_6_4
|
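As a brief, hedged illustration of the word-level tokenizer described above (assumes a transformers version that still includes the deprecated Transformer-XL classes, the `sacremoses` dependency, and that the `transfo-xl/transfo-xl-wt103` checkpoint is reachable; recent versions may additionally require explicitly opting in before loading the pickled vocabulary):
```python
from transformers import TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl/transfo-xl-wt103")

# Word-level tokenization: no sub-word pieces, out-of-vocabulary words map to <unk>.
tokens = tokenizer.tokenize("Hello , my dog is cute")
ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens)
print(ids)
```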
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxl-specific-outputs
|
.md
|
[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput
[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput
[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput
|
350_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxl-specific-outputs
|
.md
|
[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput
<frameworkcontent>
<pt>
|
350_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlmodel
|
.md
|
The bare Transformer-XL Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
350_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`TransfoXLConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
350_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
350_8_2
|
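A hedged sketch of how the memory controlled by `mem_len` carries across segments, using the `mems` input and the output attributes documented above (a randomly initialized toy model; assumes a transformers version that still ships the deprecated Transformer-XL classes):
```python
import torch
from transformers import TransfoXLConfig, TransfoXLModel

# Tiny illustrative configuration (not the pretrained defaults).
config = TransfoXLConfig(d_model=128, d_embed=128, n_head=4, d_head=32, n_layer=2, mem_len=64)
model = TransfoXLModel(config)
model.eval()

segment_1 = torch.randint(0, config.vocab_size, (1, 32))
segment_2 = torch.randint(0, config.vocab_size, (1, 32))

with torch.no_grad():
    out_1 = model(segment_1)                   # no memory yet
    out_2 = model(segment_2, mems=out_1.mems)  # reuse the cached hidden states

print(out_2.last_hidden_state.shape)  # (1, 32, 128)
```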
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxllmheadmodel
|
.md
|
The Transformer-XL Model with a language modeling head on top (adaptive softmax with weights tied to the adaptive
input embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
350_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxllmheadmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`TransfoXLConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
350_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxllmheadmodel
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
350_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlforsequenceclassification
|
.md
|
The Transformer-XL Model transformer with a sequence classification head on top (linear layer).
[`TransfoXLForSequenceClassification`] uses the last token in order to do the classification, as other causal
models (e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
350_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
350_10_1
|
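The last-token selection described in the chunk above can be sketched in plain PyTorch. This is an illustration of the idea (right-padded inputs, a defined `pad_token_id`), not the library's exact implementation:
```python
import torch

pad_token_id = 0
input_ids = torch.tensor([
    [5, 8, 9, 0, 0],   # 3 real tokens, then padding
    [7, 7, 7, 7, 2],   # no padding
])

# Index of the last non-padding token in each row.
sequence_lengths = (input_ids != pad_token_id).sum(dim=-1) - 1
print(sequence_lengths)  # tensor([2, 4])

# The classification head reads the hidden state at these positions.
batch_index = torch.arange(input_ids.shape[0])
hidden_states = torch.randn(2, 5, 16)                  # (batch, seq_len, hidden)
pooled = hidden_states[batch_index, sequence_lengths]  # (batch, hidden)
print(pooled.shape)  # torch.Size([2, 16])
```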
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
350_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#transfoxlforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`TransfoXLConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<tf>
|
350_10_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#tftransfoxlmodel
|
.md
|
No docstring available for TFTransfoXLModel
Methods: call
|
350_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#tftransfoxllmheadmodel
|
.md
|
No docstring available for TFTransfoXLLMHeadModel
Methods: call
|
350_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#tftransfoxlforsequenceclassification
|
.md
|
No docstring available for TFTransfoXLForSequenceClassification
Methods: call
</tf>
</frameworkcontent>
|
350_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/transfo-xl.md
|
https://huggingface.co/docs/transformers/en/model_doc/transfo-xl/#internal-layers
|
.md
|
No docstring available for AdaptiveEmbedding
No docstring available for TFAdaptiveEmbedding
|
350_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
351_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
351_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#deta
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only; we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip>
|
351_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#overview
|
.md
|
The DETA model was proposed in [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
DETA (short for Detection Transformers with Assignment) improves [Deformable DETR](deformable_detr) by replacing the one-to-one bipartite Hungarian matching loss
with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP.
|
351_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#overview
|
.md
|
The abstract from the paper is the following:
|
351_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#overview
|
.md
|
*Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison
|
351_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#overview
|
.md
|
and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum supervision (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional
|
351_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#overview
|
.md
|
under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of
|
351_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#overview
|
.md
|
show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture.*
|
351_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#overview
|
.md
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/deta_architecture.jpg"
alt="drawing" width="600"/>
<small> DETA overview. Taken from the <a href="https://arxiv.org/abs/2212.06137">original paper</a>. </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/jozhang97/DETA).
|
351_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA.
- Demo notebooks for DETA can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETA).
- Scripts for finetuning [`DetaForObjectDetection`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection).
|
351_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#resources
|
.md
|
- See also: [Object detection task guide](../tasks/object_detection).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
|
351_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
This is the configuration class to store the configuration of a [`DetaModel`]. It is used to instantiate a DETA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DETA
[SenseTime/deformable-detr](https://huggingface.co/SenseTime/deformable-detr) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
351_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `ResNetConfig()`):
The configuration of the backbone model.
backbone (`str`, *optional*):
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
|
351_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (`bool`, *optional*, defaults to `False`):
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
|
351_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
Whether to use pretrained weights for the backbone.
use_timm_backbone (`bool`, *optional*, defaults to `False`):
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers
library.
backbone_kwargs (`dict`, *optional*):
Keyword arguments to be passed to AutoBackbone when loading from a checkpoint
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
num_queries (`int`, *optional*, defaults to 900):
|
351_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
num_queries (`int`, *optional*, defaults to 900):
Number of object queries, i.e. detection slots. This is the maximal number of objects [`DetaModel`] can
detect in a single image. In case `two_stage` is set to `True`, we use `two_stage_num_proposals` instead.
d_model (`int`, *optional*, defaults to 256):
Dimension of the layers.
encoder_layers (`int`, *optional*, defaults to 6):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 6):
Number of decoder layers.
|
351_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 6):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
|
351_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimension of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimension of the "intermediate" (often named feed-forward) layer in encoder.
activation_function (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
|
351_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
|
351_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
The dropout ratio for activations inside the fully connected layer.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (`float`, *optional*, defaults to 1):
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
|
351_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (`bool`, *optional*, defaults to `False`):
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (`str`, *optional*, defaults to `"sine"`):
Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`.
|
351_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`.
class_cost (`float`, *optional*, defaults to 1):
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (`float`, *optional*, defaults to 5):
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (`float`, *optional*, defaults to 2):
|
351_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
giou_cost (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (`float`, *optional*, defaults to 1):
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (`float`, *optional*, defaults to 5):
|
351_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
bbox_loss_coefficient (`float`, *optional*, defaults to 5):
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (`float`, *optional*, defaults to 2):
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (`float`, *optional*, defaults to 0.1):
Relative classification weight of the 'no-object' class in the object detection loss.
num_feature_levels (`int`, *optional*, defaults to 5):
The number of input feature levels.
|
351_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
num_feature_levels (`int`, *optional*, defaults to 5):
The number of input feature levels.
encoder_n_points (`int`, *optional*, defaults to 4):
The number of sampled keys in each feature level for each attention head in the encoder.
decoder_n_points (`int`, *optional*, defaults to 4):
The number of sampled keys in each feature level for each attention head in the decoder.
two_stage (`bool`, *optional*, defaults to `True`):
|
351_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
two_stage (`bool`, *optional*, defaults to `True`):
Whether to apply a two-stage deformable DETR, where the region proposals, generated by a variant of
DETA, are further fed into the decoder for iterative bounding box refinement.
two_stage_num_proposals (`int`, *optional*, defaults to 300):
The number of region proposals to be generated, in case `two_stage` is set to `True`.
with_box_refine (`bool`, *optional*, defaults to `True`):
|
351_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
with_box_refine (`bool`, *optional*, defaults to `True`):
Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes
based on the predictions from the previous layer.
focal_alpha (`float`, *optional*, defaults to 0.25):
Alpha parameter in the focal loss.
assign_first_stage (`bool`, *optional*, defaults to `True`):
Whether to assign each prediction i to the highest overlapping ground truth object if the overlap is larger than a threshold of 0.7.
|
351_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
assign_second_stage (`bool`, *optional*, defaults to `True`):
Whether the second-stage assignment procedure should closely follow the first-stage assignment procedure.
disable_custom_kernels (`bool`, *optional*, defaults to `True`):
Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom
kernels are not supported by PyTorch ONNX export.
Examples:
```python
>>> from transformers import DetaConfig, DetaModel
|
351_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaconfig
|
.md
|
>>> # Initializing a DETA SenseTime/deformable-detr style configuration
>>> configuration = DetaConfig()
>>> # Initializing a model (with random weights) from the SenseTime/deformable-detr style configuration
>>> model = DetaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
351_4_17
|
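As with the default example above, the documented arguments can be overridden directly. A minimal sketch with illustrative values (assumes a transformers version that still includes DETA, e.g. v4.40.2):
```python
from transformers import DetaConfig, DetaModel

configuration = DetaConfig(
    num_queries=300,        # fewer detection slots than the documented default of 900
    two_stage=True,         # generate region proposals in a first stage
    with_box_refine=True,   # iterative bounding box refinement
    num_feature_levels=5,
)

model = DetaModel(configuration)  # randomly initialized weights
print(model.config.num_queries)   # 300
```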
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
Constructs a DETA image processor.
Args:
format (`str`, *optional*, defaults to `"coco_detection"`):
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (`bool`, *optional*, defaults to `True`):
Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be
overridden by the `do_resize` parameter in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
|
351_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`):
Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter
in the `preprocess` method. Available options are:
- `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`.
Do NOT keep the aspect ratio.
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
|
351_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
- `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting
the aspect ratio and keeping the shortest edge less or equal to `shortest_edge` and the longest edge
less or equal to `longest_edge`.
- `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the
aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to
`max_width`.
|
351_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
aspect ratio and keeping the height less or equal to `max_height` and the width less or equal to
`max_width`.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
|
351_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
`preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Controls whether to normalize the image. Can be overridden by the `do_normalize` parameter in the
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
|
351_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
`preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
|
351_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_convert_annotations (`bool`, *optional*, defaults to `True`):
Controls whether to convert the annotations to the format expected by the DETR model. Converts the
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
|
351_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`.
Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess`
method. If `True`, padding will be applied to the bottom and right of the image with zeros.
If `pad_size` is provided, the image will be padded to the specified dimensions.
|
351_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaimageprocessor
|
.md
|
If `pad_size` is provided, the image will be padded to the specified dimensions.
Otherwise, the image will be padded to the maximum height and width of the batch.
pad_size (`Dict[str, int]`, *optional*):
The size `{"height": int, "width" int}` to pad the images to. Must be larger than any image size
provided for preprocessing. If `pad_size` is not provided, images will be padded to the largest
height and width in the batch.
Methods: preprocess
- post_process_object_detection
|
351_5_8
|
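A hedged sketch of the resize/rescale/normalize/pad pipeline configured by the arguments above, using a dummy image so the shapes are visible (assumes a transformers version that still ships DETA, e.g. v4.40.2):
```python
import numpy as np
from PIL import Image
from transformers import DetaImageProcessor

# Dummy 480x640 RGB image instead of a real file, just to show the output shapes.
image = Image.fromarray(np.uint8(np.random.rand(480, 640, 3) * 255))

processor = DetaImageProcessor(
    size={"shortest_edge": 800, "longest_edge": 1333},  # documented default
    do_pad=True,                                        # pad bottom/right with zeros
)

inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 800, 1066])
print(inputs["pixel_mask"].shape)    # padding mask for the batch
```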
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detamodel
|
.md
|
The bare DETA Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without
any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
351_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DetaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
351_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detamodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
351_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaforobjectdetection
|
.md
|
DETA Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks
such as COCO detection.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
351_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaforobjectdetection
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`DetaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
351_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deta.md
|
https://huggingface.co/docs/transformers/en/model_doc/deta/#detaforobjectdetection
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
351_7_2
|
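A hedged end-to-end sketch of object detection with the heads described above, including post-processing into scores, labels, and boxes. The checkpoint name is an assumption (one of the DETA checkpoints hosted on the Hub), and the example requires a transformers version that still includes DETA (e.g. v4.40.2):
```python
import requests
import torch
from PIL import Image
from transformers import DetaForObjectDetection, DetaImageProcessor

checkpoint = "jozhang97/deta-resnet-50"  # assumed checkpoint name
processor = DetaImageProcessor.from_pretrained(checkpoint)
model = DetaForObjectDetection.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) in absolute pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```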
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
352_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
352_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
StarCoder2 is a family of open LLMs for code and comes in 3 different sizes with 3B, 7B and 15B parameters. The flagship StarCoder2-15B model is trained on over 4 trillion tokens and 600+ programming languages from The Stack v2. All models use Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and were trained using the Fill-in-the-Middle objective. The models have been released with the paper [StarCoder 2 and The Stack v2: The Next
|
352_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
using the Fill-in-the-Middle objective. The models have been released with the paper [StarCoder 2 and The Stack v2: The Next Generation](https://arxiv.org/abs/2402.19173) by Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry
|
352_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten
|
352_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries.
|
352_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
The abstract of the paper is the following:
|
352_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
> The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training
|
352_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large
|
352_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2- 15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder- 33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource
|
352_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#overview
|
.md
|
languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data.
|
352_1_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#license
|
.md
|
The models are licensed under the [BigCode OpenRAIL-M v1 license agreement](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
|
352_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#usage-tips
|
.md
|
The StarCoder2 models can be found in the [HuggingFace hub](https://huggingface.co/collections/bigcode/starcoder2-65de6da6e87db3383572be1a). You can find some examples for inference and fine-tuning in StarCoder2's [GitHub repo](https://github.com/bigcode-project/starcoder2).
These ready-to-use checkpoints can be downloaded and used via the HuggingFace Hub:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
|
352_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#usage-tips
|
.md
|
>>> model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-7b", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")
>>> prompt = "def print_hello_world():"
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=10, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'def print_hello_world():\n print("Hello World!")\n\ndef print'
```
|
352_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
This is the configuration class to store the configuration of a [`Starcoder2Model`]. It is used to instantiate a
Starcoder2 model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b) model.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
352_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 49152):
Vocabulary size of the Starcoder2 model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`Starcoder2Model`].
hidden_size (`int`, *optional*, defaults to 3072):
Dimension of the hidden representations.
|
352_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
hidden_size (`int`, *optional*, defaults to 3072):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 12288):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 30):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 24):
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (`int`, *optional*, defaults to 2):
|
352_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
num_key_value_heads (`int`, *optional*, defaults to 2):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
|
352_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by mean-pooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If not specified, it will default to `8`.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 4096):
|
352_4_4
|
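The mean-pooling conversion described above can be sketched in plain PyTorch with the documented StarCoder2 head counts. This is an illustration of the grouping idea with random weights, not the library's conversion script:
```python
import torch

num_attention_heads = 24   # documented Starcoder2 default
num_key_value_heads = 2    # documented Starcoder2 default
hidden_size = 3072
head_dim = hidden_size // num_attention_heads  # 128

# A multi-head key projection: one key head per attention head.
mha_key_heads = torch.randn(num_attention_heads, head_dim, hidden_size)

# Group the 24 key heads into 2 groups of 12 and mean-pool each group.
group_size = num_attention_heads // num_key_value_heads
gqa_key_heads = mha_key_heads.reshape(
    num_key_value_heads, group_size, head_dim, hidden_size
).mean(dim=1)

print(gqa_key_heads.shape)  # torch.Size([2, 128, 3072])
```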
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model might ever be used with. Starcoder2's sliding window attention
allows sequences of up to 4096*32 tokens.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
norm_epsilon (`float`, *optional*, defaults to 1e-05):
Epsilon value for the layer norm
use_cache (`bool`, *optional*, defaults to `True`):
|
352_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
Epsilon value for the layer norm
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
bos_token_id (`int`, *optional*, defaults to 50256):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 50256):
The id of the "end-of-sequence" token.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
|
352_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and you expect the model to work on a longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
|
352_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
|
352_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to the value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
|
352_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/starcoder2.md
|
https://huggingface.co/docs/transformers/en/model_doc/starcoder2/#starcoder2config
|
.md
|
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
|
352_4_10
|
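Putting the documented arguments together, a minimal sketch of building a Starcoder2 configuration with an explicit `rope_scaling` dictionary. The sizes are deliberately tiny and the scaling values are hypothetical, not the `bigcode/starcoder2` defaults:
```python
from transformers import Starcoder2Config, Starcoder2Model

configuration = Starcoder2Config(
    hidden_size=256,           # small illustrative values, not the real defaults
    intermediate_size=1024,
    num_hidden_layers=4,
    num_attention_heads=8,
    num_key_value_heads=2,     # grouped query attention
    max_position_embeddings=8192,
    rope_theta=10000.0,
    rope_scaling={"rope_type": "yarn", "factor": 2.0},  # hypothetical scaling
)

model = Starcoder2Model(configuration)  # randomly initialized weights
print(model.config.rope_scaling)
```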