source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | Whether the model is an assistant (draft) model.
num_assistant_tokens (`int`, *optional*, defaults to 20):
Defines the number of _speculative tokens_ that shall be generated by the assistant model before being
checked by the target model at each iteration. Higher values for `num_assistant_tokens` make the generation
more _speculative_: if the assistant model is performant, larger speed-ups can be reached; if the assistant
model requires lots of corrections, lower speed-ups are reached. | 452_2_43 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | model requires lots of corrections, lower speed-ups are reached.
num_assistant_tokens_schedule (`str`, *optional*, defaults to `"constant"`):
Defines the schedule at which max assistant tokens shall be changed during inference.
- `"heuristic"`: When all speculative tokens are correct, increase `num_assistant_tokens` by 2 else
reduce by 1. `num_assistant_tokens` value is persistent over multiple generation calls with the same assistant model. | 452_2_44 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | reduce by 1. `num_assistant_tokens` value is persistent over multiple generation calls with the same assistant model.
- `"heuristic_transient"`: Same as `"heuristic"` but `num_assistant_tokens` is reset to its initial value after each generation call.
- `"constant"`: `num_assistant_tokens` stays unchanged during generation
assistant_confidence_threshold (`float`, *optional*, defaults to 0.4): | 452_2_45 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | assistant_confidence_threshold (`float`, *optional*, defaults to 0.4):
The confidence threshold for the assistant model. If the assistant model's confidence in its prediction for the current token is lower
than this threshold, the assistant model stops the current token generation iteration, even if the number of _speculative tokens_ | 452_2_46 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | (defined by `num_assistant_tokens`) is not yet reached. The assistant's confidence threshold is adjusted throughout the speculative iterations to reduce the number of unnecessary draft and target forward passes, biased towards avoiding false negatives.
`assistant_confidence_threshold` value is persistent over multiple generation calls with the same assistant model.
It is an unsupervised version of the dynamic speculation lookahead | 452_2_47 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | It is an unsupervised version of the dynamic speculation lookahead
from [Dynamic Speculation Lookahead Accelerates Speculative Decoding of Large Language Models](https://arxiv.org/abs/2405.04304).
prompt_lookup_num_tokens (`int`, *optional*):
The number of tokens to be output as candidate tokens.
max_matching_ngram_size (`int`, *optional*):
The maximum ngram size to be considered for matching in the prompt. Defaults to 2 if not provided.
assistant_early_exit(`int`, *optional*): | 452_2_48 |
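These assistant parameters are typically passed straight to [`~generation.GenerationMixin.generate`] together with an `assistant_model`. Below is a minimal sketch of assisted (speculative) decoding; the checkpoint names are placeholders for any target/draft pair that shares a tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints: any target model plus a smaller draft model with the same tokenizer.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
target = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
assistant = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = target.generate(
    **inputs,
    assistant_model=assistant,                  # enables assisted (speculative) decoding
    num_assistant_tokens=20,                    # speculative tokens drafted per iteration
    num_assistant_tokens_schedule="heuristic",  # adapt the draft length across iterations
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Prompt lookup decoding is enabled the same way, by passing `prompt_lookup_num_tokens` instead of an `assistant_model`.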
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | assistant_early_exit(`int`, *optional*):
If set to a positive integer, early exit of the model will be used as an assistant. Can only be used with
models that support early exit (i.e. models where logits from intermediate layers can be interpreted by the LM head).
assistant_lookbehind(`int`, *optional*, defaults to 10):
If set to a positive integer, the re-encoding process will additionally consider the last `assistant_lookbehind` assistant tokens | 452_2_49 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | to correctly align tokens. Can only be used with different tokenizers in speculative decoding.
See this [blog](https://huggingface.co/blog/universal_assisted_generation) for more details.
target_lookbehind(`int`, *optional*, defaults to 10):
If set to a positive integer, the re-encoding process will additionally consider the last `target_lookbehind` target tokens
to correctly align tokens. Can only be used with different tokenizers in speculative decoding. | 452_2_50 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | to correctly align tokens. Can only be used with different tokenizers in speculative decoding.
See this [blog](https://huggingface.co/blog/universal_assisted_generation) for more details.
> Parameters related to performances and compilation
compile_config (CompileConfig, *optional*):
If using a static cache, this controls how `generate` will `compile` the forward pass for performance
gains.
> Wild card
generation_kwargs: | 452_2_51 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig | .md | gains.
> Wild card
generation_kwargs:
Additional generation kwargs will be forwarded to the `generate` function of the model. Kwargs that are not
present in `generate`'s signature will be used in the model forward pass.
- from_pretrained
- from_model_config
- save_pretrained
- update
- validate
- get_generation_mode | 452_2_52 |
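The methods listed above cover the usual life cycle of a `GenerationConfig`. A minimal sketch; the checkpoint name is a placeholder, and `from_model_config` is used so no model weights are needed.
```python
from transformers import AutoConfig, GenerationConfig

model_config = AutoConfig.from_pretrained("openai-community/gpt2")  # placeholder checkpoint
generation_config = GenerationConfig.from_model_config(model_config)

generation_config.update(max_new_tokens=64, do_sample=True, top_p=0.9)
generation_config.validate()                    # warns/raises on inconsistent settings
print(generation_config.get_generation_mode())  # e.g. GenerationMode.SAMPLE

generation_config.save_pretrained("my_generation_config")  # writes generation_config.json
reloaded = GenerationConfig.from_pretrained("my_generation_config")
```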
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationmixin | .md | A class containing all functions for auto-regressive text generation, to be used as a mixin in [`PreTrainedModel`].
The class exposes [`~generation.GenerationMixin.generate`], which can be used for:
- *greedy decoding* if `num_beams=1` and `do_sample=False`
- *contrastive search* if `penalty_alpha>0` and `top_k>1`
- *multinomial sampling* if `num_beams=1` and `do_sample=True`
- *beam-search decoding* if `num_beams>1` and `do_sample=False` | 452_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationmixin | .md | - *multinomial sampling* if `num_beams=1` and `do_sample=True`
- *beam-search decoding* if `num_beams>1` and `do_sample=False`
- *beam-search multinomial sampling* if `num_beams>1` and `do_sample=True`
- *diverse beam-search decoding* if `num_beams>1` and `num_beam_groups>1`
- *constrained beam-search decoding* if `constraints!=None` or `force_words_ids!=None`
- *assisted decoding* if `assistant_model` or `prompt_lookup_num_tokens` is passed to `.generate()` | 452_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationmixin | .md | - *assisted decoding* if `assistant_model` or `prompt_lookup_num_tokens` is passed to `.generate()`
To learn more about decoding strategies refer to the [text generation strategies guide](../generation_strategies).
- generate
- compute_transition_scores | 452_3_2 |
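For example, `compute_transition_scores` pairs with `generate` to inspect the per-token log-probabilities of a greedy continuation. A minimal sketch with a placeholder checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,               # greedy decoding (num_beams=1, do_sample=False)
    return_dict_in_generate=True,
    output_scores=True,
)

# Log-probability of each generated token under the model
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
generated = outputs.sequences[:, inputs.input_ids.shape[1]:]
for token, score in zip(generated[0], transition_scores[0]):
    print(f"{tokenizer.decode(token)!r}: {score.item():.3f}")
```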
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#tfgenerationmixin | .md | TFGenerationMixin
- generate
- compute_transition_scores | 452_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md | https://huggingface.co/docs/transformers/en/main_classes/text_generation/#flaxgenerationmixin | .md | FlaxGenerationMixin
- generate | 452_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 453_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 453_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer | .md | A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
Rust library [🤗 Tokenizers](https://github.com/huggingface/tokenizers). The "Fast" implementations allow:
1. a significant speed-up in particular when doing batched tokenization and | 453_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer | .md | 1. a significant speed-up in particular when doing batched tokenization and
2. additional methods to map between the original string (characters and words) and the token space (e.g. getting the
index of the token comprising a given character or the span of characters corresponding to a given token).
The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`]
implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and | 453_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer | .md | implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and
"Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library
(downloaded from HuggingFace's AWS S3 repository). They both rely on
[`~tokenization_utils_base.PreTrainedTokenizerBase`] that contains the common methods, and
[`~tokenization_utils_base.SpecialTokensMixin`]. | 453_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer | .md | [`~tokenization_utils_base.SpecialTokensMixin`].
[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main
methods for using all the tokenizers:
- Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and
encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...). | 453_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer | .md | - Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
tokenizer for easy access and making sure they are not split during tokenization.
[`BatchEncoding`] holds the output of the
[`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`, | 453_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer | .md | [`BatchEncoding`] holds the output of the
[`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`,
`encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure python
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by
these methods (`input_ids`, `attention_mask`...). When the tokenizer is a "Fast" tokenizer (i.e., backed by | 453_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer | .md | these methods (`input_ids`, `attention_mask`...). When the tokenizer is a "Fast" tokenizer (i.e., backed by
HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class provides in addition
several advanced alignment methods which can be used to map between the original string (characters and words) and the
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
to a given token). | 453_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#multimodal-tokenizer | .md | Apart from that each tokenizer can be a "multimodal" tokenizer which means that the tokenizer will hold all relevant special tokens
as part of tokenizer attributes for easier access. For example, if the tokenizer is loaded from a vision-language model like LLaVA, you will
be able to access `tokenizer.image_token_id` to obtain the special image token used as a placeholder. | 453_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#multimodal-tokenizer | .md | be able to access `tokenizer.image_token_id` to obtain the special image token used as a placeholder.
To enable extra special tokens for any type of tokenizer, you have to add the following lines and save the tokenizer. Extra special tokens do not
have to be modality-related and can be anything that the model often needs access to. In the code below, the tokenizer at `output_dir` will have direct access
to three more special tokens.
```python
vision_tokenizer = AutoTokenizer.from_pretrained( | 453_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#multimodal-tokenizer | .md | to three more special tokens.
```python
vision_tokenizer = AutoTokenizer.from_pretrained(
"llava-hf/llava-1.5-7b-hf",
extra_special_tokens={"image_token": "<image>", "boi_token": "<image_start>", "eoi_token": "<image_end>"}
)
print(vision_tokenizer.image_token, vision_tokenizer.image_token_id)
("<image>", 32000)
``` | 453_2_2 |
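To make these extra tokens persistent, save the tokenizer afterwards; reloading from the saved directory (the `output_dir` mentioned above) keeps direct access to the new attributes. A minimal sketch:
```python
from transformers import AutoTokenizer

vision_tokenizer = AutoTokenizer.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    extra_special_tokens={"image_token": "<image>", "boi_token": "<image_start>", "eoi_token": "<image_end>"},
)

# Saving writes the extra special tokens into the tokenizer files,
# so the reloaded tokenizer exposes the same attributes directly.
vision_tokenizer.save_pretrained("output_dir")
reloaded = AutoTokenizer.from_pretrained("output_dir")
print(reloaded.image_token, reloaded.image_token_id)
```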
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | Base class for all slow tokenizers.
Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`].
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading
pretrained tokenizers and adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers so we don't have to handle the | 453_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the
specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...).
Class attributes (overridden by derived classes)
- **vocab_files_names** (`Dict[str, str]`) -- A dictionary with, as keys, the `__init__` keyword name of each
vocabulary file required by the model, and as associated values, the filename for saving the associated file
(string). | 453_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | vocabulary file required by the model, and as associated values, the filename for saving the associated file
(string).
- **pretrained_vocab_files_map** (`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the
high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the
low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the
associated pretrained vocabulary file. | 453_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | associated pretrained vocabulary file.
- **model_input_names** (`List[str]`) -- A list of inputs expected in the forward pass of the model.
- **padding_side** (`str`) -- The default value for the side on which the model should have padding applied.
Should be `'right'` or `'left'`.
- **truncation_side** (`str`) -- The default value for the side on which the model should have truncation
applied. Should be `'right'` or `'left'`.
Args:
model_max_length (`int`, *optional*): | 453_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | applied. Should be `'right'` or `'left'`.
Args:
model_max_length (`int`, *optional*):
The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is
loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the
value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will
default to VERY_LARGE_INTEGER (`int(1e30)`).
padding_side (`str`, *optional*): | 453_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | default to VERY_LARGE_INTEGER (`int(1e30)`).
padding_side (`str`, *optional*):
The side on which the model should have padding applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
truncation_side (`str`, *optional*):
The side on which the model should have truncation applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
chat_template (`str`, *optional*): | 453_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | Default value is picked from the class attribute of the same name.
chat_template (`str`, *optional*):
A Jinja template string that will be used to format lists of chat messages. See
https://huggingface.co/docs/transformers/chat_templating for a full description.
model_input_names (`List[string]`, *optional*):
The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or
`"attention_mask"`). Default value is picked from the class attribute of the same name. | 453_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | `"attention_mask"`). Default value is picked from the class attribute of the same name.
bos_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and
`self.bos_token_id`.
eos_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing the end of a sentence. Will be associated to `self.eos_token` and
`self.eos_token_id`.
unk_token (`str` or `tokenizers.AddedToken`, *optional*): | 453_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | `self.eos_token_id`.
unk_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and
`self.unk_token_id`.
sep_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token separating two different sentences in the same input (used by BERT for instance). Will be
associated to `self.sep_token` and `self.sep_token_id`.
pad_token (`str` or `tokenizers.AddedToken`, *optional*): | 453_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | associated to `self.sep_token` and `self.sep_token_id`.
pad_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by
attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`.
cls_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing the class of the input (used by BERT for instance). Will be associated to | 453_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | A special token representing the class of the input (used by BERT for instance). Will be associated to
`self.cls_token` and `self.cls_token_id`.
mask_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing a masked token (used by masked-language modeling pretraining objectives, like
BERT). Will be associated to `self.mask_token` and `self.mask_token_id`.
additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): | 453_3_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*):
A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with
`skip_special_tokens=True`. If they are not part of the vocabulary, they will be added at the end
of the vocabulary.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not the model should cleanup the spaces that were added when splitting the input text during the | 453_3_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | Whether or not the model should cleanup the spaces that were added when splitting the input text during the
tokenization process.
split_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the special tokens should be split during the tokenization process. Passing `split_special_tokens=True` will affect the
internal state of the tokenizer. The default behavior is to not split special tokens. This means that if
`<s>` is the `bos_token`, then `tokenizer.tokenize("<s>")` gives `['<s>']`. Otherwise, if | 453_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer | .md | `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>`]. Otherwise, if
`split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will be give `['<','s', '>']`.
- __call__
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all | 453_3_13 |
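A minimal sketch of the most common methods listed above, using a placeholder checkpoint and `use_fast=False` to get the slow (pure Python) tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased", use_fast=False)

# __call__: encode one or more texts, optionally with padding/truncation and tensor output
encoded = tokenizer(["Hello world!", "Tokenizers are fun."], padding=True, return_tensors="pt")
print(encoded["input_ids"].shape)

# add_tokens / add_special_tokens extend the vocabulary
tokenizer.add_tokens(["new_token_1", "new_token_2"])
tokenizer.add_special_tokens({"additional_special_tokens": ["<my_sep>"]})

# encode / decode / batch_decode
ids = tokenizer.encode("Hello world!")
print(tokenizer.decode(ids, skip_special_tokens=True))
print(tokenizer.batch_decode(encoded["input_ids"], skip_special_tokens=True))
```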
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | The [`PreTrainedTokenizerFast`] depend on the [tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 tokenizers library can be
loaded very simply into 🤗 transformers. Take a look at the [Using tokenizers from 🤗 tokenizers](../fast_tokenizers) page to understand how this is done.
Base class for all fast tokenizers (wrapping the HuggingFace tokenizers library).
Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`]. | 453_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | Base class for all slow tokenizers.
Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`].
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading
pretrained tokenizers and adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers so we don't have to handle the | 453_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the
specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...).
Class attributes (overridden by derived classes)
- **vocab_files_names** (`Dict[str, str]`) -- A dictionary with, as keys, the `__init__` keyword name of each
vocabulary file required by the model, and as associated values, the filename for saving the associated file
(string). | 453_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | vocabulary file required by the model, and as associated values, the filename for saving the associated file
(string).
- **pretrained_vocab_files_map** (`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the
high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the
low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the
associated pretrained vocabulary file. | 453_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | associated pretrained vocabulary file.
- **model_input_names** (`List[str]`) -- A list of inputs expected in the forward pass of the model.
- **padding_side** (`str`) -- The default value for the side on which the model should have padding applied.
Should be `'right'` or `'left'`.
- **truncation_side** (`str`) -- The default value for the side on which the model should have truncation
applied. Should be `'right'` or `'left'`.
Args:
model_max_length (`int`, *optional*): | 453_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | applied. Should be `'right'` or `'left'`.
Args:
model_max_length (`int`, *optional*):
The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is
loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the
value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will
default to VERY_LARGE_INTEGER (`int(1e30)`).
padding_side (`str`, *optional*): | 453_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | default to VERY_LARGE_INTEGER (`int(1e30)`).
padding_side (`str`, *optional*):
The side on which the model should have padding applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
truncation_side (`str`, *optional*):
The side on which the model should have truncation applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
chat_template (`str`, *optional*): | 453_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | Default value is picked from the class attribute of the same name.
chat_template (`str`, *optional*):
A Jinja template string that will be used to format lists of chat messages. See
https://huggingface.co/docs/transformers/chat_templating for a full description.
model_input_names (`List[string]`, *optional*):
The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or
`"attention_mask"`). Default value is picked from the class attribute of the same name. | 453_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | `"attention_mask"`). Default value is picked from the class attribute of the same name.
bos_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and
`self.bos_token_id`.
eos_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing the end of a sentence. Will be associated to `self.eos_token` and
`self.eos_token_id`.
unk_token (`str` or `tokenizers.AddedToken`, *optional*): | 453_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | `self.eos_token_id`.
unk_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and
`self.unk_token_id`.
sep_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token separating two different sentences in the same input (used by BERT for instance). Will be
associated to `self.sep_token` and `self.sep_token_id`.
pad_token (`str` or `tokenizers.AddedToken`, *optional*): | 453_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | associated to `self.sep_token` and `self.sep_token_id`.
pad_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by
attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`.
cls_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing the class of the input (used by BERT for instance). Will be associated to | 453_4_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | A special token representing the class of the input (used by BERT for instance). Will be associated to
`self.cls_token` and `self.cls_token_id`.
mask_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token representing a masked token (used by masked-language modeling pretraining objectives, like
BERT). Will be associated to `self.mask_token` and `self.mask_token_id`.
additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): | 453_4_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*):
A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with
`skip_special_tokens=True`. If they are not part of the vocabulary, they will be added at the end
of the vocabulary.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not the model should cleanup the spaces that were added when splitting the input text during the | 453_4_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | Whether or not the model should cleanup the spaces that were added when splitting the input text during the
tokenization process.
split_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the special tokens should be split during the tokenization process. Passing `split_special_tokens=True` will affect the
internal state of the tokenizer. The default behavior is to not split special tokens. This means that if
`<s>` is the `bos_token`, then `tokenizer.tokenize("<s>")` gives `['<s>']`. Otherwise, if | 453_4_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast | .md | `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>`]. Otherwise, if
`split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will be give `['<','s', '>']`.
PreTrainedTokenizerFast
- __call__
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all | 453_4_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#batchencoding | .md | Holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase.__call__`],
[`~tokenization_utils_base.PreTrainedTokenizerBase.encode_plus`] and
[`~tokenization_utils_base.PreTrainedTokenizerBase.batch_encode_plus`] methods (tokens, attention_masks, etc).
This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes
utility methods to map from word/character space to token space.
Args:
data (`dict`, *optional*): | 453_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#batchencoding | .md | utility methods to map from word/character space to token space.
Args:
data (`dict`, *optional*):
Dictionary of lists/arrays/tensors returned by the `__call__`/`encode_plus`/`batch_encode_plus` methods
('input_ids', 'attention_mask', etc.).
encoding (`tokenizers.Encoding` or `Sequence[tokenizers.Encoding]`, *optional*):
If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character | 453_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#batchencoding | .md | If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character
space to token space, the `tokenizers.Encoding` instance or list of instances (for batches) holds this
information.
tensor_type (`Union[None, str, TensorType]`, *optional*):
You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at
initialization.
prepend_batch_axis (`bool`, *optional*, defaults to `False`): | 453_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md | https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#batchencoding | .md | initialization.
prepend_batch_axis (`bool`, *optional*, defaults to `False`):
Whether or not to add a batch axis when converting to tensors (see `tensor_type` above). Note that this
parameter has an effect if the parameter `tensor_type` is set, *otherwise has no effect*.
n_sequences (`Optional[int]`, *optional*):
The number of sequences used to generate each sample from the batch encoded in this `BatchEncoding` (e.g. 1 for
a single sentence, 2 for a pair of sentences). | 453_5_3 |
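When the tokenizer is a fast one, the returned `BatchEncoding` exposes the alignment helpers mentioned above. A minimal sketch with a placeholder checkpoint:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")  # fast tokenizer by default
encoding = tokenizer("Tokenizers map characters to tokens.")

print(encoding.tokens())           # token strings
print(encoding.word_ids())         # word index for each token (None for special tokens)
print(encoding.char_to_token(3))   # index of the token containing character 3
print(encoding.token_to_chars(1))  # character span covered by token 1
```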
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 454_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 454_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#optimization | .md | The `.optimization` module provides:
- an optimizer with weight decay fixed that can be used to fine-tune models,
- several schedules in the form of schedule objects that inherit from `_LRSchedule`, and
- a gradient accumulation class to accumulate the gradients of multiple batches | 454_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adamw-pytorch | .md | Implements Adam algorithm with weight decay fix as introduced in [Decoupled Weight Decay
Regularization](https://arxiv.org/abs/1711.05101).
Parameters:
params (`Iterable[nn.parameter.Parameter]`):
Iterable of parameters to optimize or dictionaries defining parameter groups.
lr (`float`, *optional*, defaults to 0.001):
The learning rate to use.
betas (`Tuple[float,float]`, *optional*, defaults to `(0.9, 0.999)`):
Adam's betas parameters (b1, b2).
eps (`float`, *optional*, defaults to 1e-06): | 454_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adamw-pytorch | .md | Adam's betas parameters (b1, b2).
eps (`float`, *optional*, defaults to 1e-06):
Adam's epsilon for numerical stability.
weight_decay (`float`, *optional*, defaults to 0.0):
Decoupled weight decay to apply.
correct_bias (`bool`, *optional*, defaults to `True`):
Whether or not to correct bias in Adam (for instance, in Bert TF repository they use `False`).
no_deprecation_warning (`bool`, *optional*, defaults to `False`):
A flag used to disable the deprecation warning (set to `True` to disable the warning). | 454_2_1 |
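A minimal sketch of constructing this optimizer, assuming it is still importable from `transformers.optimization` (recent versions recommend `torch.optim.AdamW` instead):
```python
import torch
from transformers.optimization import AdamW

model = torch.nn.Linear(10, 2)  # stand-in for any PreTrainedModel

optimizer = AdamW(
    model.parameters(),
    lr=5e-5,
    betas=(0.9, 0.999),
    eps=1e-6,
    weight_decay=0.01,
    correct_bias=True,
    no_deprecation_warning=True,  # silence the deprecation warning mentioned above
)

loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```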
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | AdaFactor pytorch implementation can be used as a drop in replacement for Adam original fairseq code:
https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py
Paper: *Adafactor: Adaptive Learning Rates with Sublinear Memory Cost* https://arxiv.org/abs/1804.04235 Note that
this optimizer internally adjusts the learning rate depending on the `scale_parameter`, `relative_step` and | 454_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | this optimizer internally adjusts the learning rate depending on the `scale_parameter`, `relative_step` and
`warmup_init` options. To use a manual (external) learning rate schedule you should set `scale_parameter=False` and
`relative_step=False`.
Arguments:
params (`Iterable[nn.parameter.Parameter]`):
Iterable of parameters to optimize or dictionaries defining parameter groups.
lr (`float`, *optional*):
The external learning rate.
eps (`Tuple[float, float]`, *optional*, defaults to `(1e-30, 0.001)`): | 454_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | lr (`float`, *optional*):
The external learning rate.
eps (`Tuple[float, float]`, *optional*, defaults to `(1e-30, 0.001)`):
Regularization constants for square gradient and parameter scale respectively
clip_threshold (`float`, *optional*, defaults to 1.0):
Threshold of root mean square of final gradient update
decay_rate (`float`, *optional*, defaults to -0.8):
Coefficient used to compute running averages of square
beta1 (`float`, *optional*):
Coefficient used for computing running averages of gradient | 454_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | beta1 (`float`, *optional*):
Coefficient used for computing running averages of gradient
weight_decay (`float`, *optional*, defaults to 0.0):
Weight decay (L2 penalty)
scale_parameter (`bool`, *optional*, defaults to `True`):
If True, learning rate is scaled by root mean square
relative_step (`bool`, *optional*, defaults to `True`):
If True, time-dependent learning rate is computed instead of external learning rate
warmup_init (`bool`, *optional*, defaults to `False`): | 454_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | warmup_init (`bool`, *optional*, defaults to `False`):
Time-dependent learning rate computation depends on whether warm-up initialization is being used
This implementation handles low-precision (FP16, bfloat16) values, but we have not thoroughly tested it.
Recommended T5 finetuning settings (https://discuss.huggingface.co/t/t5-finetuning-tips/684/3):
- Training without LR warmup or clip_threshold is not recommended.
- use scheduled LR warm-up to fixed LR | 454_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | - Training without LR warmup or clip_threshold is not recommended.
- use scheduled LR warm-up to fixed LR
- use clip_threshold=1.0 (https://arxiv.org/abs/1804.04235)
- Disable relative updates
- Use scale_parameter=False
- Additional optimizer operations like gradient clipping should not be used alongside Adafactor
Example:
```python
Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3)
```
Others reported the following combination to work well: | 454_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | ```
Others reported the following combination to work well:
```python
Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
```
When using `lr=None` with [`Trainer`] you will most likely need to use [`~optimization.AdafactorSchedule`]
scheduler as follows:
```python
from transformers.optimization import Adafactor, AdafactorSchedule | 454_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch | .md | optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)
trainer = Trainer(..., optimizers=(optimizer, lr_scheduler))
```
Usage:
```python
# replace AdamW with Adafactor
optimizer = Adafactor(
model.parameters(),
lr=1e-3,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
relative_step=False,
scale_parameter=False,
warmup_init=False,
)
``` | 454_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adamweightdecay-tensorflow | .md | Implements Adam algorithm with weight decay fix as introduced in [Decoupled Weight Decay
Regularization](https://arxiv.org/abs/1711.05101).
Parameters:
params (`Iterable[nn.parameter.Parameter]`):
Iterable of parameters to optimize or dictionaries defining parameter groups.
lr (`float`, *optional*, defaults to 0.001):
The learning rate to use.
betas (`Tuple[float,float]`, *optional*, defaults to `(0.9, 0.999)`):
Adam's betas parameters (b1, b2).
eps (`float`, *optional*, defaults to 1e-06): | 454_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adamweightdecay-tensorflow | .md | Adam's betas parameters (b1, b2).
eps (`float`, *optional*, defaults to 1e-06):
Adam's epsilon for numerical stability.
weight_decay (`float`, *optional*, defaults to 0.0):
Decoupled weight decay to apply.
correct_bias (`bool`, *optional*, defaults to `True`):
Whether or not to correct bias in Adam (for instance, in Bert TF repository they use `False`).
no_deprecation_warning (`bool`, *optional*, defaults to `False`):
A flag used to disable the deprecation warning (set to `True` to disable the warning). | 454_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adamweightdecay-tensorflow | .md | A flag used to disable the deprecation warning (set to `True` to disable the warning).
AdamWeightDecay
create_optimizer | 454_4_2 |
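`create_optimizer` bundles an `AdamWeightDecay` optimizer with a warmup/decay learning-rate schedule for Keras training. A minimal sketch, assuming TensorFlow is installed:
```python
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=10_000,
    num_warmup_steps=500,
    weight_decay_rate=0.01,
)
# model.compile(optimizer=optimizer)  # typical Keras usage afterwards
```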
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Scheduler names for the parameter `lr_scheduler_type` in [`TrainingArguments`].
By default, it uses "linear". Internally, this retrieves `get_linear_schedule_with_warmup` scheduler from [`Trainer`].
Scheduler types:
- "linear" = get_linear_schedule_with_warmup
- "cosine" = get_cosine_schedule_with_warmup
- "cosine_with_restarts" = get_cosine_with_hard_restarts_schedule_with_warmup
- "polynomial" = get_polynomial_decay_schedule_with_warmup
- "constant" = get_constant_schedule | 454_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | - "polynomial" = get_polynomial_decay_schedule_with_warmup
- "constant" = get_constant_schedule
- "constant_with_warmup" = get_constant_schedule_with_warmup
- "inverse_sqrt" = get_inverse_sqrt_schedule
- "reduce_lr_on_plateau" = get_reduce_on_plateau_schedule
- "cosine_with_min_lr" = get_cosine_with_min_lr_schedule_with_warmup
- "warmup_stable_decay" = get_wsd_schedule
Unified API to get any scheduler from its name.
Args:
name (`str` or `SchedulerType`):
The name of the scheduler to use. | 454_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Unified API to get any scheduler from its name.
Args:
name (`str` or `SchedulerType`):
The name of the scheduler to use.
optimizer (`torch.optim.Optimizer`):
The optimizer that will be used during training.
num_warmup_steps (`int`, *optional*):
The number of warmup steps to do. This is not required by all schedulers (hence the argument being
optional), the function will raise an error if it's unset and the scheduler type requires it.
num_training_steps (`int`, *optional*): | 454_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | num_training_steps (`int``, *optional*):
The number of training steps to do. This is not required by all schedulers (hence the argument being
optional), the function will raise an error if it's unset and the scheduler type requires it.
scheduler_specific_kwargs (`dict`, *optional*):
Extra parameters for schedulers such as cosine with restarts. Mismatched scheduler types and scheduler
parameters will cause the scheduler function to raise a TypeError. | 454_5_3 |
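A minimal sketch of `get_scheduler`, using a plain PyTorch optimizer as a stand-in for the one built by [`Trainer`]:
```python
import torch
from transformers import get_scheduler

model = torch.nn.Linear(10, 2)  # stand-in for any model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

lr_scheduler = get_scheduler(
    name="cosine",             # any of the scheduler names listed above
    optimizer=optimizer,
    num_warmup_steps=100,
    num_training_steps=1_000,
)

for step in range(1_000):
    optimizer.step()           # after the usual forward/backward pass
    lr_scheduler.step()
    optimizer.zero_grad()
```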
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | parameters will cause the scheduler function to raise a TypeError.
Create a schedule with a constant learning rate, using the learning rate set in optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. | 454_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
Create a schedule with a constant learning rate, using the learning rate set in optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
get_constant_schedule_with_warmup | 454_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
get_constant_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png"/>
Create a schedule with a learning rate that decreases following the values of the cosine function between the
initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the
initial lr set in the optimizer.
Args: | 454_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | initial lr set in the optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
num_warmup_steps (`int`):
The number of steps for the warmup phase.
num_training_steps (`int`):
The total number of training steps.
num_cycles (`float`, *optional*, defaults to 0.5):
The number of waves in the cosine schedule (the default is to just decrease from the max value to 0
following a half-cosine).
last_epoch (`int`, *optional*, defaults to -1): | 454_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | following a half-cosine).
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png"/>
Create a schedule with a learning rate that decreases following the values of the cosine function between the | 454_5_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Create a schedule with a learning rate that decreases following the values of the cosine function between the
initial lr set in the optimizer to 0, with several hard restarts, after a warmup period during which it increases
linearly between 0 and the initial lr set in the optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
num_warmup_steps (`int`):
The number of steps for the warmup phase.
num_training_steps (`int`): | 454_5_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | num_warmup_steps (`int`):
The number of steps for the warmup phase.
num_training_steps (`int`):
The total number of training steps.
num_cycles (`int`, *optional*, defaults to 1):
The number of hard restarts to use.
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. | 454_5_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png"/>
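For the hard-restarts variant, `num_cycles` is an integer count of restarts rather than a fraction of a wave. A hedged sketch, assuming the function is `transformers.get_cosine_with_hard_restarts_schedule_with_warmup`:

```python
# Sketch: cosine decay with 3 hard restarts after a linear warmup (assumed function name).
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=3e-4)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer, num_warmup_steps=50, num_training_steps=950, num_cycles=3
)
# Each cycle decays from the peak lr toward 0, then the lr jumps back to the
# peak at the start of the next cycle ("hard restart").
```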
Create a schedule with a learning rate that decreases linearly from the initial lr set in the optimizer to 0, after
a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]): | 454_5_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
num_warmup_steps (`int`):
The number of steps for the warmup phase.
num_training_steps (`int`):
The total number of training steps.
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. | 454_5_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png"/>
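The linear schedule is a common default for fine-tuning. Below is a hedged sketch, assuming `transformers.get_linear_schedule_with_warmup`; the epoch count and steps per epoch are placeholders for whatever the training loop actually uses:

```python
# Sketch: derive num_training_steps from epochs * steps-per-epoch for a linear schedule
# with warmup (assumed function name; sizes are placeholders).
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(16, 4)                        # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

num_epochs, steps_per_epoch = 3, 500                  # e.g. len(train_dataloader)
num_training_steps = num_epochs * steps_per_epoch
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),   # 10% warmup is a common choice
    num_training_steps=num_training_steps,
)
```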
Create a schedule with a learning rate that decreases as a polynomial decay from the initial lr set in the
optimizer to end lr defined by *lr_end*, after a warmup period during which it increases linearly from 0 to the
initial lr set in the optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]): | 454_5_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | initial lr set in the optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
num_warmup_steps (`int`):
The number of steps for the warmup phase.
num_training_steps (`int`):
The total number of training steps.
lr_end (`float`, *optional*, defaults to 1e-7):
The final learning rate reached at the end of the polynomial decay.
power (`float`, *optional*, defaults to 1.0):
The power of the polynomial decay (1.0 corresponds to a linear decay).
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training. | 454_5_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Power factor.
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Note: *power* defaults to 1.0 as in the fairseq implementation, which in turn is based on the original BERT
implementation at
https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. | 454_5_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
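A hedged sketch of the polynomial variant, assuming it is `transformers.get_polynomial_decay_schedule_with_warmup`; the values are illustrative:

```python
# Sketch: polynomial decay from the initial lr down to lr_end after warmup (assumed function name).
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=1_000,
    lr_end=1e-7,      # final learning rate
    power=2.0,        # power=1.0 would reproduce the linear schedule
)
```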
Create a schedule with an inverse square-root learning rate, from the initial lr set in the optimizer, after a
warmup period during which the lr increases linearly from 0 to the initial lr set in the optimizer.
Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
num_warmup_steps (`int`):
The number of steps for the warmup phase.
timescale (`int`, *optional*, defaults to `num_warmup_steps`): | 454_5_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | The number of steps for the warmup phase.
timescale (`int`, *optional*, defaults to `num_warmup_steps`):
The time scale of the inverse square-root decay; larger values make the post-warmup decay slower.
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
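Unlike the schedules above, the inverse square-root schedule does not need a total step count, which makes it convenient for open-ended training. A hedged sketch, assuming `transformers.get_inverse_sqrt_schedule`:

```python
# Sketch: linear warmup followed by an inverse square-root decay (assumed function name).
import torch
from transformers import get_inverse_sqrt_schedule

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-3)
scheduler = get_inverse_sqrt_schedule(optimizer, num_warmup_steps=1_000)
# After warmup the lr decays roughly proportionally to 1/sqrt(step), so it keeps
# shrinking but never reaches a fixed end point.
```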
Create a schedule with a learning rate that has three stages:
1. linear increase from 0 to initial lr.
2. constant lr (equal to initial lr). | 454_5_17 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | 1. linear increase from 0 to initial lr.
2. constant lr (equal to initial lr).
3. decrease following the values of the cosine function from the initial lr set in the optimizer to
a fraction of initial lr.
Args:
optimizer ([`~torch.optim.Optimizer`]):
The optimizer for which to schedule the learning rate.
num_warmup_steps (`int`):
The number of steps for the warmup phase.
num_stable_steps (`int`):
The number of steps for the stable phase.
num_decay_steps (`int`): | 454_5_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | num_stable_steps (`int`):
The number of steps for the stable phase.
num_decay_steps (`int`):
The number of steps for the cosine annealing phase.
min_lr_ratio (`float`, *optional*, defaults to 0):
The minimum learning rate as a ratio of the initial learning rate.
num_cycles (`float`, *optional*, defaults to 0.5):
The number of waves in the cosine schedule (the default is to just decrease from the max value to 0
following a half-cosine).
last_epoch (`int`, *optional*, defaults to -1): | 454_5_19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch | .md | following a half-cosine).
last_epoch (`int`, *optional*, defaults to -1):
The index of the last epoch when resuming training.
Return:
`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. | 454_5_20 |
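Putting the three stages together, a hedged sketch assuming the warmup-stable-decay schedule above is exposed as `transformers.get_wsd_schedule` with exactly the parameters listed (step counts and ratios are illustrative):

```python
# Sketch: warmup -> stable -> cosine decay (WSD) schedule (assumed function name and signature).
import torch
from transformers import get_wsd_schedule

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-3)
scheduler = get_wsd_schedule(
    optimizer,
    num_warmup_steps=100,     # stage 1: linear ramp 0 -> 1e-3
    num_stable_steps=800,     # stage 2: hold at 1e-3
    num_decay_steps=100,      # stage 3: cosine decay toward min_lr_ratio * 1e-3
    min_lr_ratio=0.1,
)
```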
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#warmup-tensorflow | .md | WarmUp | 454_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md | https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#gradientaccumulator-tensorflow | .md | GradientAccumulator | 454_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 455_0_0 |