Dataset columns:
- source: string (470 distinct values)
- url: string (length 49–167)
- file_type: string (1 distinct value)
- chunk: string (length 1–512)
- chunk_id: string (length 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
for a full list of optimizers. optim_args (`str`, *optional*): Optional arguments that are supplied to optimizers such as AnyPrecisionAdamW, AdEMAMix, and GaLore. group_by_length (`bool`, *optional*, defaults to `False`): Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding. length_column_name (`str`, *optional*, defaults to `"length"`):
460_5_67
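A minimal sketch of how the grouping options above might be combined; the `output_dir` and optimizer choice below are illustrative placeholders, not values taken from the documentation.

```python
from transformers import TrainingArguments

# Bucket samples of similar length together to minimize padding when using
# dynamic padding; reuse a precomputed "length" column if the dataset has one.
args = TrainingArguments(
    output_dir="out",              # hypothetical output directory
    optim="adamw_torch",           # any optimizer from the list referenced above
    group_by_length=True,
    length_column_name="length",
)
```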
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
length_column_name (`str`, *optional*, defaults to `"length"`): Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unless `group_by_length` is `True` and the dataset is an instance of `Dataset`. report_to (`str` or `List[str]`, *optional*, defaults to `"all"`): The list of integrations to report the results and logs to. Supported platforms are `"azure_ml"`,
460_5_68
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
The list of integrations to report the results and logs to. Supported platforms are `"azure_ml"`, `"clearml"`, `"codecarbon"`, `"comet_ml"`, `"dagshub"`, `"dvclive"`, `"flyte"`, `"mlflow"`, `"neptune"`, `"tensorboard"`, and `"wandb"`. Use `"all"` to report to all integrations installed, `"none"` for no integrations. ddp_find_unused_parameters (`bool`, *optional*): When using distributed training, the value of the flag `find_unused_parameters` passed to
460_5_69
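For illustration, a hedged example of restricting logging to a single integration instead of the default `"all"`; the integration chosen here is arbitrary.

```python
from transformers import TrainingArguments

# Send logs only to TensorBoard; pass "none" to disable all reporting integrations.
args = TrainingArguments(
    output_dir="out",              # placeholder directory
    report_to=["tensorboard"],     # could also be "wandb", "mlflow", etc.
)
```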
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
When using distributed training, the value of the flag `find_unused_parameters` passed to `DistributedDataParallel`. Will default to `False` if gradient checkpointing is used, `True` otherwise. ddp_bucket_cap_mb (`int`, *optional*): When using distributed training, the value of the flag `bucket_cap_mb` passed to `DistributedDataParallel`. ddp_broadcast_buffers (`bool`, *optional*): When using distributed training, the value of the flag `broadcast_buffers` passed to
460_5_70
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
When using distributed training, the value of the flag `broadcast_buffers` passed to `DistributedDataParallel`. Will default to `False` if gradient checkpointing is used, `True` otherwise. dataloader_pin_memory (`bool`, *optional*, defaults to `True`): Whether you want to pin memory in data loaders or not. Will default to `True`. dataloader_persistent_workers (`bool`, *optional*, defaults to `False`): If True, the data loader will not shut down the worker processes after a dataset has been consumed once.
460_5_71
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers' Dataset instances alive, which can potentially speed up training, but will increase RAM usage. Will default to `False`. dataloader_prefetch_factor (`int`, *optional*): Number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers. skip_memory_metrics (`bool`, *optional*, defaults to `True`):
460_5_72
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
skip_memory_metrics (`bool`, *optional*, defaults to `True`): Whether to skip adding memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation speed. push_to_hub (`bool`, *optional*, defaults to `False`): Whether or not to push the model to the Hub every time the model is saved. If this is activated, `output_dir` will become a git repository synced with the repo (determined by `hub_model_id`) and the content
460_5_73
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
`output_dir` will become a git repository synced with the repo (determined by `hub_model_id`) and the content will be pushed each time a save is triggered (depending on your `save_strategy`). Calling [`~Trainer.save_model`] will also trigger a push. <Tip warning={true}> If `output_dir` exists, it needs to be a local clone of the repository to which the [`Trainer`] will push. </Tip> resume_from_checkpoint (`str`, *optional*):
460_5_74
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
pushed. </Tip> resume_from_checkpoint (`str`, *optional*): The path to a folder with a valid checkpoint for your model. This argument is not directly used by [`Trainer`], it's intended to be used by your training/evaluation scripts instead. See the [example scripts](https://github.com/huggingface/transformers/tree/main/examples) for more details. hub_model_id (`str`, *optional*): The name of the repository to keep in sync with the local *output_dir*. It can be a simple model ID in
460_5_75
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
The name of the repository to keep in sync with the local *output_dir*. It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance `"user_name/model"`, which allows you to push to an organization you are a member of with `"organization_name/model"`. Will default to `user_name/output_dir_name` with *output_dir_name* being the name of `output_dir`.
460_5_76
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
name of `output_dir`. hub_strategy (`str` or [`~trainer_utils.HubStrategy`], *optional*, defaults to `"every_save"`): Defines the scope of what is pushed to the Hub and when. Possible values are: - `"end"`: push the model, its configuration, the processing class e.g. tokenizer (if passed along to the [`Trainer`]) and a draft of a model card when the [`~Trainer.save_model`] method is called.
460_5_77
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
draft of a model card when the [`~Trainer.save_model`] method is called. - `"every_save"`: push the model, its configuration, the processing class e.g. tokenizer (if passed along to the [`Trainer`]) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the saves are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training.
460_5_78
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
finished. A last push is made with the final model at the end of training. - `"checkpoint"`: like `"every_save"` but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with `trainer.train(resume_from_checkpoint="last-checkpoint")`. - `"all_checkpoints"`: like `"checkpoint"` but all checkpoints are pushed as they appear in the output folder (so you will get one checkpoint folder per folder in your final repository)
460_5_79
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
folder (so you will get one checkpoint folder per folder in your final repository) hub_token (`str`, *optional*): The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with `huggingface-cli login`. hub_private_repo (`bool`, *optional*): Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
460_5_80
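A hedged sketch tying the Hub-related arguments above together; the repository name and privacy setting are placeholders, not recommendations.

```python
from transformers import TrainingArguments

# Sync output_dir with a Hub repository on every save and keep the repo private.
args = TrainingArguments(
    output_dir="my-model",
    push_to_hub=True,
    hub_model_id="user_name/my-model",   # full repo name; defaults to the name of output_dir
    hub_strategy="every_save",           # or "end", "checkpoint", "all_checkpoints"
    hub_private_repo=True,
)
```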
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
hub_always_push (`bool`, *optional*, defaults to `False`): Unless this is `True`, the `Trainer` will skip pushing a checkpoint when the previous push is not finished. gradient_checkpointing (`bool`, *optional*, defaults to `False`): If True, use gradient checkpointing to save memory at the expense of a slower backward pass. gradient_checkpointing_kwargs (`dict`, *optional*, defaults to `None`): Keyword arguments to be passed to the `gradient_checkpointing_enable` method.
460_5_81
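A small sketch of enabling gradient checkpointing with extra keyword arguments; `use_reentrant` is a `torch.utils.checkpoint` option and the value shown is only an example.

```python
from transformers import TrainingArguments

# Trade extra backward-pass compute for lower activation memory.
args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},  # forwarded to gradient_checkpointing_enable
)
```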
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
Keyword arguments to be passed to the `gradient_checkpointing_enable` method. include_inputs_for_metrics (`bool`, *optional*, defaults to `False`): This argument is deprecated. Use `include_for_metrics` instead, e.g., `include_for_metrics = ["inputs"]`. include_for_metrics (`List[str]`, *optional*, defaults to `[]`): Include additional data in the `compute_metrics` function if needed for metrics computation. Possible options to add to `include_for_metrics` list:
460_5_82
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
Possible options to add to `include_for_metrics` list: - `"inputs"`: Input data passed to the model, intended for calculating input dependent metrics. - `"loss"`: Loss values computed during evaluation, intended for calculating loss dependent metrics. eval_do_concat_batches (`bool`, *optional*, defaults to `True`): Whether to recursively concat inputs/losses/labels/predictions across batches. If `False`, will instead store them as lists, with each batch kept separate.
460_5_83
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
will instead store them as lists, with each batch kept separate. auto_find_batch_size (`bool`, *optional*, defaults to `False`): Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed (`pip install accelerate`). full_determinism (`bool`, *optional*, defaults to `False`): If `True`, [`enable_full_determinism`] is called instead of [`set_seed`] to ensure reproducible results in
460_5_84
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
If `True`, [`enable_full_determinism`] is called instead of [`set_seed`] to ensure reproducible results in distributed training. Important: this will negatively impact the performance, so only use it for debugging. torchdynamo (`str`, *optional*): If set, the backend compiler for TorchDynamo. Possible choices are `"eager"`, `"aot_eager"`, `"inductor"`, `"nvfuser"`, `"aot_nvfuser"`, `"aot_cudagraphs"`, `"ofi"`, `"fx2trt"`, `"onnxrt"` and `"ipex"`. ray_scope (`str`, *optional*, defaults to `"last"`):
460_5_85
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
ray_scope (`str`, *optional*, defaults to `"last"`): The scope to use when doing hyperparameter search with Ray. By default, `"last"` will be used. Ray will then use the last checkpoint of all trials, compare those, and select the best one. However, other options are also available. See the [Ray documentation]( https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) for more options. ddp_timeout (`int`, *optional*, defaults to 1800):
460_5_86
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
more options. ddp_timeout (`int`, *optional*, defaults to 1800): The timeout for `torch.distributed.init_process_group` calls, used to avoid GPU socket timeouts when performing slow operations in distributed runs. Please refer to the [PyTorch documentation] (https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) for more information. use_mps_device (`bool`, *optional*, defaults to `False`):
460_5_87
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
information. use_mps_device (`bool`, *optional*, defaults to `False`): This argument is deprecated. The `mps` device will be used if it is available, similar to the `cuda` device. torch_compile (`bool`, *optional*, defaults to `False`): Whether or not to compile the model using PyTorch 2.0 [`torch.compile`](https://pytorch.org/get-started/pytorch-2.0/). This will use the best defaults for the [`torch.compile` API](https://pytorch.org/docs/stable/generated/torch.compile.html?highlight=torch+compile#torch.compile).
460_5_88
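An illustrative, hedged example of turning on `torch.compile` through these arguments; the commented backend and mode are optional overrides, not required settings.

```python
from transformers import TrainingArguments

# Compile the model with the PyTorch 2.0 defaults; setting a backend or mode
# explicitly would also flip torch_compile to True.
args = TrainingArguments(
    output_dir="out",
    torch_compile=True,
    # torch_compile_backend="inductor",
    # torch_compile_mode="default",
)
```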
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
API](https://pytorch.org/docs/stable/generated/torch.compile.html?highlight=torch+compile#torch.compile). You can customize the defaults with the arguments `torch_compile_backend` and `torch_compile_mode`, but we don't guarantee any of them will work as the support is progressively rolled out in PyTorch. This flag and the whole compile API are experimental and subject to change in future releases. torch_compile_backend (`str`, *optional*):
460_5_89
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
torch_compile_backend (`str`, *optional*): The backend to use in `torch.compile`. If set to any value, `torch_compile` will be set to `True`. Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions. This flag is experimental and subject to change in future releases. torch_compile_mode (`str`, *optional*): The mode to use in `torch.compile`. If set to any value, `torch_compile` will be set to `True`.
460_5_90
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
The mode to use in `torch.compile`. If set to any value, `torch_compile` will be set to `True`. Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions. This flag is experimental and subject to change in future releases. split_batches (`bool`, *optional*): Whether or not the accelerator should split the batches yielded by the dataloaders across the devices during distributed training. If
460_5_91
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
during distributed training. If set to `True`, the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the number of processes you are using (such as GPUs). include_tokens_per_second (`bool`, *optional*): Whether or not to compute the number of tokens per second per device for training speed metrics. This will iterate over the entire training dataloader once beforehand, and will slow down the entire process.
460_5_92
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
This will iterate over the entire training dataloader once beforehand, and will slow down the entire process. include_num_input_tokens_seen (`bool`, *optional*): Whether or not to track the number of input tokens seen throughout training. May be slower in distributed training as gather operations must be called. neftune_noise_alpha (`Optional[float]`): If not `None`, this will activate NEFTune noise embeddings. This can drastically improve model performance
460_5_93
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
If not `None`, this will activate NEFTune noise embeddings. This can drastically improve model performance for instruction fine-tuning. Check out the [original paper](https://arxiv.org/abs/2310.05914) and the [original code](https://github.com/neelsjain/NEFTune). Supports transformers `PreTrainedModel` and also `PeftModel` from peft. The original paper used values in the range [5.0, 15.0]. optim_target_modules (`Union[str, List[str]]`, *optional*):
460_5_94
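A hedged sketch of activating NEFTune; the alpha value is illustrative, chosen from the [5.0, 15.0] range mentioned above.

```python
from transformers import TrainingArguments

# Add NEFTune noise to the embeddings during instruction fine-tuning.
args = TrainingArguments(
    output_dir="out",
    neftune_noise_alpha=5.0,
)
```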
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
optim_target_modules (`Union[str, List[str]]`, *optional*): The target modules to optimize, i.e. the module names that you would like to train. Right now this is used only for the GaLore algorithm (https://arxiv.org/abs/2403.03507); see https://github.com/jiaweizzhao/GaLore for more details. You need to make sure to pass a valid GaLore optimizer, e.g. one of: "galore_adamw", "galore_adamw_8bit", "galore_adafactor", and make sure that the target modules are `nn.Linear` modules only.
460_5_95
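A sketch, under the assumption that the GaLore package is available at training time; the module-name patterns below are hypothetical and must resolve to `nn.Linear` modules in your model.

```python
from transformers import TrainingArguments

# Pair a GaLore optimizer with the target modules it should project.
args = TrainingArguments(
    output_dir="out",
    optim="galore_adamw",                  # or "galore_adamw_8bit", "galore_adafactor"
    optim_target_modules=["attn", "mlp"],  # hypothetical name patterns matching nn.Linear layers
)
```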
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
only. batch_eval_metrics (`Optional[bool]`, defaults to `False`): If set to `True`, evaluation will call compute_metrics at the end of each batch to accumulate statistics rather than saving all eval logits in memory. When set to `True`, you must pass a compute_metrics function that takes a boolean argument `compute_result`, which when passed `True`, will trigger the final global summary statistics from the batch-level summary statistics you've accumulated over the evaluation set.
460_5_96
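A hedged sketch of a `compute_metrics` function written for `batch_eval_metrics=True`: it accumulates batch-level statistics and only returns the global metric when `compute_result` is `True` on the final evaluation batch. The accuracy metric and module-level accumulators are illustrative; a real implementation would also reset them between evaluation runs.

```python
# Accumulators kept across the batches of one evaluation pass (illustrative only).
correct, total = 0, 0

def compute_metrics(eval_pred, compute_result: bool = False):
    """Accumulate per-batch accuracy; emit the summary on the last batch."""
    global correct, total
    preds = eval_pred.predictions.argmax(-1)           # works for tensors or arrays
    correct += int((preds == eval_pred.label_ids).sum())
    total += int(eval_pred.label_ids.shape[0])
    if compute_result:                                  # last batch: return global stats
        return {"accuracy": correct / max(total, 1)}
    return {}                                           # intermediate returns are ignored
```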
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
summary statistics from the batch-level summary statistics you've accumulated over the evaluation set. eval_on_start (`bool`, *optional*, defaults to `False`): Whether to perform an evaluation step (sanity check) before training to ensure the validation steps work correctly. eval_use_gather_object (`bool`, *optional*, defaults to `False`):
460_5_97
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
eval_use_gather_object (`bool`, *optional*, defaults to `False`): Whether to recursively gather objects in a nested list/tuple/dictionary of objects from all devices. This should only be enabled if users are not just returning tensors, and this is actively discouraged by PyTorch. use_liger_kernel (`bool`, *optional*, defaults to `False`): Whether to enable the [Liger](https://github.com/linkedin/Liger-Kernel) kernel for LLM model training.
460_5_98
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
Whether to enable the [Liger](https://github.com/linkedin/Liger-Kernel) kernel for LLM model training. It can effectively increase multi-GPU training throughput by ~20% and reduce memory usage by ~60%. It works out of the box with flash attention, PyTorch FSDP, and Microsoft DeepSpeed. Currently, it supports llama, mistral, mixtral and gemma models. Args: predict_with_generate (`bool`, *optional*, defaults to `False`): Whether to use generate to calculate generative metrics (ROUGE, BLEU).
460_5_99
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
Whether to use generate to calculate generative metrics (ROUGE, BLEU). generation_max_length (`int`, *optional*): The `max_length` to use on each evaluation loop when `predict_with_generate=True`. Will default to the `max_length` value of the model configuration. generation_num_beams (`int`, *optional*): The `num_beams` to use on each evaluation loop when `predict_with_generate=True`. Will default to the `num_beams` value of the model configuration.
460_5_100
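An illustrative sketch of the generation-related arguments; the length and beam values are placeholders rather than recommended settings.

```python
from transformers import Seq2SeqTrainingArguments

# Evaluate with generate() so ROUGE/BLEU-style metrics can be computed.
args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_max_length=128,   # falls back to the model config's max_length if omitted
    generation_num_beams=4,      # falls back to the model config's num_beams if omitted
)
```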
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
`num_beams` value of the model configuration. generation_config (`str` or `Path` or [`~generation.GenerationConfig`], *optional*): Allows loading a [`~generation.GenerationConfig`] from the `from_pretrained` method. This can be either: - a string, the *model id* of a pretrained model configuration hosted inside a model repo on huggingface.co. - a path to a *directory* containing a configuration file saved using the [`~GenerationConfig.save_pretrained`] method, e.g., `./my_model_directory/`.
460_5_101
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/trainer.md
https://huggingface.co/docs/transformers/en/main_classes/trainer/#seq2seqtrainingarguments
.md
[`~GenerationConfig.save_pretrained`] method, e.g., `./my_model_directory/`. - a [`~generation.GenerationConfig`] object. - all
460_5_102
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
461_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
461_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#data-collator
.md
Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of the same type as the elements of `train_dataset` or `eval_dataset`. To be able to build batches, data collators may apply some processing (like padding). Some of them (like [`DataCollatorForLanguageModeling`]) also apply some random data augmentation (like random masking) on the formed batch.
461_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#data-collator
.md
[`DataCollatorForLanguageModeling`]) also apply some random data augmentation (like random masking) on the formed batch. Examples of use can be found in the [example scripts](../examples) or [example notebooks](../notebooks).
461_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#default-data-collator
.md
data.data_collator.default_data_collator Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named: - `label`: handles a single value (int or float) per object - `label_ids`: handles a list of values per object Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for examples of how it's useful.
461_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#defaultdatacollator
.md
data.data_collator.DefaultDataCollator Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named: - `label`: handles a single value (int or float) per object - `label_ids`: handles a list of values per object Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for examples of how it's useful.
461_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#defaultdatacollator
.md
to the model. See glue and ner for examples of how it's useful. This is an object (like other data collators) rather than a pure function like default_data_collator. This can be helpful if you need to set a return_tensors value at initialization. Args: return_tensors (`str`, *optional*, defaults to `"pt"`): The type of Tensor to return. Allowable values are "np", "pt" and "tf".
461_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorwithpadding
.md
data.data_collator.DataCollatorWithPadding Data collator that will dynamically pad the inputs received. Args: tokenizer ([`PreTrainedTokenizer`] or [`PreTrainedTokenizerFast`]): The tokenizer used for encoding the data. padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
461_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorwithpadding
.md
Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: - `True` or `'longest'` (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
461_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorwithpadding
.md
acceptable input length for the model if that argument is not provided. - `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different lengths). max_length (`int`, *optional*): Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
461_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorwithpadding
.md
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). return_tensors (`str`, *optional*, defaults to `"pt"`): The type of Tensor to return. Allowable values are "np", "pt" and "tf".
461_4_3
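A short, hedged usage sketch; the checkpoint name is a placeholder for whichever tokenizer matches your model.

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
collator = DataCollatorWithPadding(tokenizer=tokenizer, pad_to_multiple_of=8)

# Two unpadded encodings of different lengths are padded to a common length.
features = [tokenizer("a short example"), tokenizer("a noticeably longer example sentence")]
batch = collator(features)  # dict of tensors: input_ids, attention_mask, ...
```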
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorfortokenclassification
.md
data.data_collator.DataCollatorForTokenClassification Data collator that will dynamically pad the inputs received, as well as the labels. Args: tokenizer ([`PreTrainedTokenizer`] or [`PreTrainedTokenizerFast`]): The tokenizer used for encoding the data. padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
461_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorfortokenclassification
.md
Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: - `True` or `'longest'` (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
461_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorfortokenclassification
.md
acceptable input length for the model if that argument is not provided. - `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different lengths). max_length (`int`, *optional*): Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
461_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorfortokenclassification
.md
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). label_pad_token_id (`int`, *optional*, defaults to -100): The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions). return_tensors (`str`, *optional*, defaults to `"pt"`): The type of Tensor to return. Allowable values are "np", "pt" and "tf".
461_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforseq2seq
.md
data.data_collator.DataCollatorForSeq2Seq Data collator that will dynamically pad the inputs received, as well as the labels. Args: tokenizer ([`PreTrainedTokenizer`] or [`PreTrainedTokenizerFast`]): The tokenizer used for encoding the data. model ([`PreTrainedModel`], *optional*): The model that is being trained. If set and the model has a *prepare_decoder_input_ids_from_labels* method, it is used to prepare the *decoder_input_ids*. This is useful when using *label_smoothing* to avoid calculating loss twice.
461_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforseq2seq
.md
prepare the *decoder_input_ids*. This is useful when using *label_smoothing* to avoid calculating loss twice. padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: - `True` or `'longest'` (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
461_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforseq2seq
.md
sequence is provided). - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. - `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different lengths). max_length (`int`, *optional*): Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of (`int`, *optional*):
461_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforseq2seq
.md
Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). label_pad_token_id (`int`, *optional*, defaults to -100): The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions).
461_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforseq2seq
.md
The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions). return_tensors (`str`, *optional*, defaults to `"pt"`): The type of Tensor to return. Allowable values are "np", "pt" and "tf".
461_6_4
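A hedged sketch of wiring this collator to a model so `decoder_input_ids` can be prepared from the labels; the checkpoint is only an example.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")            # illustrative checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,                 # lets the collator build decoder_input_ids from labels
    label_pad_token_id=-100,     # padded label positions are ignored by the loss
    pad_to_multiple_of=8,
)
# Typically passed to Trainer(..., data_collator=collator).
```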
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforlanguagemodeling
.md
data.data_collator.DataCollatorForLanguageModeling Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they are not all of the same length. Args: tokenizer ([`PreTrainedTokenizer`] or [`PreTrainedTokenizerFast`]): The tokenizer used for encoding the data. mlm (`bool`, *optional*, defaults to `True`): Whether or not to use masked language modeling. If set to `False`, the labels are the same as the inputs
461_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforlanguagemodeling
.md
Whether or not to use masked language modeling. If set to `False`, the labels are the same as the inputs with the padding tokens ignored (by setting them to -100). Otherwise, the labels are -100 for non-masked tokens and the value to predict for the masked token. mlm_probability (`float`, *optional*, defaults to 0.15): The probability with which to (randomly) mask tokens in the input, when `mlm` is set to `True`. pad_to_multiple_of (`int`, *optional*):
461_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforlanguagemodeling
.md
pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). return_tensors (`str`): The type of Tensor to return. Allowable values are "np", "pt" and "tf". <Tip> For best performance, this data collator should be used with a dataset having items that are dictionaries or
461_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforlanguagemodeling
.md
<Tip> For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the `"special_tokens_mask"` key, as returned by a [`PreTrainedTokenizer`] or a [`PreTrainedTokenizerFast`] with the argument `return_special_tokens_mask=True`. </Tip> - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens
461_7_3
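An illustrative sketch of MLM collation; the checkpoint and sentence are placeholders, and `return_special_tokens_mask=True` follows the tip above.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # illustrative checkpoint
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

features = [tokenizer("masked language modeling example", return_special_tokens_mask=True)]
batch = collator(features)  # input_ids with random [MASK] tokens, labels with -100 elsewhere
```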
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforwholewordmask
.md
data.data_collator.DataCollatorForWholeWordMask Data collator used for language modeling that masks entire words. - collates batches of tensors, honoring their tokenizer's pad_token - preprocesses batches for masked language modeling <Tip> This collator relies on details of the implementation of subword tokenization by [`BertTokenizer`], specifically that subword tokens are prefixed with *##*. For tokenizers that do not adhere to this scheme, this collator will
461_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforwholewordmask
.md
that subword tokens are prefixed with *##*. For tokenizers that do not adhere to this scheme, this collator will produce an output that is roughly equivalent to [`.DataCollatorForLanguageModeling`]. </Tip> - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens
461_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorforpermutationlanguagemodeling
.md
data.data_collator.DataCollatorForPermutationLanguageModeling Data collator used for permutation language modeling. - collates batches of tensors, honoring their tokenizer's pad_token - preprocesses batches for permutation language modeling with procedures specific to XLNet - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens
461_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/data_collator.md
https://huggingface.co/docs/transformers/en/main_classes/data_collator/#datacollatorwithflattening
.md
data.data_collator.DataCollatorWithFlattening Data collator used for the padding-free approach. Does the following: - concatenates the entire mini batch into a single long sequence of shape [1, total_tokens] - uses `separator_id` to separate sequences within the concatenated `labels`, default value is -100 - no padding will be added, returns `input_ids`, `labels` and `position_ids`
461_10_0
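A minimal sketch of the padding-free collation described above; the toy token ids are arbitrary.

```python
from transformers import DataCollatorWithFlattening

collator = DataCollatorWithFlattening()  # separator_id defaults to -100
batch = collator([
    {"input_ids": [1, 2, 3]},
    {"input_ids": [4, 5]},
])
# batch["input_ids"] is a single [1, 5] sequence; "labels" carries -100 separators
# and "position_ids" restart at the beginning of each original example.
```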
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/deepspeed.md
https://huggingface.co/docs/transformers/en/main_classes/deepspeed/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
462_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/deepspeed.md
https://huggingface.co/docs/transformers/en/main_classes/deepspeed/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
462_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/deepspeed.md
https://huggingface.co/docs/transformers/en/main_classes/deepspeed/#deepspeed
.md
[DeepSpeed](https://github.com/microsoft/DeepSpeed), powered by Zero Redundancy Optimizer (ZeRO), is an optimization library for training and fitting very large models onto a GPU. It is available in several ZeRO stages, where each stage progressively saves more GPU memory by partitioning the optimizer state, gradients, parameters, and enabling offloading to a CPU or NVMe. DeepSpeed is integrated with the [`Trainer`] class and most of the setup is automatically taken care of for you.
462_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/deepspeed.md
https://huggingface.co/docs/transformers/en/main_classes/deepspeed/#deepspeed
.md
However, if you want to use DeepSpeed without the [`Trainer`], Transformers provides a [`HfDeepSpeedConfig`] class. <Tip> Learn more about using DeepSpeed with [`Trainer`] in the [DeepSpeed](../deepspeed) guide. </Tip>
462_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/deepspeed.md
https://huggingface.co/docs/transformers/en/main_classes/deepspeed/#hfdeepspeedconfig
.md
integrations.HfDeepSpeedConfig This object contains a DeepSpeed configuration dictionary and can be quickly queried for things like zero stage. A `weakref` of this object is stored in the module's globals to be able to access the config from areas where things like the Trainer object is not available (e.g. `from_pretrained` and `_get_resized_embeddings`). Therefore it's important that this object remains alive while the program is still running.
462_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/deepspeed.md
https://huggingface.co/docs/transformers/en/main_classes/deepspeed/#hfdeepspeedconfig
.md
it's important that this object remains alive while the program is still running. [`Trainer`] uses the `HfTrainerDeepSpeedConfig` subclass instead. That subclass has logic to sync the configuration with values of [`TrainingArguments`] by replacing special placeholder values: `"auto"`. Without this special logic the DeepSpeed configuration is not modified in any way. Args: config_file_or_dict (`Union[str, Dict]`): path to DeepSpeed config file or dict. - all
462_2_1
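A hedged sketch of using [`HfDeepSpeedConfig`] without the [`Trainer`]; it assumes `deepspeed` is installed, and the ZeRO-3 config dict below is a bare placeholder that would need to be completed for a real run. The key point from the text above is that the object must be created before `from_pretrained` and kept alive.

```python
from transformers import AutoModel
from transformers.integrations import HfDeepSpeedConfig

# Placeholder ZeRO-3 config; a real one needs optimizer/scheduler/batch settings.
ds_config = {"zero_optimization": {"stage": 3}, "train_micro_batch_size_per_gpu": 1}

dschf = HfDeepSpeedConfig(ds_config)       # keep this reference alive while the model is used
model = AutoModel.from_pretrained("gpt2")  # weights can now be partitioned at load time
```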
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
463_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
463_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#configuration
.md
The base class [`PretrainedConfig`] implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository). Each derived config class implements model specific attributes. Common attributes present in all config classes are: `hidden_size`, `num_attention_heads`, and `num_hidden_layers`. Text models further implement: `vocab_size`.
463_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
Base class for all configuration classes. Handles a few parameters common to all models' configurations as well as methods for loading/downloading/saving configurations. <Tip> A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to initialize a model does **not** load the model weights. It only affects the model's configuration. </Tip> Class attributes (overridden by derived classes):
463_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
</Tip> Class attributes (overridden by derived classes): - **model_type** (`str`) -- An identifier for the model type, serialized into the JSON file, and used to recreate the correct object in [`~transformers.AutoConfig`]. - **is_composition** (`bool`) -- Whether the config class is composed of multiple sub-configs. In this case the config has to be initialized from two or more configs of type [`~transformers.PretrainedConfig`] like: [`~transformers.EncoderDecoderConfig`] or [`~RagConfig`].
463_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
[`~transformers.EncoderDecoderConfig`] or [`~RagConfig`]. - **keys_to_ignore_at_inference** (`List[str]`) -- A list of keys to ignore by default when looking at dictionary outputs of the model during inference. - **attribute_map** (`Dict[str, str]`) -- A dict that maps model specific attribute names to the standardized naming of attributes. - **base_model_tp_plan** (`Dict[str, Any]`) -- A dict that maps sub-modules FQNs of a base model to a tensor
463_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
- **base_model_tp_plan** (`Dict[str, Any]`) -- A dict that maps sub-modules FQNs of a base model to a tensor parallel plan applied to the sub-module when `model.tensor_parallel` is called. Common attributes (present in all subclasses): - **vocab_size** (`int`) -- The number of tokens in the vocabulary, which is also the first dimension of the embeddings matrix (this attribute may be missing for models that don't have a text modality like ViT). - **hidden_size** (`int`) -- The hidden size of the model.
463_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
- **hidden_size** (`int`) -- The hidden size of the model. - **num_attention_heads** (`int`) -- The number of attention heads used in the multi-head attention layers of the model. - **num_hidden_layers** (`int`) -- The number of blocks in the model. <Tip warning={true}> Setting parameters for sequence generation in the model config is deprecated. For backward compatibility, loading some of them will still be possible, but attempting to overwrite them will throw an exception -- you should set
463_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
some of them will still be possible, but attempting to overwrite them will throw an exception -- you should set them in a [`~transformers.GenerationConfig`]. Check the documentation of [`~transformers.GenerationConfig`] for more information about the individual parameters. </Tip> Args: name_or_path (`str`, *optional*, defaults to `""`): Store the string that was passed to [`PreTrainedModel.from_pretrained`] or
463_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
name_or_path (`str`, *optional*, defaults to `""`): Store the string that was passed to [`PreTrainedModel.from_pretrained`] or [`TFPreTrainedModel.from_pretrained`] as `pretrained_model_name_or_path` if the configuration was created with such a method. output_hidden_states (`bool`, *optional*, defaults to `False`): Whether or not the model should return all hidden-states. output_attentions (`bool`, *optional*, defaults to `False`): Whether or not the model should return all attentions.
463_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
output_attentions (`bool`, *optional*, defaults to `False`): Whether or not the model should return all attentions. return_dict (`bool`, *optional*, defaults to `True`): Whether or not the model should return a [`~transformers.utils.ModelOutput`] instead of a plain tuple. is_encoder_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as an encoder/decoder or not. is_decoder (`bool`, *optional*, defaults to `False`):
463_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
Whether the model is used as an encoder/decoder or not. is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not (if not, the model is used as an encoder). cross_attention_hidden_size (`int`, *optional*): The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder setting and the cross-attention hidden dimension differs from `self.config.hidden_size`. add_cross_attention (`bool`, *optional*, defaults to `False`):
463_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
add_cross_attention (`bool`, *optional*, defaults to `False`): Whether cross-attention layers should be added to the model. Note, this option is only relevant for models that can be used as decoder models within the [`EncoderDecoderModel`] class, which consists of all models in `AUTO_MODELS_FOR_CAUSAL_LM`. tie_encoder_decoder (`bool`, *optional*, defaults to `False`): Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder
463_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder and decoder model to have the exact same parameter names. prune_heads (`Dict[int, List[int]]`, *optional*, defaults to `{}`): Pruned heads of the model. The keys are the selected layer indices and the associated values, the list of heads to prune in said layer. For instance `{1: [0, 2], 2: [2, 3]}` will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
463_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
For instance `{1: [0, 2], 2: [2, 3]}` will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2. chunk_size_feed_forward (`int`, *optional*, defaults to `0`): The chunk size of all feed forward layers in the residual attention blocks. A chunk size of `0` means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes `n` < sequence_length embeddings at a time. For more information on feed forward chunking, see [How does Feed
463_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
sequence_length embeddings at a time. For more information on feed forward chunking, see [How does Feed Forward Chunking work?](../glossary.html#feed-forward-chunking). > Parameters for fine-tuning tasks architectures (`List[str]`, *optional*): Model architectures that can be used with the model pretrained weights. finetuning_task (`str`, *optional*): Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow or PyTorch) checkpoint.
463_2_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
or PyTorch) checkpoint. id2label (`Dict[int, str]`, *optional*): A map from index (for instance prediction index, or target index) to label. label2id (`Dict[str, int]`, *optional*): A map from label to index for the model. num_labels (`int`, *optional*): Number of labels to use in the last layer added to the model, typically for a classification task. task_specific_params (`Dict[str, Any]`, *optional*): Additional keyword arguments to store for the current task. problem_type (`str`, *optional*):
463_2_13
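An illustrative sketch of attaching fine-tuning label metadata to a configuration; the checkpoint and label names are placeholders.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "bert-base-uncased",                                    # illustrative checkpoint
    num_labels=3,
    id2label={0: "negative", 1: "neutral", 2: "positive"},
    label2id={"negative": 0, "neutral": 1, "positive": 2},
    problem_type="single_label_classification",
)
```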
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
Additional keyword arguments to store for the current task. problem_type (`str`, *optional*): Problem type for `XxxForSequenceClassification` models. Can be one of `"regression"`, `"single_label_classification"` or `"multi_label_classification"`. > Parameters linked to the tokenizer tokenizer_class (`str`, *optional*): The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default). prefix (`str`, *optional*):
463_2_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
model by default). prefix (`str`, *optional*): A specific prompt that should be added at the beginning of each text before calling the model. bos_token_id (`int`, *optional*): The id of the _beginning-of-stream_ token. pad_token_id (`int`, *optional*): The id of the _padding_ token. eos_token_id (`int`, *optional*): The id of the _end-of-stream_ token. decoder_start_token_id (`int`, *optional*): If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token.
463_2_15
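As an illustration of these token-id fields on an encoder-decoder configuration (T5 is used as an example; the ids must match whatever tokenizer the model is paired with):
```python
from transformers import T5Config

# Illustrative ids only -- in practice they come from the tokenizer's vocabulary.
config = T5Config(pad_token_id=0, eos_token_id=1, decoder_start_token_id=0)

# decoder_start_token_id is the token the decoder is fed first when generation begins.
print(config.pad_token_id, config.eos_token_id, config.decoder_start_token_id)
```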
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token. sep_token_id (`int`, *optional*): The id of the _separation_ token. > PyTorch specific parameters torchscript (`bool`, *optional*, defaults to `False`): Whether or not the model should be used with Torchscript. tie_word_embeddings (`bool`, *optional*, defaults to `True`): Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the
463_2_16
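A sketch of what weight tying means in practice, assuming a BERT masked-LM head whose output projection can be tied to the input embeddings:
```python
from transformers import BertConfig, BertForMaskedLM

# With tie_word_embeddings=True (the default), the LM head's output projection shares
# its weight matrix with the input word embeddings.
model = BertForMaskedLM(BertConfig(tie_word_embeddings=True))
shared = model.get_input_embeddings().weight is model.get_output_embeddings().weight
print(shared)  # expected to be True for this sketch
```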
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the model has an output word embedding layer. torch_dtype (`str`, *optional*): The `dtype` of the weights. This attribute can be used to initialize the model to a non-default `dtype` (which is normally `float32`) and thus allow for optimal storage allocation. For example, if the saved model is `float16`, ideally we want to load it back using the minimal amount of memory needed to load
463_2_17
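A hedged sketch of how the dtype ends up in the configuration, assuming the `torch_dtype` attribute described above:
```python
import torch
from transformers import BertConfig

config = BertConfig(torch_dtype=torch.float16)

# In memory the attribute holds a torch dtype; when serialized it becomes the bare
# string "float16", without the `torch.` prefix.
print(config.torch_dtype)
print(config.to_dict()["torch_dtype"])
```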
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
model is `float16`, ideally we want to load it back using the minimal amount of memory needed to load `float16` weights. Since the config object is stored in plain text, this attribute contains just the floating type string without the `torch.` prefix. For example, for `torch.float16`, `torch_dtype` is the `"float16"` string. This attribute is currently not being used during model loading time, but this may change in the future
463_2_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
`"float16"` string. This attribute is currently not being used during model loading time, but this may change in the future versions. But we can already start preparing for the future by saving the dtype with save_pretrained. > TensorFlow specific parameters use_bfloat16 (`bool`, *optional*, defaults to `False`): Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models). tf_legacy_loss (`bool`, *optional*, defaults to `False`):
463_2_19
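For instance, a rough sketch of the "save the dtype for later" idea (the output directory is made up, and the exact contents of the saved config may vary by library version):
```python
import torch
from transformers import BertConfig, BertModel

# Build a deliberately tiny model in float16 and save it; the dtype is recorded in
# config.json (as "float16") so a later load can allocate memory accordingly.
config = BertConfig(num_hidden_layers=2, hidden_size=64, num_attention_heads=2)
model = BertModel(config).to(torch.float16)
model.save_pretrained("./tiny-bert-fp16")  # hypothetical output directory
```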
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/configuration.md
https://huggingface.co/docs/transformers/en/main_classes/configuration/#pretrainedconfig
.md
tf_legacy_loss (`bool`, *optional*, defaults to `False`): Whether the model should use legacy TensorFlow losses. Legacy losses have variable output shapes and may not be XLA-compatible. This option is here for backward compatibility and will be removed in Transformers v5. loss_type (`str`, *optional*): The type of loss that the model should use. It should be one of `LOSS_MAPPING`'s keys; otherwise the loss will be automatically inferred from the model architecture. - push_to_hub - all
463_2_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md
https://huggingface.co/docs/transformers/en/main_classes/logging/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
464_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md
https://huggingface.co/docs/transformers/en/main_classes/logging/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
464_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md
https://huggingface.co/docs/transformers/en/main_classes/logging/#logging
.md
🤗 Transformers has a centralized logging system, so that you can set up the verbosity of the library easily. Currently, the default verbosity of the library is `WARNING`. To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity to the INFO level. ```python import transformers
464_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md
https://huggingface.co/docs/transformers/en/main_classes/logging/#logging
.md
transformers.logging.set_verbosity_info() ``` You can also use the environment variable `TRANSFORMERS_VERBOSITY` to override the default verbosity. You can set it to one of the following: `debug`, `info`, `warning`, `error`, `critical`, `fatal`. For example: ```bash TRANSFORMERS_VERBOSITY=error ./myprogram.py ``` Additionally, some `warnings` can be disabled by setting the environment variable
464_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md
https://huggingface.co/docs/transformers/en/main_classes/logging/#logging
.md
``` Additionally, some `warnings` can be disabled by setting the environment variable `TRANSFORMERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using [`logger.warning_advice`]. For example: ```bash TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py ``` Here is an example of how to use the same logger as the library in your own module or script: ```python from transformers.utils import logging
464_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md
https://huggingface.co/docs/transformers/en/main_classes/logging/#logging
.md
logging.set_verbosity_info() logger = logging.get_logger("transformers") logger.info("INFO") logger.warning("WARN") ``` All the methods of this logging module are documented below; the main ones are [`logging.get_verbosity`] to get the current level of verbosity in the logger and [`logging.set_verbosity`] to set the verbosity to the level of your choice. In order (from the least verbose to the most verbose), those levels (with their corresponding int values in parentheses) are:
464_1_3
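As a small sketch of working with these levels programmatically (using the level constants re-exported by `transformers.utils.logging`):
```python
from transformers.utils import logging

# Query the current level, switch to errors only, then restore the previous setting.
previous = logging.get_verbosity()
logging.set_verbosity(logging.ERROR)
logging.set_verbosity(previous)
print(previous)  # e.g. 30 for WARNING
```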