source: stringclasses (470 values)
url: stringlengths (49 to 167)
file_type: stringclasses (1 value)
chunk: stringlengths (1 to 512)
chunk_id: stringlengths (5 to 9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
```py >>> pt_outputs = pt_model(**pt_batch) ``` The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities: ```py >>> from torch import nn
23_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf>
23_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
[0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> 🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load a [`TFAutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`TFAutoModel`] for the task. For text (or sequence) classification, you should load [`TFAutoModelForSequenceClassification`]: ```py >>> from transformers import TFAutoModelForSequenceClassification
23_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] class. </Tip> Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is: ```py >>> tf_outputs = tf_model(tf_batch) ```
23_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
```py >>> tf_outputs = tf_model(tf_batch) ``` The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities: ```py >>> import tensorflow as tf
23_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation
23_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#automodel
.md
<Tip> All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation function (like softmax) because the final activation function is often fused with the loss. Model outputs are special dataclasses, so their attributes are autocompleted in an IDE. The model outputs also behave like a tuple or a dictionary (you can index with an integer, a slice, or a string), in which case attributes that are `None` are ignored. </Tip>
23_6_9
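To make the indexing behavior concrete, here is a brief sketch (it assumes the `pt_outputs` object from the PyTorch example above; attribute, string, and integer access all reach the same tensor):

```py
>>> logits = pt_outputs.logits          # attribute access
>>> same_logits = pt_outputs["logits"]  # dictionary-style access
>>> first_entry = pt_outputs[0]         # tuple-style access; entries that are None are skipped
```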
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#save-a-model
.md
<frameworkcontent> <pt> Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
23_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#save-a-model
.md
```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Once your model is fine-tuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]: ```py
23_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#save-a-model
.md
``` When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The `from_pt` or `from_tf` parameter can convert the model from one framework to the other: <frameworkcontent> <pt> ```py
23_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#save-a-model
.md
<frameworkcontent> <pt> ```py >>> from transformers import AutoModelForSequenceClassification, AutoTokenizer
23_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#save-a-model
.md
>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModelForSequenceClassification, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent>
23_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#custom-model-builds
.md
You can modify the model's configuration class to change how a model is built. The configuration specifies a model's attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class. The model attributes are randomly initialized, and you'll need to train the model before you can use it to get meaningful results.
23_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#custom-model-builds
.md
Start by importing [`AutoConfig`], and then load the pretrained model you want to modify. Within [`AutoConfig.from_pretrained`], you can specify the attribute you want to change, such as the number of attention heads: ```py >>> from transformers import AutoConfig
23_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#custom-model-builds
.md
>>> my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Create a model from your custom configuration with [`AutoModel.from_config`]: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Create a model from your custom configuration with [`TFAutoModel.from_config`]: ```py >>> from transformers import TFAutoModel
23_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#custom-model-builds
.md
>>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Take a look at the [Create a custom architecture](./create_a_model) guide for more information about building custom configurations.
23_8_3
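As a small aside (not part of the original guide), a configuration can be saved and reloaded with the same `save_pretrained`/`from_pretrained` pattern used for models and tokenizers; the directory name below is just an example:

```py
>>> my_config.save_pretrained("./custom-config")  # writes config.json to the directory
>>> my_config = AutoConfig.from_pretrained("./custom-config")
```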
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) so you can use them in any typical training loop. While you can write your own training loop, 🤗 Transformers provides a [`Trainer`] class for PyTorch, which contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more. Depending on your task, you'll typically pass the following parameters to [`Trainer`]:
23_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
Depending on your task, you'll typically pass the following parameters to [`Trainer`]: 1. You'll start with a [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module). Set `torch_dtype="auto"` to automatically load the most memory-efficient data type the weights are stored in. ```py >>> from transformers import AutoModelForSequenceClassification
23_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased", torch_dtype="auto") ``` 2. [`TrainingArguments`] contains the model hyperparameters you can change like learning rate, batch size, and the number of epochs to train for. The default values are used if you don't specify any training arguments: ```py >>> from transformers import TrainingArguments
23_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
>>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor: ```py >>> from transformers import AutoTokenizer
23_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 4. Load a dataset: ```py >>> from datasets import load_dataset
23_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
>>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. Create a function to tokenize the dataset: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` Then apply it over the entire dataset with [`~datasets.Dataset.map`]: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. A [`DataCollatorWithPadding`] to create a batch of examples from your dataset: ```py >>> from transformers import DataCollatorWithPadding
23_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` Now gather all these classes in [`Trainer`]: ```py >>> from transformers import Trainer
23_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
>>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... processing_class=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` When you're ready, call [`~Trainer.train`] to start training: ```py >>> trainer.train() # doctest: +SKIP ``` <Tip>
23_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
``` When you're ready, call [`~Trainer.train`] to start training: ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> For tasks - like translation or summarization - that use a sequence-to-sequence model, use the [`Seq2SeqTrainer`] and [`Seq2SeqTrainingArguments`] classes instead. </Tip>
23_9_8
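As a rough sketch (the model in this quick tour is a classifier, so assume instead a sequence-to-sequence model, dataset, tokenizer, and data collator prepared the same way), the swap looks like this; `predict_with_generate` is a commonly used option for generation-based evaluation:

```py
>>> from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

>>> seq2seq_args = Seq2SeqTrainingArguments(
...     output_dir="path/to/save/folder/",
...     predict_with_generate=True,  # use generate() during evaluation
... )
>>> seq2seq_trainer = Seq2SeqTrainer(
...     model=model,  # assumed to be a seq2seq model such as a translation checkpoint
...     args=seq2seq_args,
...     train_dataset=dataset["train"],
...     eval_dataset=dataset["test"],
...     processing_class=tokenizer,
...     data_collator=data_collator,
... )  # doctest: +SKIP
```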
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
</Tip> You can customize the training loop behavior by subclassing the methods inside [`Trainer`]. This allows you to customize features such as the loss function, optimizer, and scheduler. Take a look at the [`Trainer`] reference for which methods can be subclassed.
23_9_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#trainer---a-pytorch-optimized-training-loop
.md
The other way to customize the training loop is by using [Callbacks](./main_classes/callback). You can use callbacks to integrate with other libraries and inspect the training loop to report on progress or stop the training early. Callbacks do not modify anything in the training loop itself. To customize something like the loss function, you need to subclass the [`Trainer`] instead.
23_9_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#train-with-tensorflow
.md
All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so they can be trained in TensorFlow with the [Keras](https://keras.io/) API. 🤗 Transformers provides the [`~TFPreTrainedModel.prepare_tf_dataset`] method to easily load your dataset as a `tf.data.Dataset` so you can start training right away with Keras' [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)
23_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#train-with-tensorflow
.md
and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) methods.
23_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#train-with-tensorflow
.md
1. You'll start with a [`TFPreTrainedModel`] or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model): ```py >>> from transformers import TFAutoModelForSequenceClassification
23_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#train-with-tensorflow
.md
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` 2. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor: ```py >>> from transformers import AutoTokenizer
23_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#train-with-tensorflow
.md
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 3. Create a function to tokenize the dataset: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. Apply the tokenizer over the entire dataset with [`~datasets.Dataset.map`] and then pass the dataset and tokenizer to [`~TFPreTrainedModel.prepare_tf_dataset`]. You can also change the batch size and shuffle the dataset here if you'd like: ```py
23_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#train-with-tensorflow
.md
```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. When you're ready, you can call `compile` and `fit` to start training. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: ```py >>> from tensorflow.keras.optimizers import Adam
23_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#train-with-tensorflow
.md
>>> model.compile(optimizer='adam') # No loss argument! >>> model.fit(tf_dataset) # doctest: +SKIP ```
23_10_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quicktour.md
https://huggingface.co/docs/transformers/en/quicktour/#whats-next
.md
Now that you've completed the 🤗 Transformers quick tour, check out our guides and learn how to do more specific things like writing a custom model, fine-tuning a model for a task, and how to train a model with a script. If you're interested in learning more about 🤗 Transformers core concepts, grab a cup of coffee and take a look at our Conceptual Guides!
23_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
24_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
24_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
The [`Trainer`] is a complete training and evaluation loop for PyTorch models implemented in the Transformers library. You only need to pass it the necessary pieces for training (model, tokenizer, dataset, evaluation function, training hyperparameters, etc.), and the [`Trainer`] class takes care of the rest. This lets you start training faster without manually writing your own training loop. But at the same time, [`Trainer`] is very customizable and offers a ton of training options so you can
24_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
your own training loop. But at the same time, [`Trainer`] is very customizable and offers a ton of training options so you can tailor it to your exact training needs.
24_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
<Tip>
24_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
In addition to the [`Trainer`] class, Transformers also provides a [`Seq2SeqTrainer`] class for sequence-to-sequence tasks like translation or summarization. There is also the [`~trl.SFTTrainer`] class from the [TRL](https://hf.co/docs/trl) library which wraps the [`Trainer`] class and is optimized for training language models like Llama-2 and Mistral with autoregressive techniques. [`~trl.SFTTrainer`] also supports features like sequence packing, LoRA, quantization, and DeepSpeed for efficiently scaling
24_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
[`~trl.SFTTrainer`] also supports features like sequence packing, LoRA, quantization, and DeepSpeed for efficiently scaling to any model size.
24_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
<br> Feel free to check out the [API reference](./main_classes/trainer) for these other [`Trainer`]-type classes to learn more about when to use which one. In general, [`Trainer`] is the most versatile option and is appropriate for a broad spectrum of tasks. [`Seq2SeqTrainer`] is designed for sequence-to-sequence tasks and [`~trl.SFTTrainer`] is designed for training language models. </Tip>
24_1_5
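For a concrete picture, here is a minimal, hedged sketch of [`~trl.SFTTrainer`] usage that mirrors the TRL API used in the GaLore examples later in this guide; the model and dataset choices are purely illustrative:

```python
import datasets
import trl
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

# illustrative dataset and model
train_dataset = datasets.load_dataset("imdb", split="train")
model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

args = TrainingArguments(output_dir="./test-sft", max_steps=100, per_device_train_batch_size=2)

trainer = trl.SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```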
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
</Tip> Before you start, make sure [Accelerate](https://hf.co/docs/accelerate) - a library for enabling and running PyTorch training across distributed environments - is installed. ```bash pip install accelerate
24_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#trainer
.md
# upgrade pip install accelerate --upgrade ``` This guide provides an overview of the [`Trainer`] class.
24_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#basic-usage
.md
[`Trainer`] includes all the code you'll find in a basic training loop: 1. perform a training step to calculate the loss 2. calculate the gradients with the [`~accelerate.Accelerator.backward`] method 3. update the weights based on the gradients 4. repeat this process until you've reached a predetermined number of epochs
24_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#basic-usage
.md
3. update the weights based on the gradients 4. repeat this process until you've reached a predetermined number of epochs The [`Trainer`] class abstracts all of this code away so you don't have to worry about manually writing a training loop every time or if you're just getting started with PyTorch and training. You only need to provide the essential components required for training, such as a model and a dataset, and the [`Trainer`] class handles everything else.
24_2_1
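For intuition, the loop that [`Trainer`] abstracts away looks roughly like the sketch below; `model`, `optimizer`, and `train_dataloader` are placeholders for objects you would otherwise have to build yourself:

```py
num_epochs = 2
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)  # 1. training step computes the loss
        loss = outputs.loss
        loss.backward()           # 2. calculate the gradients
        optimizer.step()          # 3. update the weights
        optimizer.zero_grad()
```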
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#basic-usage
.md
If you want to specify any training options or hyperparameters, you can find them in the [`TrainingArguments`] class. For example, let's define where to save the model in `output_dir` and push the model to the Hub after training with `push_to_hub=True`. ```py from transformers import TrainingArguments
24_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#basic-usage
.md
training_args = TrainingArguments( output_dir="your-model", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=2, weight_decay=0.01, eval_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, push_to_hub=True, ) ```
24_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#basic-usage
.md
weight_decay=0.01, eval_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, push_to_hub=True, ) ``` Pass `training_args` to the [`Trainer`] along with a model, dataset, something to preprocess the dataset with (depending on your data type it could be a tokenizer, feature extractor or image processor), a data collator, and a function to compute the metrics you want to track during training. Finally, call [`~Trainer.train`] to start training! ```py from transformers import Trainer
24_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#basic-usage
.md
trainer = Trainer( model=model, args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["test"], processing_class=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) trainer.train() ```
24_2_5
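Since `push_to_hub=True` was set in [`TrainingArguments`], you can also upload the final model (along with the processing class and a model card) once training completes:

```py
trainer.push_to_hub()
```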
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#checkpoints
.md
The [`Trainer`] class saves your model checkpoints to the directory specified in the `output_dir` parameter of [`TrainingArguments`]. You'll find the checkpoints saved in a `checkpoint-000` subfolder where the numbers at the end correspond to the training step. Saving checkpoints is useful for resuming training later. ```py # resume from latest checkpoint trainer.train(resume_from_checkpoint=True)
24_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#checkpoints
.md
# resume from specific checkpoint saved in output directory trainer.train(resume_from_checkpoint="your-model/checkpoint-1000") ```
24_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#checkpoints
.md
trainer.train(resume_from_checkpoint="your-model/checkpoint-1000") ``` You can save your checkpoints (the optimizer state is not saved by default) to the Hub by setting `push_to_hub=True` in [`TrainingArguments`] to commit and push them. Other options for deciding how your checkpoints are saved are configured with the [`hub_strategy`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.hub_strategy) parameter:
24_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#checkpoints
.md
* `hub_strategy="checkpoint"` pushes the latest checkpoint to a subfolder named "last-checkpoint" from which you can resume training * `hub_strategy="all_checkpoints"` pushes all checkpoints to the directory defined in `output_dir` (you'll see one checkpoint per folder in your model repository)
24_3_3
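For example, a minimal sketch that pushes every checkpoint to the Hub (the output directory name is illustrative):

```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="your-model",
    push_to_hub=True,
    hub_strategy="all_checkpoints",  # or "checkpoint" to push only the latest one
)
```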
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#checkpoints
.md
When you resume training from a checkpoint, the [`Trainer`] tries to keep the Python, NumPy, and PyTorch RNG states the same as they were when the checkpoint was saved. But because PyTorch has various non-deterministic default settings, the RNG states aren't guaranteed to be the same. If you want to enable full determinism, take a look at the [Controlling sources of randomness](https://pytorch.org/docs/stable/notes/randomness#controlling-sources-of-randomness) guide to learn what you can enable to make
24_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#checkpoints
.md
guide to learn what you can enable to make your training fully deterministic. Keep in mind though that by making certain settings deterministic, training may be slower.
24_3_5
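As a brief illustration of the settings that guide describes (applied in your own script before training starts):

```py
import torch

torch.manual_seed(42)                     # seed the PyTorch RNG
torch.use_deterministic_algorithms(True)  # error out on non-deterministic ops
torch.backends.cudnn.benchmark = False    # disable non-deterministic cuDNN autotuning
# on CUDA, use_deterministic_algorithms(True) may also require setting the
# CUBLAS_WORKSPACE_CONFIG environment variable, as noted in the PyTorch guide
```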
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#customize-the-trainer
.md
While the [`Trainer`] class is designed to be accessible and easy to use, it also offers a lot of customizability for more adventurous users. Many of the [`Trainer`]'s methods can be subclassed and overridden to support the functionality you want, without having to rewrite the entire training loop from scratch to accommodate it. These methods include: * [`~Trainer.get_train_dataloader`] creates a training DataLoader * [`~Trainer.get_eval_dataloader`] creates an evaluation DataLoader
24_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#customize-the-trainer
.md
* [`~Trainer.get_eval_dataloader`] creates an evaluation DataLoader * [`~Trainer.get_test_dataloader`] creates a test DataLoader * [`~Trainer.log`] logs information on the various objects that watch training * [`~Trainer.create_optimizer_and_scheduler`] creates an optimizer and learning rate scheduler if they weren't passed in the `__init__`; these can also be separately customized with [`~Trainer.create_optimizer`] and [`~Trainer.create_scheduler`] respectively
24_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#customize-the-trainer
.md
* [`~Trainer.compute_loss`] computes the loss on a batch of training inputs * [`~Trainer.training_step`] performs the training step * [`~Trainer.prediction_step`] performs the prediction and test step * [`~Trainer.evaluate`] evaluates the model and returns the evaluation metrics * [`~Trainer.predict`] makes predictions (with metrics if labels are available) on the test set For example, you can customize the [`~Trainer.compute_loss`] method to use a weighted loss. ```py
24_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#customize-the-trainer
.md
For example, you can customize the [`~Trainer.compute_loss`] method to use a weighted loss. ```py import torch from torch import nn from transformers import Trainer
24_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#customize-the-trainer
.md
class CustomTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): labels = inputs.pop("labels") # forward pass outputs = model(**inputs) logits = outputs.get("logits") # compute custom loss for 3 labels with different weights loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device)) loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1)) return (loss, outputs) if return_outputs else loss ```
24_4_4
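The subclass then drops into the usual setup unchanged; a short sketch, reusing the objects from the basic usage example above:

```py
trainer = CustomTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    processing_class=tokenizer,
    data_collator=data_collator,
)
trainer.train()
```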
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#callbacks
.md
Another option for customizing the [`Trainer`] is to use [callbacks](callbacks). Callbacks *don't change* anything in the training loop. They inspect the training loop state and then execute some action (early stopping, logging results, etc.) depending on the state. In other words, a callback can't be used to implement something like a custom loss function and you'll need to subclass and override the [`~Trainer.compute_loss`] method for that.
24_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#callbacks
.md
For example, here's how to add an early stopping callback that stops training after 10 steps. ```py from transformers import TrainerCallback
24_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#callbacks
.md
class EarlyStoppingCallback(TrainerCallback): def __init__(self, num_steps=10): self.num_steps = num_steps def on_step_end(self, args, state, control, **kwargs): if state.global_step >= self.num_steps: control.should_training_stop = True return control ``` Then pass it to the [`Trainer`]'s `callbacks` parameter. ```py from transformers import Trainer
24_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#callbacks
.md
trainer = Trainer( model=model, args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["test"], processing_class=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, callbacks=[EarlyStoppingCallback()], ) ```
24_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
<Tip> Check out the [logging](./main_classes/logging) API reference for more information about the different logging levels. </Tip>
24_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
The [`Trainer`] is set to `logging.INFO` by default, which reports errors, warnings, and other basic information. A [`Trainer`] replica - in distributed environments - is set to `logging.WARNING`, which only reports errors and warnings. You can change the logging level with the [`log_level`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.log_level) and
24_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
the [`log_level`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.log_level) and [`log_level_replica`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.log_level_replica) parameters in [`TrainingArguments`].
24_6_2
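For example, a small sketch that keeps warnings on the main process but only reports errors on replicas:

```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="your-model",
    log_level="warning",        # main process
    log_level_replica="error",  # replicas in distributed environments
)
```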
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
To configure the log level setting for each node, use the [`log_on_each_node`](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments.log_on_each_node) parameter to determine whether to use the log level on each node or only on the main node. <Tip>
24_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
<Tip> [`Trainer`] sets the log level separately for each node in the [`Trainer.__init__`] method, so you may want to consider setting this sooner if you're using other Transformers functionalities before creating the [`Trainer`] object. </Tip> For example, to set your main code and modules to use the same log level according to each node: ```py import logging import sys import datasets import transformers logger = logging.getLogger(__name__)
24_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level)
24_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
trainer = Trainer(...) ``` Use different combinations of `log_level` and `log_level_replica` to configure what gets logged on each of the nodes. <hfoptions id="logging"> <hfoption id="single node"> ```bash my_app.py ... --log_level warning --log_level_replica error ``` </hfoption> <hfoption id="multi-node"> Add the `--log_on_each_node 0` parameter for multi-node environments. ```bash my_app.py ... --log_level warning --log_level_replica error --log_on_each_node 0
24_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#logging
.md
# set to only report errors my_app.py ... --log_level error --log_level_replica error --log_on_each_node 0 ``` </hfoption> </hfoptions>
24_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#neftune
.md
[NEFTune](https://hf.co/papers/2310.05914) is a technique that can improve performance by adding noise to the embedding vectors during training. To enable it in [`Trainer`], set the `neftune_noise_alpha` parameter in [`TrainingArguments`] to control how much noise is added. ```py from transformers import TrainingArguments, Trainer
24_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#neftune
.md
training_args = TrainingArguments(..., neftune_noise_alpha=0.1) trainer = Trainer(..., args=training_args) ``` NEFTune is disabled after training to restore the original embedding layer to avoid any unexpected behavior.
24_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#liger-kernel
.md
[Liger-Kernel](https://github.com/linkedin/Liger-Kernel) is a collection of Triton kernels developed by LinkedIn designed specifically for LLM training. It provides Hugging Face-compatible RMSNorm, RoPE, SwiGLU, CrossEntropy, FusedLinearCrossEntropy, and more to come. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. The kernel works out of the box with flash attention, PyTorch FSDP, and Microsoft DeepSpeed. <Tip>
24_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#liger-kernel
.md
<Tip> Gain +20% throughput and reduce memory usage by 60% when training the LLaMA 3-8B model, achieving longer context lengths and larger batch sizes. It's also useful if you want to scale up to multi-head training (such as Medusa) or large vocabulary sizes. See details and examples in [Liger](https://github.com/linkedin/Liger-Kernel/tree/main/examples). </Tip> First, install the Liger kernel package: ```bash pip install liger-kernel ```
24_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#liger-kernel
.md
</Tip> First, install the Liger kernel package: ```bash pip install liger-kernel ``` Pass `use_liger_kernel=True` to apply the Liger kernel to your model, for example: ```py from transformers import TrainingArguments
24_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#liger-kernel
.md
training_args = TrainingArguments( output_dir="your-model", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=2, weight_decay=0.01, eval_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, push_to_hub=True, use_liger_kernel=True ) ```
24_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#liger-kernel
.md
save_strategy="epoch", load_best_model_at_end=True, push_to_hub=True, use_liger_kernel=True ) ``` The kernel supports the Llama, Gemma, Mistral, and Mixtral model architectures. The most up-to-date list of supported models can be found [here](https://github.com/linkedin/Liger-Kernel). When `use_liger_kernel` is set to `True`, the corresponding layers in the original model will be patched with Liger's efficient implementation, so you don't need to do anything extra other than setting the argument value.
24_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#optimizers
.md
You can choose a built-in optimizer for training using: ```python from transformers import TrainingArguments training_args = TrainingArguments(..., optim="adamw_torch") ``` See [`OptimizerNames`](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py) for a full list of choices. We include advanced examples in the sections below. You can also use an arbitrary PyTorch optimizer via: ```python import torch
24_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#optimizers
.md
optimizer_cls = torch.optim.AdamW optimizer_kwargs = { "lr": 4e-3, "betas": (0.9, 0.999), "weight_decay": 0.05, } from transformers import Trainer trainer = Trainer(..., optimizer_cls_and_kwargs=(optimizer_cls, optimizer_kwargs)) ```
24_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
Gradient Low-Rank Projection (GaLore) is a low-rank training strategy that allows full-parameter learning while being more memory-efficient than common low-rank adaptation methods, such as LoRA. First, install the official GaLore package: ```bash pip install galore-torch ```
24_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
First, install the official GaLore package: ```bash pip install galore-torch ``` Then add one of `["galore_adamw", "galore_adafactor", "galore_adamw_8bit"]` to `optim`, together with `optim_target_modules`, which can be a list of strings, regexes, or full paths corresponding to the target module names you want to adapt. Below is an end-to-end example script (make sure to `pip install trl datasets`): ```python import torch import datasets import trl
24_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
from transformers import TrainingArguments, AutoConfig, AutoTokenizer, AutoModelForCausalLM train_dataset = datasets.load_dataset('imdb', split='train') args = TrainingArguments( output_dir="./test-galore", max_steps=100, per_device_train_batch_size=2, optim="galore_adamw", optim_target_modules=[r".*.attn.*", r".*.mlp.*"] ) model_id = "google/gemma-2b" config = AutoConfig.from_pretrained(model_id)
24_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
model_id = "google/gemma-2b" config = AutoConfig.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_config(config).to(0) trainer = trl.SFTTrainer( model=model, args=args, train_dataset=train_dataset, dataset_text_field='text', max_seq_length=512, ) trainer.train() ``` To pass extra arguments supported by GaLore, you should pass correctly `optim_args`, for example: ```python import torch import datasets import trl
24_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
from transformers import TrainingArguments, AutoConfig, AutoTokenizer, AutoModelForCausalLM train_dataset = datasets.load_dataset('imdb', split='train') args = TrainingArguments( output_dir="./test-galore", max_steps=100, per_device_train_batch_size=2, optim="galore_adamw", optim_target_modules=[r".*.attn.*", r".*.mlp.*"], optim_args="rank=64, update_proj_gap=100, scale=0.10", ) model_id = "google/gemma-2b" config = AutoConfig.from_pretrained(model_id)
24_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
model_id = "google/gemma-2b" config = AutoConfig.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_config(config).to(0) trainer = trl.SFTTrainer( model=model, args=args, train_dataset=train_dataset, dataset_text_field='text', max_seq_length=512, )
24_10_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
trainer.train() ``` You can read more about the method in the [original repository](https://github.com/jiaweizzhao/GaLore) or the [paper](https://arxiv.org/abs/2403.03507). Currently, only linear layers are treated as GaLore layers and trained with low-rank decomposition, while the remaining layers are optimized in the conventional manner.
24_10_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
Note that it will take a bit of time before training starts (~3 minutes for a 2B model on an NVIDIA A100), but training should go smoothly afterwards. You can also perform layer-wise optimization by appending `_layerwise` to the optimizer name, as shown below: ```python import torch import datasets import trl
24_10_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
from transformers import TrainingArguments, AutoConfig, AutoTokenizer, AutoModelForCausalLM train_dataset = datasets.load_dataset('imdb', split='train') args = TrainingArguments( output_dir="./test-galore", max_steps=100, per_device_train_batch_size=2, optim="galore_adamw_layerwise", optim_target_modules=[r".*.attn.*", r".*.mlp.*"] ) model_id = "google/gemma-2b" config = AutoConfig.from_pretrained(model_id)
24_10_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
model_id = "google/gemma-2b" config = AutoConfig.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_config(config).to(0) trainer = trl.SFTTrainer( model=model, args=args, train_dataset=train_dataset, dataset_text_field='text', max_seq_length=512, )
24_10_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
trainer.train() ```
24_10_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
Note that layerwise optimization is somewhat experimental and does not support DDP (Distributed Data Parallel), so you can only run the training script on a single GPU. Please see [this section](https://github.com/jiaweizzhao/GaLore?tab=readme-ov-file#train-7b-model-with-a-single-gpu-with-24gb-memory) for more details. Other features such as gradient clipping, DeepSpeed, etc. might not be supported out of the box. Please [raise an issue on GitHub](https://github.com/huggingface/transformers/issues) if
24_10_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#galore
.md
might not be supported out of the box. Please [raise an issue on GitHub](https://github.com/huggingface/transformers/issues) if you encounter such an issue.
24_10_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#lomo-optimizer
.md
The LOMO optimizers have been introduced in [Full Parameter Fine-Tuning for Large Language Models with Limited Resources](https://hf.co/papers/2306.09782) and [AdaLomo: Low-memory Optimization with Adaptive Learning Rate](https://hf.co/papers/2310.10195).
24_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#lomo-optimizer
.md
Both are efficient full-parameter fine-tuning methods. These optimizers fuse the gradient computation and the parameter update into one step to reduce memory usage. The supported LOMO optimizers are `"lomo"` and `"adalomo"`. First, either install LOMO from PyPI with `pip install lomo-optim` or install it from source with `pip install git+https://github.com/OpenLMLab/LOMO.git`. <Tip>
24_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#lomo-optimizer
.md
<Tip> According to the authors, it is recommended to use `AdaLomo` without `grad_norm` to get better performance and higher throughput. </Tip> Below is a simple script demonstrating how to fine-tune [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the IMDB dataset in full precision: ```python import torch import datasets from transformers import TrainingArguments, AutoTokenizer, AutoModelForCausalLM import trl
24_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/trainer.md
https://huggingface.co/docs/transformers/en/trainer/#lomo-optimizer
.md
train_dataset = datasets.load_dataset('imdb', split='train') args = TrainingArguments( output_dir="./test-lomo", max_steps=1000, per_device_train_batch_size=4, optim="adalomo", gradient_checkpointing=True, logging_strategy="steps", logging_steps=1, learning_rate=2e-6, save_strategy="no", run_name="lomo-imdb", ) model_id = "google/gemma-2b" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True).to(0)
24_11_3