## LOMO optimizer

```python
trainer = trl.SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    dataset_text_field='text',
    max_seq_length=1024,
)
trainer.train()
```

## GrokAdamW optimizer

The GrokAdamW optimizer is designed to enhance training performance and stability, particularly for models that benefit from grokking signal functions. To use GrokAdamW, first install the optimizer package with `pip install grokadamw`.
<Tip>
GrokAdamW is particularly useful for models that require advanced optimization techniques to achieve better performance and stability.
</Tip>
Below is a simple script to demonstrate how to fine-tune [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the IMDB dataset using the GrokAdamW optimizer:
```python
import torch
import datasets
from transformers import TrainingArguments, AutoTokenizer, AutoModelForCausalLM, Trainer

# Load the IMDB dataset
train_dataset = datasets.load_dataset('imdb', split='train')
# Define the training arguments
args = TrainingArguments(
    output_dir="./test-grokadamw",
    max_steps=1000,
    per_device_train_batch_size=4,
    optim="grokadamw",
    logging_strategy="steps",
    logging_steps=1,
    learning_rate=2e-5,
    save_strategy="no",
    run_name="grokadamw-imdb",
)

# Load the model and tokenizer
model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True).to(0)

# Initialize the Trainer
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
)
# Train the model
trainer.train()
```
This script demonstrates how to fine-tune the `google/gemma-2b` model on the IMDB dataset using the GrokAdamW optimizer. The `TrainingArguments` are configured to use GrokAdamW, and the dataset is passed to the `Trainer` for training.
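Note that the script above passes the raw IMDB split to the `Trainer` as-is; in practice you would typically tokenize the text into model-ready features first. A minimal sketch, assuming the `tokenizer` and `train_dataset` defined above:

```python
# Tokenize the raw text so the Trainer receives input_ids rather than strings
# (a sketch; "text" is the IMDB dataset's text column).
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_dataset = train_dataset.map(tokenize, batched=True, remove_columns=["text"])
```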

## Schedule Free optimizer

The Schedule Free optimizers have been introduced in [The Road Less Scheduled](https://hf.co/papers/2405.15682).
Schedule-Free learning replaces the momentum of the base optimizer with a combination of averaging and interpolation, to completely remove the need to anneal the learning rate with a traditional schedule.
Supported optimizers for SFO are `"schedule_free_adamw"` and `"schedule_free_sgd"`. First install the schedulefree package from PyPI with `pip install schedulefree`.
Below is a simple script to demonstrate how to fine-tune [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the IMDB dataset in full precision:
```python
import torch
import datasets
from transformers import TrainingArguments, AutoTokenizer, AutoModelForCausalLM
import trl

train_dataset = datasets.load_dataset('imdb', split='train')
args = TrainingArguments(
    output_dir="./test-schedulefree",
    max_steps=1000,
    per_device_train_batch_size=4,
    optim="schedule_free_adamw",
    gradient_checkpointing=True,
    logging_strategy="steps",
    logging_steps=1,
    learning_rate=2e-6,
    save_strategy="no",
    run_name="sfo-imdb",
)
model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True).to(0)

trainer = trl.SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    dataset_text_field='text',
    max_seq_length=1024,
)
trainer.train()
```
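The `Trainer` integration above manages the optimizer's mode switches for you. If you drive schedulefree from your own PyTorch loop instead, the package requires explicit switches between train and eval modes; a minimal sketch with hypothetical `model` and `dataloader` objects:

```python
import schedulefree

optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=2e-6)
optimizer.train()  # schedule-free optimizers must be put in train mode before stepping
for batch in dataloader:
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
optimizer.eval()  # switch to eval mode before evaluation or checkpointing
```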

## Accelerate and Trainer

The [`Trainer`] class is powered by [Accelerate](https://hf.co/docs/accelerate), a library for easily training PyTorch models in distributed environments with support for integrations such as [FullyShardedDataParallel (FSDP)](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/) and [DeepSpeed](https://www.deepspeed.ai/).
<Tip>
Learn more about FSDP sharding strategies, CPU offloading, and more with the [`Trainer`] in the [Fully Sharded Data Parallel](fsdp) guide.
</Tip>
To use Accelerate with [`Trainer`], run the [`accelerate config`](https://huggingface.co/docs/accelerate/package_reference/cli#accelerate-config) command to set up training for your training environment. This command creates a `config_file.yaml` that'll be used when you launch your training script. For example, some of the configurations you can set up are:
<hfoptions id="config">
<hfoption id="DistributedDataParallel">
```yml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0 # change rank as per the node
main_process_ip: 192.168.20.1
main_process_port: 9898
main_training_function: main
mixed_precision: fp16
num_machines: 2
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
</hfoption>
<hfoption id="FSDP">
```yml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_forward_prefetch: true
  fsdp_offload_params: false
  fsdp_sharding_strategy: 1
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_transformer_layer_cls_to_wrap: BertLayer
  fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
</hfoption>
<hfoption id="DeepSpeed">
```yml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_config_file: /home/user/configs/ds_zero3_config.json
  zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
</hfoption>
<hfoption id="DeepSpeed with Accelerate plugin"> | 24_14_5 |
```yml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 0.7
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
</hfoption>
</hfoptions>
The [`accelerate launch`](https://huggingface.co/docs/accelerate/package_reference/cli#accelerate-launch) command is the recommended way to launch your training script on a distributed system with Accelerate and [`Trainer`], using the parameters specified in `config_file.yaml`. This file is saved to the Accelerate cache folder and automatically loaded when you run `accelerate launch`.
For example, to run the [run_glue.py](https://github.com/huggingface/transformers/blob/f4db565b695582891e43a5e042e5d318e28f20b8/examples/pytorch/text-classification/run_glue.py#L4) training script with the FSDP configuration:
```bash
accelerate launch \
    ./examples/pytorch/text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-base-cased \
    --task_name $TASK_NAME \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 16 \
    --learning_rate 5e-5 \
    --num_train_epochs 3 \
    --output_dir /tmp/$TASK_NAME/ \
    --overwrite_output_dir
```
You could also specify the parameters from the `config_file.yaml` file directly in the command line:
```bash
accelerate launch --num_processes=2 \
    --use_fsdp \
    --mixed_precision=bf16 \
    --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \
    --fsdp_transformer_layer_cls_to_wrap="BertLayer" \
    --fsdp_sharding_strategy=1 \
    --fsdp_state_dict_type=FULL_STATE_DICT \
    ./examples/pytorch/text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-base-cased \
    --task_name $TASK_NAME \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 16 \
    --learning_rate 5e-5 \
    --num_train_epochs 3 \
    --output_dir /tmp/$TASK_NAME/ \
    --overwrite_output_dir
```
Check out the [Launching your Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch) tutorial to learn more about `accelerate launch` and custom configurations.

# Train with a script

Along with the 🤗 Transformers [notebooks](./notebooks), there are also example scripts demonstrating how to train a model for a task with [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).
You will also find scripts we've used in our [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) and [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy) which are mostly community contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.
The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.
For any feature you'd like to implement in an example script, please discuss it on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability.
This guide will show you how to run an example summarization training script in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). All examples are expected to work with both frameworks unless otherwise specified.

## Setup

To successfully run the latest version of the example scripts, you have to **install 🤗 Transformers from source** in a new virtual environment:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
For older versions of the example scripts, click on the toggle below:
<details>
<summary>Examples for older versions of 🤗 Transformers</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> | 25_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#setup | .md | <ul>
<li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> | 25_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#setup | .md | <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> | 25_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#setup | .md | <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> | 25_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#setup | .md | <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> | 25_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#setup | .md | <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> | 25_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#setup | .md | <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> | 25_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#setup | .md | <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li>
</ul>
</details>
Then switch your current clone of 🤗 Transformers to a specific version, like v3.5.1 for example:
```bash
git checkout tags/v3.5.1
```
After you've set up the correct library version, navigate to the example folder of your choice and install the example-specific requirements:
```bash
pip install -r requirements.txt
```

## Run a script

<frameworkcontent>
<pt> | 25_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#run-a-script | .md | The example script downloads and preprocesses a dataset from the π€ [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset with the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/google-t5/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional | 25_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#run-a-script | .md | on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task. | 25_3_2 |
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#run-a-script | .md | The example script downloads and preprocesses a dataset from the π€ [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset using Keras on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/google-t5/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 | 25_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/run_scripts.md | https://huggingface.co/docs/transformers/en/run_scripts/#run-a-script | .md | dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task. | 25_3_5 |
```bash
python examples/tensorflow/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Distributed training and mixed precision

The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use it in a script. To enable both of these features:
- Add the `fp16` or `bf16` argument to enable mixed precision. XPU devices only support `bf16` for mixed precision training.
- Set the number of GPUs to use with the `nproc_per_node` argument.
```bash
torchrun \
    --nproc_per_node 8 pytorch/summarization/run_summarization.py \
    --fp16 \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
``` | 25_4_1 |
TensorFlow scripts utilize a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available.

## Run a script on a TPU

<frameworkcontent>
<pt>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [here](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use.
```bash
python xla_spawn.py --num_cores 8 \
    summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) for training on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument.
```bash
python run_summarization.py \
    --tpu name_of_tpu_resource \
    --model_name_or_path google-t5/t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Run a script with 🤗 Accelerate

🤗 [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don't already have it:
> Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts
```bash
pip install git+https://github.com/huggingface/accelerate
```
Instead of the `run_summarization.py` script, you need to use the `run_summarization_no_trainer.py` script. 🤗 Accelerate-supported scripts will have a `task_no_trainer.py` file in the folder. Begin by running the following command to create and save a configuration file:
```bash
accelerate config
```
Test your setup to make sure it is configured correctly:
```bash
accelerate test
```
Now you are ready to launch the training:
```bash
accelerate launch run_summarization_no_trainer.py \
    --model_name_or_path google-t5/t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

## Use a custom dataset

The summarization script supports custom datasets as long as they are a CSV or JSON Lines file. When you use your own dataset, you need to specify several additional arguments:
- `train_file` and `validation_file` specify the path to your training and validation files.
- `text_column` is the input text to summarize.
- `summary_column` is the target text to output.
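For reference, each line of a JSON Lines training file could look like the following (hypothetical column names; they must match the `text_column` and `summary_column` arguments):

```
{"text": "The full text of the article to summarize ...", "summary": "A short reference summary."}
```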
A summarization script using a custom dataset would look like this:
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --train_file path_to_csv_or_jsonlines_file \
    --validation_file path_to_csv_or_jsonlines_file \
    --text_column text_column_name \
    --summary_column summary_column_name \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --overwrite_output_dir \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --predict_with_generate
```

## Test a script

It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset, which may take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples:
- `max_train_samples`
- `max_eval_samples`
- `max_predict_samples`
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --max_train_samples 50 \
    --max_eval_samples 50 \
    --max_predict_samples 50 \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
Not all example scripts support the `max_predict_samples` argument. If you aren't sure whether your script supports this argument, add the `-h` argument to check:
```bash
python examples/pytorch/summarization/run_summarization.py -h
```

## Resume training from checkpoint

Another helpful option to enable is resuming training from a previous checkpoint. This will ensure you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.
The first method uses the `output_dir previous_output_dir` argument to resume training from the latest checkpoint stored in `output_dir`. In this case, you should remove `overwrite_output_dir`:
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --output_dir previous_output_dir \
    --predict_with_generate
```
The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder.
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --resume_from_checkpoint path_to_specific_checkpoint \
    --predict_with_generate
```

## Share your model

All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged into Hugging Face before you begin:
```bash
huggingface-cli login
```
Then add the `push_to_hub` argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in `output_dir`.
To give your repository a specific name, use the `push_to_hub_model_id` argument to specify it. The repository will be automatically listed under your namespace.
The following example shows how to upload a model with a specific repository name:
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --push_to_hub \
    --push_to_hub_model_id finetuned-t5-cnn_dailymail \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

# Building custom models

The 🤗 Transformers library is designed to be easily extensible. Every model is fully coded in a given subfolder
of the repository with no abstraction, so you can easily copy a modeling file and tweak it to your needs.
If you are writing a brand new model, it might be easier to start from scratch. In this tutorial, we will show you
how to write a custom model and its configuration so it can be used inside Transformers, and how you can share it
with the community (with the code it relies on) so that anyone can use it, even if it's not present in the 🤗
Transformers library. We'll see how to build upon transformers and extend the framework with your hooks and
custom code.
We will illustrate all of this on a ResNet model, by wrapping the ResNet class of the
[timm library](https://github.com/rwightman/pytorch-image-models) into a [`PreTrainedModel`].

## Writing a custom configuration

Before we dive into the model, let's first write its configuration. The configuration of a model is an object that
will contain all the necessary information to build the model. As we will see in the next section, the model can only
take a `config` to be initialized, so we really need that object to be as complete as possible.
<Tip>
Models in the `transformers` library itself generally follow the convention that they accept a `config` object
in their `__init__` method, and then pass the whole `config` to sub-layers in the model, rather than breaking the
config object into multiple arguments that are all passed individually to sub-layers. Writing your model in this
style results in simpler code with a clear "source of truth" for any hyperparameters, and also makes it easier
to reuse code from other models in `transformers`.
</Tip>
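For instance, the convention looks like this in practice (a minimal sketch with hypothetical layer and attribute names):

```py
import torch.nn as nn


class MyBlock(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)


class MyModel(nn.Module):
    def __init__(self, config):
        super().__init__()
        # pass the whole config down instead of unpacking individual arguments
        self.block = MyBlock(config)
```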
In our example, we will take a couple of arguments of the ResNet class that we might want to tweak. Different
configurations will then give us the different types of ResNets that are possible. We then just store those arguments,
after checking the validity of a few of them.
```python
from transformers import PretrainedConfig
from typing import List


class ResnetConfig(PretrainedConfig):
    model_type = "resnet"

    def __init__(
        self,
        block_type="bottleneck",
        layers: List[int] = [3, 4, 6, 3],
        num_classes: int = 1000,
        input_channels: int = 3,
        cardinality: int = 1,
        base_width: int = 64,
        stem_width: int = 64,
        stem_type: str = "",
        avg_down: bool = False,
        **kwargs,
    ):
        if block_type not in ["basic", "bottleneck"]:
            raise ValueError(f"`block_type` must be 'basic' or 'bottleneck', got {block_type}.")
        if stem_type not in ["", "deep", "deep-tiered"]:
            raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.")

        self.block_type = block_type
        self.layers = layers
        self.num_classes = num_classes
        self.input_channels = input_channels
        self.cardinality = cardinality
        self.base_width = base_width
        self.stem_width = stem_width
        self.stem_type = stem_type
        self.avg_down = avg_down
        super().__init__(**kwargs)
```
The three important things to remember when writing your own configuration are the following:
- you have to inherit from `PretrainedConfig`,
- the `__init__` of your `PretrainedConfig` must accept any kwargs,
- those `kwargs` need to be passed to the superclass `__init__`.
The inheritance is to make sure you get all the functionality from the 🤗 Transformers library, while the two other
constraints come from the fact that a `PretrainedConfig` has more fields than the ones you are setting. When reloading a
config with the `from_pretrained` method, those fields need to be accepted by your config and then sent to the
superclass.
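For example, options handled by `PretrainedConfig` itself travel through those `kwargs`; a config that dropped them would break. A minimal illustration, assuming the `ResnetConfig` defined above:

```py
# output_hidden_states is consumed and stored by the PretrainedConfig superclass,
# so it must flow through **kwargs and super().__init__() untouched.
config = ResnetConfig(block_type="bottleneck", output_hidden_states=True)
print(config.output_hidden_states)  # True
```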
Defining a `model_type` for your configuration (here `model_type="resnet"`) is not mandatory, unless you want to
register your model with the auto classes (see last section).
With this done, you can easily create and save your configuration like you would do with any other model config of the
library. Here is how we can create a resnet50d config and save it:
```py
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d_config.save_pretrained("custom-resnet")
```
This will save a file named `config.json` inside the folder `custom-resnet`. You can then reload your config with the
`from_pretrained` method:
```py
resnet50d_config = ResnetConfig.from_pretrained("custom-resnet")
```
You can also use any other method of the [`PretrainedConfig`] class, like [`~PretrainedConfig.push_to_hub`] to
directly upload your config to the Hub.
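For instance, a one-line sketch (assuming you are logged in; `custom-resnet` is a hypothetical repository name):

```py
resnet50d_config.push_to_hub("custom-resnet")
```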

## Writing a custom model

Now that we have our ResNet configuration, we can go on writing the model. We will actually write two: one that
extracts the hidden features from a batch of images (like [`BertModel`]) and one that is suitable for image
classification (like [`BertForSequenceClassification`]).
As we mentioned before, we'll only write a loose wrapper of the model to keep it simple for this example. The only
thing we need to do before writing this class is a map between the block types and actual block classes. Then the
model is defined from the configuration by passing everything to the `ResNet` class:
```py
from transformers import PreTrainedModel
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from .configuration_resnet import ResnetConfig


BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}


class ResnetModel(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor):
        return self.model.forward_features(tensor)
```
For the model that will classify images, we just change the forward method:
```py
import torch


class ResnetModelForImageClassification(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
```
In both cases, notice how we inherit from `PreTrainedModel` and call the superclass initialization with the `config`
(a bit like when you write a regular `torch.nn.Module`). The line that sets the `config_class` is not mandatory, unless
you want to register your model with the auto classes (see last section).
<Tip>
If your model is very similar to a model inside the library, you can re-use the same configuration as this model.
</Tip>
You can have your model return anything you want, but returning a dictionary like we did for
`ResnetModelForImageClassification`, with the loss included when labels are passed, will make your model directly
usable inside the [`Trainer`] class. Using another output format is fine as long as you are planning on using your own
training loop or another library for training.
Now that we have our model class, let's create one:
```py
resnet50d = ResnetModelForImageClassification(resnet50d_config)
```
Again, you can use any of the methods of [`PreTrainedModel`], like [`~PreTrainedModel.save_pretrained`] or
[`~PreTrainedModel.push_to_hub`]. We will use the second in the next section, and see how to push the model weights
with the code of our model. But first, let's load some pretrained weights inside our model.
In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial,
we will use the pretrained version of the resnet50d. Since our model is just a wrapper around it, it's going to be
easy to transfer those weights:
```py
import timm

pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```
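With the weights transferred, a quick forward pass is a good sanity check (a minimal sketch; the tensor is a dummy image batch):

```py
import torch

inputs = torch.randn(1, 3, 224, 224)
outputs = resnet50d(inputs, labels=torch.tensor([0]))
print(outputs["loss"], outputs["logits"].shape)  # scalar loss, torch.Size([1, 1000])
```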
Now let's see how to make sure that when we do [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`], the
code of the model is saved.

## Registering a model with custom code to the auto classes

If you are writing a library that extends 🤗 Transformers, you may want to extend the auto classes to include your own
model. This is different from pushing the code to the Hub in the sense that users will need to import your library to
get the custom models (contrarily to automatically downloading the model code from the Hub).
As long as your config has a `model_type` attribute that is different from existing model types, and that your model
classes have the right `config_class` attributes, you can just add them to the auto classes like this:
```py
from transformers import AutoConfig, AutoModel, AutoModelForImageClassification

AutoConfig.register("resnet", ResnetConfig)
AutoModel.register(ResnetConfig, ResnetModel)
AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification)
```
Note that the first argument used when registering your custom config to [`AutoConfig`] needs to match the `model_type`
of your custom config, and the first argument used when registering your custom models to any auto model class needs
to match the `config_class` of those models.
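Once registered, the auto classes resolve your custom types like any built-in model. A minimal sketch, reusing the `custom-resnet` folder saved earlier:

```py
config = AutoConfig.from_pretrained("custom-resnet")
model = AutoModelForImageClassification.from_config(config)
```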

## Sending the code to the Hub

<Tip warning={true}>
This API is experimental and may have some slight breaking changes in the next releases.
</Tip>
First, make sure your model is fully defined in a `.py` file. It can rely on relative imports to some other files as
long as all the files are in the same directory (we don't support submodules for this feature yet). For our example,
we'll define a `modeling_resnet.py` file and a `configuration_resnet.py` file in a folder of the current working
directory named `resnet_model`. The configuration file contains the code for `ResnetConfig` and the modeling file
contains the code of `ResnetModel` and `ResnetModelForImageClassification`.
```
.
└── resnet_model
    ├── __init__.py
    ├── configuration_resnet.py
    └── modeling_resnet.py
```