```py
>>> food = food.with_transform(transforms)
```
Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing such as padding.
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
To avoid overfitting and to make the model more robust, add some data augmentation to the training part of the dataset.
Here we use Keras preprocessing layers to define the transformations for the training data (includes data augmentation),
and transformations for the validation data (only center cropping, resizing and normalizing). You can use `tf.image` or
any other library you prefer.
```py
>>> from tensorflow import keras
>>> from tensorflow.keras import layers

>>> size = (image_processor.size["height"], image_processor.size["width"])
>>> train_data_augmentation = keras.Sequential(
... [
... layers.RandomCrop(size[0], size[1]),
... layers.Rescaling(scale=1.0 / 127.5, offset=-1),
... layers.RandomFlip("horizontal"),
... layers.RandomRotation(factor=0.02),
... layers.RandomZoom(height_factor=0.2, width_factor=0.2),
... ],
... name="train_data_augmentation",
... )

>>> val_data_augmentation = keras.Sequential(
... [
... layers.CenterCrop(size[0], size[1]),
... layers.Rescaling(scale=1.0 / 127.5, offset=-1),
... ],
... name="val_data_augmentation",
... )
```
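If you would rather use `tf.image`, as mentioned above, a roughly equivalent set of training transforms might look like the following sketch (the resize-then-random-crop strategy and the exact margins are assumptions, not part of the original guide):

```py
>>> import tensorflow as tf

>>> def train_transforms_with_tf_image(image):
...     # image: a float tensor of shape (height, width, 3) with values in [0, 255]
...     image = tf.image.resize(image, [size[0] + 20, size[1] + 20])  # upscale slightly so the random crop has room
...     image = tf.image.random_crop(image, size=[size[0], size[1], 3])
...     image = tf.image.random_flip_left_right(image)
...     return image / 127.5 - 1.0  # rescale to [-1, 1], matching the Keras example above
```

The rest of this guide keeps using the Keras preprocessing layers defined above.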
Next, create functions to apply appropriate transformations to a batch of images, instead of one image at a time.
```py
>>> import numpy as np
>>> import tensorflow as tf
>>> from PIL import Image

>>> def convert_to_tf_tensor(image: Image):
... np_image = np.array(image)
... tf_image = tf.convert_to_tensor(np_image)
... # `expand_dims()` is used to add a batch dimension since
... # the TF augmentation layers operate on batched inputs.
... return tf.expand_dims(tf_image, 0)

>>> def preprocess_train(example_batch):
... """Apply train_transforms across a batch."""
... images = [
... train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
... ]
... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
... return example_batch

>>> def preprocess_val(example_batch):
... """Apply val_transforms across a batch."""
... images = [
... val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
... ]
... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
... return example_batch
```
Use 🤗 Datasets [`~datasets.Dataset.set_transform`] to apply the transformations on the fly:
```py
food["train"].set_transform(preprocess_train) | 88_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md | https://huggingface.co/docs/transformers/en/tasks/image_classification/#preprocess | .md | ```py
food["train"].set_transform(preprocess_train)
food["test"].set_transform(preprocess_val)
```
As a final preprocessing step, create a batch of examples using `DefaultDataCollator`. Unlike other data collators in 🤗 Transformers, the
`DefaultDataCollator` does not apply additional preprocessing, such as padding.
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an
evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load
the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:
```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)
```
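As a quick sanity check with made-up values (not from the guide), the function returns the metric in a dictionary:

```py
>>> dummy_logits = np.array([[0.1, 0.9], [0.8, 0.2]])  # fake model outputs for two examples
>>> dummy_labels = np.array([1, 0])
>>> compute_metrics((dummy_logits, dummy_labels))
{'accuracy': 1.0}
```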
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train

<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load ViT with [`AutoModelForImageClassification`]. Specify the number of expected labels along with the label mappings:
```py
>>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer

>>> model = AutoModelForImageClassification.from_pretrained(
... checkpoint,
... num_labels=len(labels),
... id2label=id2label,
... label2id=label2id,
... )
```
At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. It is important you don't remove unused columns because that'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, image processor, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_food_model",
... remove_unused_columns=False,
... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... gradient_accumulation_steps=4,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... )

>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=food["train"],
... eval_dataset=food["test"],
... processing_class=image_processor,
... compute_metrics=compute_metrics,
... )

>>> trainer.train()
```
Once training is completed, share your model on the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
<Tip>
If you are unfamiliar with fine-tuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first!
</Tip>
To fine-tune a model in TensorFlow, follow these steps:
1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pre-trained model.
3. Convert a 🤗 Dataset to a `tf.data.Dataset`.
4. Compile your model.
5. Add callbacks and use the `fit()` method to run the training.
6. Upload your model to 🤗 Hub to share with the community.
Start by defining the hyperparameters, optimizer and learning rate schedule:
```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 5
>>> num_train_steps = len(food["train"]) * num_epochs
>>> learning_rate = 3e-5
>>> weight_decay_rate = 0.01

>>> optimizer, lr_schedule = create_optimizer(
... init_lr=learning_rate,
... num_train_steps=num_train_steps,
... weight_decay_rate=weight_decay_rate,
... num_warmup_steps=0,
... )
```
Then, load ViT with [`TFAutoModelForImageClassification`] along with the label mappings:
```py
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained(
... checkpoint,
... id2label=id2label,
... label2id=label2id,
... )
```
Convert your datasets to the `tf.data.Dataset` format using the [`~datasets.Dataset.to_tf_dataset`] method and your `data_collator`:
```py
>>> # converting our train dataset to tf.data.Dataset
>>> tf_train_dataset = food["train"].to_tf_dataset(
... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )

>>> # converting our test dataset to tf.data.Dataset
>>> tf_eval_dataset = food["test"].to_tf_dataset(
... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
```
Configure the model for training with `compile()`:
```py
>>> from tensorflow.keras.losses import SparseCategoricalCrossentropy

>>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer=optimizer, loss=loss)
```
To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback),
and use the [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback) to upload the model:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="food_classifier",
... tokenizer=image_processor,
... save_strategy="no",
... )
>>> callbacks = [metric_callback, push_to_hub_callback]
```
Finally, you are ready to train your model! Call `fit()` with your training and validation datasets, the number of epochs,
and your callbacks to fine-tune the model:
```py
>>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks)
Epoch 1/5
250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290
Epoch 2/5
250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690
Epoch 3/5
250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820
Epoch 4/5
250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900
Epoch 5/5
250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890
```
Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for image classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
</Tip>

## Inference

Great, now that you've fine-tuned a model, you can use it for inference!
Load an image you'd like to run inference on:
```py
>>> ds = load_dataset("food101", split="validation[:10]")
>>> image = ds["image"][0]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png" alt="image of beignets"/>
</div>
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for image classification with your model, and pass your image to it:
```py
>>> from transformers import pipeline

>>> classifier = pipeline("image-classification", model="my_awesome_food_model")
>>> classifier(image)
[{'score': 0.31856709718704224, 'label': 'beignets'},
{'score': 0.015232225880026817, 'label': 'bruschetta'},
{'score': 0.01519392803311348, 'label': 'chicken_wings'},
{'score': 0.013022331520915031, 'label': 'pork_chop'},
{'score': 0.012728818692266941, 'label': 'prime_rib'}]
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Load an image processor to preprocess the image and return the inputs as PyTorch tensors:
```py
>>> from transformers import AutoImageProcessor
>>> import torch

>>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
>>> inputs = image_processor(image, return_tensors="pt")
```
Pass your inputs to the model and return the logits:
```py
>>> from transformers import AutoModelForImageClassification

>>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
Get the predicted label with the highest probability, and use the model's `id2label` mapping to convert it to a label:
```py
>>> predicted_label = logits.argmax(-1).item()
>>> model.config.id2label[predicted_label]
'beignets'
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
Load an image processor to preprocess the image and return the inputs as TensorFlow tensors:
```py
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier")
>>> inputs = image_processor(image, return_tensors="tf")
```
Pass your inputs to the model and return the logits:
```py
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier")
>>> logits = model(**inputs).logits
```
Get the predicted label with the highest probability, and use the model's `id2label` mapping to convert it to a label:
```py
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'beignets'
```
</tf>
</frameworkcontent>

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 89_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/monocular_depth_estimation.md | https://huggingface.co/docs/transformers/en/tasks/monocular_depth_estimation/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Monocular depth estimation

Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a
single image. In other words, it is the process of estimating the distance of objects in a scene from
a single camera viewpoint.
Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving,
and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects
in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions,
occlusion, and texture.
There are two main depth estimation categories:
- **Absolute depth estimation**: This task variant aims to provide exact depth measurements from the camera. The term is used interchangeably with metric depth estimation, where depth is provided in precise measurements in meters or feet. Absolute depth estimation models output depth maps with numerical values that represent real-world distances. | 89_1_2 |
- **Relative depth estimation**: Relative depth estimation aims to predict the depth order of objects or points in a scene without providing the precise measurements. These models output a depth map that indicates which parts of the scene are closer or farther relative to each other, without the actual distances between them.
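To make the distinction concrete, here is a small illustrative sketch (toy values, not from the original guide) contrasting the two kinds of output:

```py
>>> import numpy as np

>>> # An "absolute" (metric) depth map: each value is a real-world distance in meters.
>>> absolute_depth = np.array([[1.5, 2.0], [4.0, 8.0]])

>>> # A "relative" depth map only preserves ordering; after min-max normalization the metric scale is lost.
>>> (absolute_depth - absolute_depth.min()) / (absolute_depth.max() - absolute_depth.min())
array([[0.        , 0.07692308],
       [0.38461538, 1.        ]])
```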
In this guide, we will see how to infer with [Depth Anything V2](https://huggingface.co/depth-anything/Depth-Anything-V2-Large), a state-of-the-art zero-shot relative depth estimation model, and [ZoeDepth](https://huggingface.co/docs/transformers/main/en/model_doc/zoedepth), an absolute depth estimation model.
<Tip>
Check the [Depth Estimation](https://huggingface.co/tasks/depth-estimation) task page to view all compatible architectures and checkpoints.
</Tip>
Before we begin, we need to install the latest version of Transformers:
```bash
pip install -q -U transformers
```

## Depth estimation pipeline

The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [`pipeline`].
Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads):
```py
>>> from transformers import pipeline
>>> import torch
>>> from accelerate.test_utils.testing import get_backend
# automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
>>> device, _, _ = get_backend()
>>> checkpoint = "depth-anything/Depth-Anything-V2-base-hf"
>>> pipe = pipeline("depth-estimation", model=checkpoint, device=device)
```
Next, choose an image to analyze:
```py
>>> from PIL import Image
>>> import requests

>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg" alt="Photo of a bee"/>
</div>
Pass the image to the pipeline.
```py
>>> predictions = pipe(image)
```
The pipeline returns a dictionary with two entries. The first one, called `predicted_depth`, is a tensor whose values are the depth, expressed in meters, for each pixel.
The second one, `depth`, is a PIL image that visualizes the depth estimation result.
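If you want to work with the raw values rather than the visualization, you can inspect the first entry directly (a quick illustrative check; the exact output resolution depends on the checkpoint):

```py
>>> predictions["predicted_depth"].shape  # a torch tensor with one depth value per pixel
```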
Let's take a look at the visualized result:
```py
>>> predictions["depth"]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/>
</div>

## Depth estimation inference by hand

Now that you've seen how to use the depth estimation pipeline, let's see how we can replicate the same result by hand.
Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads).
Here we'll use the [ZoeDepth](https://huggingface.co/docs/transformers/main/en/model_doc/zoedepth) absolute depth estimation model mentioned earlier:
```py
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
>>> checkpoint = "Intel/zoedepth-nyu-kitti"

>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint).to(device)
```
Prepare the image input for the model using the `image_processor` that will take care of the necessary image transformations
such as resizing and normalization:
```py
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values.to(device)
```
Pass the prepared inputs through the model:
```py
>>> import torch

>>> with torch.no_grad():
... outputs = model(pixel_values)
```
Let's post-process the results to remove any padding and resize the depth map to match the original image size. The `post_process_depth_estimation` outputs a list of dicts containing the `"predicted_depth"`.
```py
>>> # ZoeDepth dynamically pads the input image. Thus we pass the original image size as argument
>>> # to `post_process_depth_estimation` to remove the padding and resize to original dimensions.
>>> post_processed_output = image_processor.post_process_depth_estimation(
... outputs,
... source_sizes=[(image.height, image.width)],
... )

>>> predicted_depth = post_processed_output[0]["predicted_depth"]
>>> depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
>>> depth = depth.detach().cpu().numpy() * 255
>>> depth = Image.fromarray(depth.astype("uint8"))
```
<Tip>
<p>In the <a href="https://github.com/isl-org/ZoeDepth/blob/edb6daf45458569e24f50250ef1ed08c015f17a7/zoedepth/models/depth_model.py#L131">original implementation</a> ZoeDepth model performs inference on both the original and flipped images and averages out the results. The <code>post_process_depth_estimation</code> function can handle this for us by passing the flipped outputs to the optional <code>outputs_flipped</code> argument:</p> | 89_3_5 |
<pre><code class="language-Python">>>> with torch.no_grad():
... outputs = model(pixel_values)
... outputs_flipped = model(pixel_values=torch.flip(pixel_values, dims=[3]))
>>> post_processed_output = image_processor.post_process_depth_estimation(
... outputs,
... source_sizes=[(image.height, image.width)],
... outputs_flipped=outputs_flipped,
... )
</code></pre>
</Tip>
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization-zoe.png" alt="Depth estimation visualization"/>
</div>

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 90_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/multiple_choice.md | https://huggingface.co/docs/transformers/en/tasks/multiple_choice/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Multiple choice

[[open-in-colab]]
A multiple choice task is similar to question answering, except several candidate answers are provided along with a context and the model is trained to select the correct answer.
This guide will show you how to:
1. Finetune [BERT](https://huggingface.co/google-bert/bert-base-uncased) on the `regular` configuration of the [SWAG](https://huggingface.co/datasets/swag) dataset to select the best answer given multiple options and some context.
2. Use your finetuned model for inference.
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SWAG dataset

Start by loading the `regular` configuration of the SWAG dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset

>>> swag = load_dataset("swag", "regular")
```
Then take a look at an example:
```py
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}
```
While it looks like there are a lot of fields here, it is actually pretty straightforward:
- `sent1` and `sent2`: these fields show how a sentence starts, and if you put the two together, you get the `startphrase` field.
- `ending0`, `ending1`, `ending2`, `ending3`: each suggests a possible way the sentence can end, but only one of them is correct.
- `label`: identifies the correct sentence ending.
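As a quick sanity check (illustrative, not part of the original guide), you can verify how the fields fit together for this example:

```py
>>> example = swag["train"][0]
>>> example["startphrase"] == example["sent1"] + " " + example["sent2"]
True
>>> example["ending" + str(example["label"])]  # the ending selected by `label`
'passes by walking down the street playing their instruments.'
```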
## Preprocess

The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:
```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
```
The preprocessing function you want to create needs to:
1. Make four copies of the `sent1` field and combine each of them with `sent2` to recreate how a sentence starts.
2. Combine `sent2` with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding `input_ids`, `attention_mask`, and `labels` field.
```py
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]

>>> def preprocess_function(examples):
... first_sentences = [[context] * 4 for context in examples["sent1"]]
... question_headers = examples["sent2"]
... second_sentences = [
... [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
... ]
... first_sentences = sum(first_sentences, [])
... second_sentences = sum(second_sentences, [])

... tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
... return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```
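The slicing in the final `return` statement is what regroups the flat tokenized list back into groups of four candidates per example. A toy illustration of that idiom:

```py
>>> flat = ["a0", "a1", "a2", "a3", "b0", "b1", "b2", "b3"]
>>> [flat[i : i + 4] for i in range(0, len(flat), 4)]
[['a0', 'a1', 'a2', 'a3'], ['b0', 'b1', 'b2', 'b3']]
```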
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```py
tokenized_swag = swag.map(preprocess_function, batched=True)
```
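Each mapped example now stores its four candidate sequences together. For instance:

```py
>>> len(tokenized_swag["train"][0]["input_ids"])  # one tokenized sequence per candidate ending
4
```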
🤗 Transformers doesn't have a data collator for multiple choice, so you'll need to adapt the [`DataCollatorWithPadding`] to create a batch of examples. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
`DataCollatorForMultipleChoice` flattens all the model inputs, applies padding, and then unflattens the results:
<frameworkcontent>
<pt>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch

>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None

... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])

... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="pt",
... )

... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
... batch["labels"] = torch.tensor(labels, dtype=torch.int64)
... return batch
```
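As a quick check (illustrative, and assuming you keep only the tokenized fields, since the collator isn't designed to handle the raw string columns), you can collate two examples and inspect the shape of the result:

```py
>>> collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> keep = {"input_ids", "attention_mask", "token_type_ids", "label"}
>>> features = [{k: v for k, v in tokenized_swag["train"][i].items() if k in keep} for i in range(2)]
>>> batch = collator(features)
>>> batch["input_ids"].shape  # (batch_size, num_choices, longest sequence in these two examples)
```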
</pt>
<tf>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf

>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None

... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])

... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="tf",
... )
... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
... return batch
```
</tf>
</frameworkcontent>

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:
```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)
```
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train

<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load BERT with [`AutoModelForMultipleChoice`]:
```py
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer

>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_swag_model",
... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )

>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_swag["train"],
... eval_dataset=tokenized_swag["validation"],
... processing_class=tokenizer,
... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
... compute_metrics=compute_metrics,
... )

>>> trainer.train()
```
Once training is completed, share your model on the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```
Then you can load BERT with [`TFAutoModelForMultipleChoice`]:
```py
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_swag["train"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```