Columns: source (string, 470 distinct values) · url (string, 49-167 chars) · file_type (string, 1 value) · chunk (string, 1-512 chars) · chunk_id (string, 5-9 chars)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#preprocess
.md
... if total_length >= block_size: ... total_length = (total_length // block_size) * block_size ... # Split by chunks of block_size. ... result = { ... k: [t[i : i + block_size] for i in range(0, total_length, block_size)] ... for k, t in concatenated_examples.items() ... } ... return result ``` Apply the `group_texts` function over the entire dataset: ```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ```
93_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#preprocess
.md
```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ``` Now create a batch of examples using [`DataCollatorForLanguageModeling`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. <frameworkcontent> <pt> Use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data: ```py
93_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#preprocess
.md
```py >>> from transformers import DataCollatorForLanguageModeling
93_3_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#preprocess
.md
>>> tokenizer.pad_token = tokenizer.eos_token >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15) ``` </pt> <tf> Use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data: ```py >>> from transformers import DataCollatorForLanguageModeling
93_3_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#preprocess
.md
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf") ``` </tf> </frameworkcontent>
93_3_15
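To see the effect of the random masking, you can call the collator directly on a couple of grouped examples. This is a quick sanity check rather than part of the original guide; it assumes the PyTorch collator defined above and that `lm_dataset` only contains token-level columns such as `input_ids` and `attention_mask`:

```py
>>> samples = [lm_dataset["train"][i] for i in range(2)]
>>> batch = data_collator(samples)

>>> # Masked positions keep their true token id in `labels`; every other position is
>>> # set to -100 so it is ignored by the loss.
>>> print(batch["input_ids"].shape, (batch["labels"] != -100).sum().item())
>>> print(tokenizer.decode(batch["input_ids"][0]))  # masked tokens show up as <mask>
```

Because the masking is sampled on the fly, rerunning these `print` calls gives a different set of masked positions each time.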
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
<frameworkcontent> <pt> <Tip> If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! </Tip> You're ready to start training your model now! Load DistilRoBERTa with [`AutoModelForMaskedLM`]: ```py >>> from transformers import AutoModelForMaskedLM
93_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> model = AutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base") ``` At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). 2. Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator.
93_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
2. Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator. 3. Call [`~Trainer.train`] to finetune your model. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_eli5_mlm_model", ... eval_strategy="epoch", ... learning_rate=2e-5, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... )
93_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=lm_dataset["train"], ... eval_dataset=lm_dataset["test"], ... data_collator=data_collator, ... tokenizer=tokenizer, ... ) >>> trainer.train() ``` Once training is completed, use the [`~transformers.Trainer.evaluate`] method to evaluate your model and get its perplexity (the exponential of the evaluation cross-entropy loss, which is why `math.exp` is used below): ```py >>> import math
93_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> eval_results = trainer.evaluate() >>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") Perplexity: 8.76 ``` Then share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! </Tip>
93_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
</Tip> To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: ```py >>> from transformers import create_optimizer, AdamWeightDecay
93_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` Then you can load DistilRoBERTa with [`TFAutoModelForMaskedLM`]: ```py >>> from transformers import TFAutoModelForMaskedLM
93_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> model = TFAutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base") ``` Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: ```py >>> tf_train_set = model.prepare_tf_dataset( ... lm_dataset["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... )
93_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> tf_test_set = model.prepare_tf_dataset( ... lm_dataset["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: ```py >>> import tensorflow as tf
93_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> model.compile(optimizer=optimizer) # No loss argument! ``` Before you start training, set up a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: ```py >>> from transformers.keras_callbacks import PushToHubCallback
93_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
>>> callback = PushToHubCallback( ... output_dir="my_awesome_eli5_mlm_model", ... tokenizer=tokenizer, ... ) ``` Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ```
93_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ``` Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! </tf> </frameworkcontent> <Tip> For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)
93_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#train
.md
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). </Tip>
93_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like the model to fill in the blank with, and use the special `<mask>` token to indicate the blank: ```py >>> text = "The Milky Way is a <mask> galaxy." ```
93_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
```py >>> text = "The Milky Way is a <mask> galaxy." ``` The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for fill-mask with your model, and pass your text to it. If you like, you can use the `top_k` parameter to specify how many predictions to return: ```py >>> from transformers import pipeline
93_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
>>> mask_filler = pipeline("fill-mask", "username/my_awesome_eli5_mlm_model") >>> mask_filler(text, top_k=3) [{'score': 0.5150994658470154, 'token': 21300, 'token_str': ' spiral', 'sequence': 'The Milky Way is a spiral galaxy.'}, {'score': 0.07087188959121704, 'token': 2232, 'token_str': ' massive', 'sequence': 'The Milky Way is a massive galaxy.'}, {'score': 0.06434620916843414, 'token': 650, 'token_str': ' small', 'sequence': 'The Milky Way is a small galaxy.'}] ``` <frameworkcontent> <pt>
93_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
'token': 650, 'token_str': ' small', 'sequence': 'The Milky Way is a small galaxy.'}] ``` <frameworkcontent> <pt> Tokenize the text and return the `input_ids` as PyTorch tensors. You'll also need to specify the position of the `<mask>` token: ```py >>> from transformers import AutoTokenizer
93_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
>>> tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_mlm_model") >>> inputs = tokenizer(text, return_tensors="pt") >>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1] ``` Pass your inputs to the model and return the `logits` of the masked token: ```py >>> from transformers import AutoModelForMaskedLM
93_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
>>> model = AutoModelForMaskedLM.from_pretrained("username/my_awesome_eli5_mlm_model") >>> logits = model(**inputs).logits >>> mask_token_logits = logits[0, mask_token_index, :] ``` Then return the three masked tokens with the highest probability and print them out: ```py >>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()
93_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
>>> for token in top_3_tokens: ... print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy. ``` </pt> <tf> Tokenize the text and return the `input_ids` as TensorFlow tensors. You'll also need to specify the position of the `<mask>` token: ```py >>> from transformers import AutoTokenizer
93_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
>>> tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_mlm_model") >>> inputs = tokenizer(text, return_tensors="tf") >>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1] ``` Pass your inputs to the model and return the `logits` of the masked token: ```py >>> from transformers import TFAutoModelForMaskedLM
93_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
>>> model = TFAutoModelForMaskedLM.from_pretrained("username/my_awesome_eli5_mlm_model") >>> logits = model(**inputs).logits >>> mask_token_logits = logits[0, mask_token_index, :] ``` Then return the three masked tokens with the highest probability and print them out: ```py >>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()
93_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/masked_language_modeling.md
https://huggingface.co/docs/transformers/en/tasks/masked_language_modeling/#inference
.md
>>> for token in top_3_tokens: ... print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy. ``` </tf> </frameworkcontent>
93_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
94_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
94_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
[[open-in-colab]] Image-to-image is the task where an application receives an image and outputs another image. It has various subtasks, including image enhancement (super resolution, low-light enhancement, deraining, and so on), image inpainting, and more. This guide will show you how to: - Use an image-to-image pipeline for a super resolution task, - Run image-to-image models for the same task without a pipeline.
94_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
- Use an image-to-image pipeline for a super resolution task, - Run image-to-image models for the same task without a pipeline. Note that, as of the time this guide was released, the `image-to-image` pipeline only supports the super resolution task. Let's begin by installing the necessary libraries. ```bash pip install transformers ```
94_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
Let's begin by installing the necessary libraries. ```bash pip install transformers ``` We can now initialize the pipeline with a [Swin2SR model](https://huggingface.co/caidas/swin2SR-lightweight-x2-64). We can then infer with the pipeline by calling it with an image. As of now, only [Swin2SR models](https://huggingface.co/models?sort=trending&search=swin2sr) are supported in this pipeline. ```python from transformers import pipeline import torch from accelerate.test_utils.testing import get_backend
94_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
```python from transformers import pipeline import torch from accelerate.test_utils.testing import get_backend # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.) device, _, _ = get_backend() pipe = pipeline(task="image-to-image", model="caidas/swin2SR-lightweight-x2-64", device=device) ``` Now, let's load an image. ```python from PIL import Image import requests
94_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg" image = Image.open(requests.get(url, stream=True).raw)
94_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
print(image.size) ``` ```bash # (532, 432) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg" alt="Photo of a cat"/> </div> We can now do inference with the pipeline. We will get an upscaled version of the cat image. ```python upscaled = pipe(image) print(upscaled.size) ``` ```bash # (1072, 880) ```
94_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
```python upscaled = pipe(image) print(upscaled.size) ``` ```bash # (1072, 880) ``` The result is slightly larger than exactly 2x the input because the image processor pads the input to a block size before upscaling. If you wish to run inference yourself without the pipeline, you can use the `Swin2SRForImageSuperResolution` and `Swin2SRImageProcessor` classes of Transformers. We will use the same model checkpoint for this. Let's initialize the model and the processor. ```python from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor
94_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-lightweight-x2-64").to(device) processor = Swin2SRImageProcessor.from_pretrained("caidas/swin2SR-lightweight-x2-64") ``` `pipeline` abstracts away the preprocessing and postprocessing steps that we have to do ourselves, so let's preprocess the image. We will pass the image to the processor and then move the pixel values to the device. ```python pixel_values = processor(image, return_tensors="pt").pixel_values print(pixel_values.shape)
94_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
pixel_values = pixel_values.to(device) ``` We can now infer the image by passing pixel values to the model. ```python import torch
94_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
with torch.no_grad(): outputs = model(pixel_values) ``` Output is an object of type `ImageSuperResolutionOutput` that looks like below 👇 ``` (loss=None, reconstruction=tensor([[[[0.8270, 0.8269, 0.8275, ..., 0.7463, 0.7446, 0.7453], [0.8287, 0.8278, 0.8283, ..., 0.7451, 0.7448, 0.7457], [0.8280, 0.8273, 0.8269, ..., 0.7447, 0.7446, 0.7452], ..., [0.5923, 0.5933, 0.5924, ..., 0.0697, 0.0695, 0.0706], [0.5926, 0.5932, 0.5926, ..., 0.0673, 0.0687, 0.0705],
94_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
..., [0.5923, 0.5933, 0.5924, ..., 0.0697, 0.0695, 0.0706], [0.5926, 0.5932, 0.5926, ..., 0.0673, 0.0687, 0.0705], [0.5927, 0.5914, 0.5922, ..., 0.0664, 0.0694, 0.0718]]]], device='cuda:0'), hidden_states=None, attentions=None) ``` We need to get the `reconstruction` and post-process it for visualization. Let's see what it looks like. ```python outputs.reconstruction.data.shape # torch.Size([1, 3, 880, 1072]) ```
94_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
```python outputs.reconstruction.data.shape # torch.Size([1, 3, 880, 1072]) ``` We need to squeeze the output to get rid of axis 0, clamp the values, and convert the tensor to a NumPy array. Then we will move the channel axis to the end, giving an array of shape [880, 1072, 3], and finally bring the values back to the [0, 255] pixel range. ```python import numpy as np
94_1_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
# squeeze, take to CPU and clip the values output = outputs.reconstruction.data.squeeze().cpu().clamp_(0, 1).numpy() # rearrange the axes output = np.moveaxis(output, source=0, destination=-1) # bring values back to pixel values range output = (output * 255.0).round().astype(np.uint8) Image.fromarray(output) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat_upscaled.png" alt="Upscaled photo of a cat"/>
94_1_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_to_image.md
https://huggingface.co/docs/transformers/en/tasks/image_to_image/#image-to-image-task-guide
.md
</div>
94_1_13
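Putting the manual steps together, a small helper function can reproduce what the pipeline does end to end. This is a sketch based on the code above, not part of the original guide:

```python
import numpy as np
import torch
from PIL import Image


def upscale(image: Image.Image, model, processor, device) -> Image.Image:
    # preprocess: PIL image -> padded, rescaled pixel values on the right device
    pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)
    # forward pass without gradients
    with torch.no_grad():
        reconstruction = model(pixel_values).reconstruction
    # postprocess: clamp to [0, 1], move channels last, rescale to [0, 255]
    output = reconstruction.squeeze().cpu().clamp_(0, 1).numpy()
    output = np.moveaxis(output, source=0, destination=-1)
    return Image.fromarray((output * 255.0).round().astype(np.uint8))


upscaled_image = upscale(image, model, processor, device)
```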
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
95_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
95_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#video-classification
.md
[[open-in-colab]]
95_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#video-classification
.md
Video classification is the task of assigning a label or class to an entire video. Each video is expected to have only one class. Video classification models take a video as input and return a prediction about which class the video belongs to. These models can be used to categorize what a video is about. A real-world application of video classification is action / activity recognition, which is useful for fitness applications. It is also helpful for vision-impaired individuals, especially
95_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#video-classification
.md
activity recognition, which is useful for fitness applications. It is also helpful for vision-impaired individuals, especially when they are commuting.
95_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#video-classification
.md
This guide will show you how to: 1. Fine-tune [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) on a subset of the [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) dataset. 2. Use your fine-tuned model for inference. <Tip> To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/video-classification). </Tip> Before you begin, make sure you have all the necessary libraries installed:
95_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#video-classification
.md
</Tip> Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q pytorchvideo transformers evaluate ``` You will use [PyTorchVideo](https://pytorchvideo.org/) (dubbed `pytorchvideo`) to process and prepare the videos. We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: ```py >>> from huggingface_hub import notebook_login
95_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#video-classification
.md
>>> notebook_login() ```
95_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
Start by loading a subset of the [UCF-101 dataset](https://www.crcv.ucf.edu/data/UCF101.php). This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset. ```py >>> from huggingface_hub import hf_hub_download
95_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
>>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` After the subset has been downloaded, you need to extract the compressed archive: ```py >>> import tarfile
95_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
>>> with tarfile.open(file_path) as t: ... t.extractall(".") ``` At a high level, the dataset is organized like so: ```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` You can then count the number of total videos. ```py >>> import pathlib
95_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
... Archery video_1.mp4 video_2.mp4 ... ... ``` You can then count the number of total videos. ```py >>> import pathlib >>> dataset_root_path = "UCF101_subset" >>> dataset_root_path = pathlib.Path(dataset_root_path) ``` ```py >>> video_count_train = len(list(dataset_root_path.glob("train/*/*.avi"))) >>> video_count_val = len(list(dataset_root_path.glob("val/*/*.avi"))) >>> video_count_test = len(list(dataset_root_path.glob("test/*/*.avi")))
95_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
>>> video_count_test = len(list(dataset_root_path.glob("test/*/*.avi"))) >>> video_total = video_count_train + video_count_val + video_count_test >>> print(f"Total videos: {video_total}") ``` ```py >>> all_video_file_paths = ( ... list(dataset_root_path.glob("train/*/*.avi")) ... + list(dataset_root_path.glob("val/*/*.avi")) ... + list(dataset_root_path.glob("test/*/*.avi")) ... ) >>> all_video_file_paths[:5] ``` The (`sorted`) video paths appear like so: ```bash ...
95_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
... ) >>> all_video_file_paths[:5] ``` The (`sorted`) video paths appear like so: ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ```
95_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ``` You will notice that some video clips belong to the same group / scene, where the group is denoted by `g` in the video file paths: `v_ApplyEyeMakeup_g07_c04.avi` and `v_ApplyEyeMakeup_g07_c06.avi`, for example.
95_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
For the validation and evaluation splits, you wouldn't want video clips from the same group / scene as the training split, in order to prevent [data leakage](https://www.kaggle.com/code/alexisbcook/data-leakage). The subset that you are using in this tutorial takes this information into account (a quick way to verify this yourself is shown a little further below). Next up, you will derive the set of labels present in the dataset. Also, create two dictionaries that'll be helpful when initializing the model: * `label2id`: maps the class names to integers.
95_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
* `label2id`: maps the class names to integers. * `id2label`: maps the integers to class names. ```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()}
95_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-ucf101-dataset
.md
>>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` There are 10 unique classes. For each class, there are 30 videos in the training set.
95_2_9
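Both of these properties (no group / scene shared across splits, and 30 training videos per class) can be sanity-checked directly from the file paths. This is a hypothetical check, not part of the original guide; it assumes the folder layout shown above:

```py
>>> from collections import Counter, defaultdict

>>> groups_per_split = defaultdict(set)
>>> for path in all_video_file_paths:
...     split, label = path.parts[1], path.parts[2]  # e.g. "train", "ApplyEyeMakeup"
...     group = path.stem.split("_")[-2]  # e.g. "g07" from "v_ApplyEyeMakeup_g07_c04"
...     groups_per_split[split].add((label, group))

>>> # An empty intersection means no scene is shared between two splits.
>>> print(groups_per_split["train"] & groups_per_split["val"])
>>> print(groups_per_split["train"] & groups_per_split["test"])

>>> # Number of training videos per class.
>>> print(Counter(path.parts[2] for path in dataset_root_path.glob("train/*/*.avi")))
```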
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-a-model-to-fine-tune
.md
Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model's encoder comes with pre-trained parameters, and the classification head is randomly initialized. The image processor will come in handy when writing the preprocessing pipeline for our dataset. ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
95_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-a-model-to-fine-tune
.md
>>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... ) ``` While the model is loading, you might notice the following warning: ```bash
95_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-a-model-to-fine-tune
.md
... ) ``` While the model is loading, you might notice the following warning: ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
95_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-a-model-to-fine-tune
.md
- This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
95_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-a-model-to-fine-tune
.md
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```
95_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-a-model-to-fine-tune
.md
``` The warning is telling us we are throwing away some weights (e.g. the weights of the pretraining decoder layers) and randomly initializing some others (the weights and bias of a new `classifier` layer). This is expected in this case, because we are adding a new head for which we don't have pretrained weights, so the library warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do.
95_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#load-a-model-to-fine-tune
.md
**Note** that [this checkpoint](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) leads to better performance on this task, as it was obtained by fine-tuning on a similar downstream task with considerable domain overlap. You can check out [this checkpoint](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset), which was obtained by fine-tuning `MCG-NJU/videomae-base-finetuned-kinetics`.
95_3_6
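If you'd rather start from the Kinetics-finetuned checkpoint mentioned above, the loading call stays the same apart from the checkpoint name. This is a sketch, not part of the original guide; `ignore_mismatched_sizes=True` is what lets the Kinetics classification head be replaced by a freshly initialized head for our 10 classes:

```py
>>> model = VideoMAEForVideoClassification.from_pretrained(
...     "MCG-NJU/videomae-base-finetuned-kinetics",
...     label2id=label2id,
...     id2label=id2label,
...     ignore_mismatched_sizes=True,  # the checkpoint's classifier has a different number of classes
... )
```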
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
For preprocessing the videos, you will leverage the [PyTorchVideo library](https://pytorchvideo.org/). Start by importing the dependencies we need. ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... )
95_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
>>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... ) ```
95_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
... RandomHorizontalFlip, ... Resize, ... ) ``` For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping. To learn more about the details of these transformations check out the [official documentation of PyTorchVideo](https://pytorchvideo.org).
95_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
Use the `image_processor` associated with the pre-trained model to obtain the following information: * Image mean and standard deviation with which the video frame pixels will be normalized. * Spatial resolution to which the video frames will be resized. Start by defining some constants. ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else:
95_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
>>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... width = image_processor.size["width"] >>> resize_to = (height, width)
95_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
>>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample),
95_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... )
95_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
>>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` The same sequence of workflow can be applied to the validation and evaluation sets: ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose(
95_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... )
95_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
>>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... )
95_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
>>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) ```
95_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
**Note**: The above dataset pipelines are taken from the [official PyTorchVideo example](https://pytorchvideo.org/docs/tutorial_classification#dataset). We're using the [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) function because it's tailored for the UCF-101 dataset. Under the hood, it returns a
95_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
function because it's tailored for the UCF-101 dataset. Under the hood, it returns a [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) object. The `LabeledVideoDataset` class is the base class for all video datasets in PyTorchVideo. So, if you want to use a custom dataset not supported off-the-shelf by PyTorchVideo, you can extend the `LabeledVideoDataset` class accordingly. Refer to
95_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
dataset not supported off-the-shelf by PyTorchVideo, you can extend the `LabeledVideoDataset` class accordingly. Refer to the `data` API [documentation](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) to learn more. Also, if your dataset follows a similar structure (as shown above), then using `pytorchvideo.data.Ucf101()` should work just fine.
95_4_13
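As a rough illustration of that last point, for a custom dataset with the same `root/split/class/video` layout you could build the `(video_path, annotation)` pairs yourself and hand them to `LabeledVideoDataset`. This is a hedged sketch, not part of the original guide; it assumes `LabeledVideoDataset` (importable from `pytorchvideo.data`) accepts a list of `(path, {"label": ...})` tuples together with a clip sampler and a transform:

```py
>>> import pathlib
>>> from pytorchvideo.data import LabeledVideoDataset, make_clip_sampler

>>> def build_custom_video_dataset(root, split, transform, clip_duration, label2id):
...     """Build a LabeledVideoDataset for one split of a root/split/class/video.avi layout."""
...     labeled_video_paths = [
...         (str(path), {"label": label2id[path.parts[-2]]})
...         for path in pathlib.Path(root).glob(f"{split}/*/*.avi")
...     ]
...     return LabeledVideoDataset(
...         labeled_video_paths,
...         make_clip_sampler("random", clip_duration),
...         transform=transform,
...         decode_audio=False,
...     )

>>> custom_train_dataset = build_custom_video_dataset(
...     "UCF101_subset", "train", train_transform, clip_duration, label2id
... )
```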
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#prepare-the-datasets-for-training
.md
You can access the `num_videos` attribute to get the number of videos in each dataset. ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ```
95_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#visualize-the-preprocessed-video-for-better-debugging
.md
```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... """Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).clip(0, 255) ... return img.astype("uint8")
95_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#visualize-the-preprocessed-video-for-better-debugging
.md
>>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25}
95_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#visualize-the-preprocessed-video-for-better-debugging
.md
... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename
95_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#visualize-the-preprocessed-video-for-better-debugging
.md
>>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... return Image(filename=gif_filename)
95_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#visualize-the-preprocessed-video-for-better-debugging
.md
>>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div>
95_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
Leverage [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) from 🤗 Transformers for training the model. To instantiate a `Trainer`, you need to define the training configuration and an evaluation metric. The most important is the [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), which is a class that contains all the attributes to configure the training. It requires an output folder name, which will be used to save
95_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
class that contains all the attributes to configure the training. It requires an output folder name, which will be used to save the checkpoints of the model. It also helps sync all the information in the model repository on 🤗 Hub.
95_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
Most of the training arguments are self-explanatory, but one that is quite important here is `remove_unused_columns=False`. When enabled, this flag drops any features not used by the model's call function. It defaults to `True` because usually it's ideal to drop unused feature columns, making it easier to unpack inputs into the model's call function. But, in this case, you need the unused features ('video' in particular) in order to create `pixel_values`, which is a mandatory key the model expects in its inputs; the collate function sketched below shows how 'video' is turned into `pixel_values`.
95_6_2
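To make that concrete, the data collator passed to the `Trainer` is what turns the 'video' feature into `pixel_values`. Below is a minimal sketch of such a collate function; it assumes each `pytorchvideo` sample is a dict with `"video"` (a tensor of shape `(channels, num_frames, height, width)`) and `"label"` keys, which is what `pytorchvideo.data.Ucf101()` produces:

```py
>>> import torch

>>> def collate_fn(examples):
...     """Stack sampled clips into the (batch, num_frames, channels, height, width) layout VideoMAE expects."""
...     pixel_values = torch.stack(
...         [example["video"].permute(1, 0, 2, 3) for example in examples]
...     )
...     labels = torch.tensor([example["label"] for example in examples])
...     return {"pixel_values": pixel_values, "labels": labels}
```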
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
```py >>> from transformers import TrainingArguments, Trainer
95_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
>>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4
95_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
>>> args = TrainingArguments( ... new_model_name, ... remove_unused_columns=False, ... eval_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs,
95_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... ) ``` The dataset returned by `pytorchvideo.data.Ucf101()` doesn't implement the `__len__` method. As such, we must define `max_steps` when instantiating `TrainingArguments`. Next, you need to define a function to compute the metrics from the predictions, which will use the `metric` you'll load now. The only preprocessing you have to do is to take the argmax of our predicted logits: ```py
95_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
```py import evaluate
95_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
metric = evaluate.load("accuracy")
95_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **A note on evaluation**:
95_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_classification.md
https://huggingface.co/docs/transformers/en/tasks/video_classification/#train-the-model
.md
return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **A note on evaluation**: In the [VideoMAE paper](https://arxiv.org/abs/2203.12602), the authors use the following evaluation strategy: they evaluate the model on several clips sampled from each test video, apply different crops to those clips, and report the aggregate score. However, in the interest of simplicity and brevity, we don't consider that in this tutorial.
95_6_10
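If you do want a rough version of that strategy, one option is to sample several clips from the same video, average the model's logits over them, and take the argmax of the average. This is a sketch under stated assumptions (you sample and preprocess the clips yourself, and the model and clips live on the same device), not the paper's exact protocol:

```py
>>> import torch

>>> def predict_video(model, clips):
...     """`clips` is a list of tensors of shape (num_frames, channels, height, width) from one video."""
...     model.eval()
...     with torch.no_grad():
...         logits = [model(pixel_values=clip.unsqueeze(0)).logits for clip in clips]
...     avg_logits = torch.stack(logits).mean(dim=0)
...     return model.config.id2label[int(avg_logits.argmax(-1))]
```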