/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/knowledge_distillation_for_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/knowledge_distillation_for_image_classification/#knowledge-distillation-for-computer-vision
.md
distillation parameters and report their findings. The training logs and checkpoints for the distilled model can be found in [this repository](https://huggingface.co/merve/vit-mobilenet-beans-224), and the MobileNetV2 trained from scratch can be found in this [repository](https://huggingface.co/merve/resnet-mobilenet-beans-5).
84_1_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
85_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
85_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification
.md
[[open-in-colab]] Zero-shot image classification is a task that involves classifying images into different categories using a model that was not explicitly trained on data containing labeled examples from those specific categories. Traditionally, image classification requires training a model on a specific set of labeled images, and this model learns to "map" certain image features to labels. When there's a need to use such a model for a classification task that introduces a
85_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification
.md
"map" certain image features to labels. When there's a need to use such model for a classification task that introduces a new set of labels, fine-tuning is required to "recalibrate" the model. In contrast, zero-shot or open vocabulary image classification models are typically multi-modal models that have been trained on a large
85_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification
.md
dataset of images and associated descriptions. These models learn aligned vision-language representations that can be used for many downstream tasks including zero-shot image classification. This is a more flexible approach to image classification that allows models to generalize to new and unseen categories without the need for additional training data, and enables users to query images with free-form text descriptions of their target objects. In this guide you'll learn how to:
85_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification
.md
In this guide you'll learn how to: * create a zero-shot image classification pipeline * run zero-shot image classification inference by hand Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q "transformers[torch]" pillow ```
85_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-pipeline
.md
The simplest way to try out inference with a model supporting zero-shot image classification is to use the corresponding [`pipeline`]. Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads): ```python >>> from transformers import pipeline
85_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-pipeline
.md
>>> checkpoint = "openai/clip-vit-large-patch14" >>> detector = pipeline(model=checkpoint, task="zero-shot-image-classification") ``` Next, choose an image you'd like to classify. ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw)
85_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-pipeline
.md
>>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/> </div> Pass the image and the candidate object labels to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image URL. The candidate labels can be simple words like in this example, or more descriptive. ```py
85_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-pipeline
.md
The candidate labels can be simple words like in this example, or more descriptive. ```py >>> predictions = detector(image, candidate_labels=["fox", "bear", "seagull", "owl"]) >>> predictions [{'score': 0.9996670484542847, 'label': 'owl'}, {'score': 0.000199399160919711, 'label': 'seagull'}, {'score': 7.392891711788252e-05, 'label': 'fox'}, {'score': 5.96074532950297e-05, 'label': 'bear'}] ```
85_2_3
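Since the pipeline also accepts a URL or a local path in place of a `PIL.Image`, and labels can be full phrases, a variation like the following sketch should work as well (scores are omitted here since they will differ slightly): ```py >>> # hedged example: the same owl image passed by URL, with more descriptive labels >>> predictions = detector( ...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg", ...     candidate_labels=["a photo of an owl", "a photo of a seagull", "a photo of a fox", "a photo of a bear"], ... ) ```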
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-by-hand
.md
Now that you've seen how to use the zero-shot image classification pipeline, let's take a look at how you can run zero-shot image classification manually. Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads). Here we'll use the same checkpoint as before: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification
85_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-by-hand
.md
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` Let's take a different image to switch things up. ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw)
85_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-by-hand
.md
>>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/> </div> Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a tokenizer that takes care of the text inputs. ```py >>> candidate_labels = ["tree", "car", "bike", "cat"]
85_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-by-hand
.md
```py >>> candidate_labels = ["tree", "car", "bike", "cat"] # follows the pipeline prompt template to get the same results >>> candidate_labels = [f'This is a photo of {label}.' for label in candidate_labels] >>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True) ``` Pass the inputs through the model, and post-process the results: ```py >>> import torch
85_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-by-hand
.md
>>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits_per_image[0] >>> probs = logits.softmax(dim=-1).numpy() >>> scores = probs.tolist() >>> result = [ ... {"score": score, "label": candidate_label} ... for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0]) ... ]
85_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/zero_shot_image_classification.md
https://huggingface.co/docs/transformers/en/tasks/zero_shot_image_classification/#zero-shot-image-classification-by-hand
.md
>>> result [{'score': 0.998572, 'label': 'car'}, {'score': 0.0010570387, 'label': 'bike'}, {'score': 0.0003393686, 'label': 'tree'}, {'score': 3.1572064e-05, 'label': 'cat'}] ```
85_3_5
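For CLIP-style checkpoints like the one used here, `logits_per_image` is just a scaled cosine similarity: the model L2-normalizes the image and text embeddings it returns and multiplies their dot products by the exponent of a learned `logit_scale`. A minimal sketch to check this, assuming the checkpoint resolves to a CLIP model as `openai/clip-vit-large-patch14` does: ```py >>> # hedged check: recompute the logits from the returned embeddings >>> manual_logits = model.logit_scale.exp() * outputs.image_embeds @ outputs.text_embeds.T >>> torch.allclose(manual_logits, outputs.logits_per_image, atol=1e-4) True ```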
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
86_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
86_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#visual-question-answering
.md
[[open-in-colab]] Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. Some noteworthy use case examples for VQA include: * Accessibility applications for visually impaired individuals.
86_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#visual-question-answering
.md
Some noteworthy use case examples for VQA include: * Accessibility applications for visually impaired individuals. * Education: posing questions about visual materials presented in lectures or textbooks. VQA can also be utilized in interactive museum exhibits or historical sites. * Customer service and e-commerce: VQA can enhance user experience by letting users ask questions about products.
86_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#visual-question-answering
.md
* Customer service and e-commerce: VQA can enhance user experience by letting users ask questions about products. * Image retrieval: VQA models can be used to retrieve images with specific characteristics. For example, the user can ask "Is there a dog?" to find all images with dogs from a set of images. In this guide you'll learn how to: - Fine-tune a classification VQA model, specifically [ViLT](../model_doc/vilt), on the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa).
86_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#visual-question-answering
.md
- Use your fine-tuned ViLT for inference. - Run zero-shot VQA inference with a generative model, like BLIP-2.
86_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#fine-tuning-vilt
.md
The ViLT model incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). This model can be used for several downstream tasks. For the VQA task, a classifier head is placed on top (a linear layer on top of the final hidden state of the `[CLS]` token) and randomly initialized. Visual Question Answering is thus treated as a **classification problem**.
86_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#fine-tuning-vilt
.md
Visual Question Answering is thus treated as a **classification problem**. More recent models, such as BLIP, BLIP-2, and InstructBLIP, treat VQA as a generative task. Later in this guide we illustrate how to use them for zero-shot VQA inference. Before you begin, make sure you have all the necessary libraries installed. ```bash pip install -q transformers datasets ``` We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub.
86_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#fine-tuning-vilt
.md
``` We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in: ```py >>> from huggingface_hub import notebook_login
86_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#fine-tuning-vilt
.md
>>> notebook_login() ``` Let's define the model checkpoint as a global variable. ```py >>> model_checkpoint = "dandelin/vilt-b32-mlm" ```
86_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
For illustration purposes, in this guide we use a very small sample of the annotated visual question answering `Graphcore/vqa` dataset. You can find the full dataset on [🤗 Hub](https://huggingface.co/datasets/Graphcore/vqa). As an alternative to the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa), you can download the same data manually from the official [VQA dataset page](https://visualqa.org/download.html). If you prefer to follow the
86_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
same data manually from the official [VQA dataset page](https://visualqa.org/download.html). If you prefer to follow the tutorial with your custom data, check out the [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset#loading-script) guide in the 🤗 Datasets documentation. Let's load the first 200 examples from the validation split and explore the dataset's features: ```python >>> from datasets import load_dataset
86_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
>>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]") >>> dataset Dataset({ features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'], num_rows: 200 }) ``` Let's take a look at an example to understand the dataset's features: ```py >>> dataset[0] {'question': 'Where is he looking?', 'question_type': 'none of the above', 'question_id': 262148000,
86_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
```py >>> dataset[0] {'question': 'Where is he looking?', 'question_type': 'none of the above', 'question_id': 262148000, 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg', 'answer_type': 'other', 'label': {'ids': ['at table', 'down', 'skateboard', 'table'], 'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}} ``` The features relevant to the task include:
86_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
1.0, 0.30000001192092896, 0.30000001192092896]}} ``` The features relevant to the task include: * `question`: the question to be answered from the image * `image_id`: the path to the image the question refers to * `label`: the annotations We can remove the rest of the features as they won't be necessary: ```py >>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type']) ```
86_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
```py >>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type']) ``` As you can see, the `label` feature contains several answers to the same question (called `ids` here) collected by different human annotators. This is because the answer to a question can be subjective. In this case, the question is "where is he looking?". Some people annotated this with "down", others with "at table", another one with "skateboard", etc.
86_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
annotated this with "down", others with "at table", another one with "skateboard", etc. Take a look at the image and consider which answer would you give: ```python >>> from PIL import Image
86_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
>>> image = Image.open(dataset[0]['image_id']) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/> </div> Due to the questions' and answers' ambiguity, datasets like this are treated as a multi-label classification problem (as multiple answers are possibly valid). Moreover, rather than just creating a one-hot encoded vector, one creates a
86_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
multiple answers are possibly valid). Moreover, rather than just creating a one-hot encoded vector, one creates a soft encoding, based on the number of times a certain answer appeared in the annotations. For instance, in the example above, because the answer "down" is selected far more often than the other answers, it has a score (called `weight` in the dataset) of 1.0, and the rest of the answers have scores < 1.0.
86_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
score (called `weight` in the dataset) of 1.0, and the rest of the answers have scores < 1.0. To later instantiate the model with an appropriate classification head, let's create two dictionaries: one that maps each label name to an integer, and one that maps the integers back to label names: ```py >>> import itertools
86_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
>>> labels = [item['ids'] for item in dataset['label']] >>> flattened_labels = list(itertools.chain(*labels)) >>> unique_labels = list(set(flattened_labels))
86_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
>>> label2id = {label: idx for idx, label in enumerate(unique_labels)} >>> id2label = {idx: label for label, idx in label2id.items()} ``` Now that we have the mappings, we can replace the string answers with their ids, and flatten the dataset to make further preprocessing more convenient. ```python >>> def replace_ids(inputs): ... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]] ... return inputs
86_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#load-the-data
.md
>>> dataset = dataset.map(replace_ids) >>> flat_dataset = dataset.flatten() >>> flat_dataset.features {'question': Value(dtype='string', id=None), 'image_id': Value(dtype='string', id=None), 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)} ```
86_3_12
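To make the soft encoding described above concrete, here is a small sketch (not part of the original guide) that builds the target vector for the first example by hand; the preprocessing function in the next section does the same thing for whole batches: ```py >>> import torch >>> example = flat_dataset[0] >>> target = torch.zeros(len(id2label)) >>> for label_id, weight in zip(example["label.ids"], example["label.weights"]): ...     target[label_id] = weight >>> # one entry ("down") now holds 1.0, the other annotated answers hold 0.3 ```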
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
The next step is to load a ViLT processor to prepare the image and text data for the model. [`ViltProcessor`] wraps a BERT tokenizer and ViLT image processor into a convenient single processor: ```py >>> from transformers import ViltProcessor
86_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
>>> processor = ViltProcessor.from_pretrained(model_checkpoint) ``` To preprocess the data we need to encode the images and questions using the [`ViltProcessor`]. The processor will use the [`BertTokenizerFast`] to tokenize the text and create `input_ids`, `attention_mask` and `token_type_ids` for the text data. As for images, the processor will leverage [`ViltImageProcessor`] to resize and normalize the image, and create `pixel_values` and `pixel_mask`.
86_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
All these preprocessing steps are done under the hood; we only need to call the `processor`. However, we still need to prepare the target labels. In this representation, each element corresponds to a possible answer (label). For correct answers, the element holds their respective score (weight), while the remaining elements are set to zero. The following function applies the `processor` to the images and questions and formats the labels as described above: ```py >>> import torch
86_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
>>> def preprocess_data(examples): ... image_paths = examples['image_id'] ... images = [Image.open(image_path) for image_path in image_paths] ... texts = examples['question'] ... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt") ... for k, v in encoding.items(): ... encoding[k] = v.squeeze() ... targets = []
86_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
... for k, v in encoding.items(): ... encoding[k] = v.squeeze() ... targets = [] ... for labels, scores in zip(examples['label.ids'], examples['label.weights']): ... target = torch.zeros(len(id2label)) ... for label, score in zip(labels, scores): ... target[label] = score ... targets.append(target) ... encoding["labels"] = targets
86_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
... return encoding ``` To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don't need. ```py >>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question', 'image_id', 'label.ids', 'label.weights'])
86_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
>>> processed_dataset Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'], num_rows: 200 }) ``` As a final step, create a batch of examples using [`DefaultDataCollator`]: ```py >>> from transformers import DefaultDataCollator
86_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#preprocessing-data
.md
>>> data_collator = DefaultDataCollator() ```
86_4_7
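If you want to sanity-check the collator before training, a quick hedged sketch is to collate a couple of processed examples and inspect the tensor shapes (the exact sequence and image dimensions depend on the processor settings): ```py >>> batch = data_collator([processed_dataset[i] for i in range(2)]) >>> {k: v.shape for k, v in batch.items()}  # input_ids, pixel_values, labels, etc., each with a batch dimension of 2 ```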
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#train-the-model
.md
You’re ready to start training your model now! Load ViLT with [`ViltForQuestionAnswering`]. Specify the number of labels along with the label mappings: ```py >>> from transformers import ViltForQuestionAnswering
86_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#train-the-model
.md
>>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id) ``` At this point, only three steps remain: 1. Define your training hyperparameters in [`TrainingArguments`]: ```py >>> from transformers import TrainingArguments >>> repo_id = "MariaK/vilt_finetuned_200"
86_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#train-the-model
.md
>>> repo_id = "MariaK/vilt_finetuned_200" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` 2. Pass the training arguments to [`Trainer`] along with the model, dataset, processor, and data collator. ```py >>> from transformers import Trainer
86_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#train-the-model
.md
>>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=processed_dataset, ... processing_class=processor, ... ) ``` 3. Call [`~Trainer.train`] to finetune your model. ```py >>> trainer.train() ``` Once training is completed, share your final model on the 🤗 Hub with the [`~Trainer.push_to_hub`] method: ```py >>> trainer.push_to_hub() ```
86_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#inference
.md
Now that you have fine-tuned a ViLT model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your fine-tuned model for inference is to use it in a [`Pipeline`]. ```py >>> from transformers import pipeline
86_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#inference
.md
>>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200") ``` The model in this guide has only been trained on 200 examples, so don't expect a lot from it. Let's see if it at least learned something from the data and take the first example from the dataset to illustrate inference: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> print(question) >>> pipe(image, question, top_k=1) "Where is he looking?"
86_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#inference
.md
>>> question = example['question'] >>> print(question) >>> pipe(image, question, top_k=1) "Where is he looking?" [{'score': 0.5498199462890625, 'answer': 'down'}] ``` Even though it's not very confident, the model has indeed learned something. With more examples and longer training, you'll get far better results! You can also manually replicate the results of the pipeline if you'd like: 1. Take an image and a question, prepare them for the model using the processor from your model.
86_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#inference
.md
1. Take an image and a question, prepare them for the model using the processor from your model. 2. Forward the result of preprocessing through the model. 3. From the logits, get the most likely answer's id, and look up the actual answer in `id2label`. ```py >>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")
86_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#inference
.md
>>> image = Image.open(example['image_id']) >>> question = example['question'] >>> # prepare inputs >>> inputs = processor(image, question, return_tensors="pt") >>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200") >>> # forward pass >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits >>> idx = logits.argmax(-1).item() >>> print("Predicted answer:", model.config.id2label[idx]) Predicted answer: down ```
86_6_4
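The single top answer hides the rest of the distribution. Since ViLT's VQA head is trained as a multi-label classifier, a sigmoid over the logits (which matches what the pipeline applies for this model) gives per-answer scores; a sketch for the top 5 answers could look like this: ```py >>> probs = torch.sigmoid(logits) >>> top_scores, top_ids = probs[0].topk(5) >>> for score, idx in zip(top_scores.tolist(), top_ids.tolist()): ...     print(f"{model.config.id2label[idx]}: {score:.3f}") ```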
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#zero-shot-vqa
.md
The previous model treated VQA as a classification task. Some recent models, such as BLIP, BLIP-2, and InstructBLIP, approach VQA as a generative task. Let's take [BLIP-2](../model_doc/blip-2) as an example. It introduced a new vision-language pre-training paradigm in which any combination of pre-trained vision encoder and LLM can be used (learn more in the [BLIP-2 blog post](https://huggingface.co/blog/blip-2)).
86_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#zero-shot-vqa
.md
This approach achieves state-of-the-art results on multiple vision-language tasks, including visual question answering. Let's illustrate how you can use this model for VQA. First, let's load the model. Here we'll explicitly send the model to a GPU, if available, which we didn't need to do earlier when training, as [`Trainer`] handles this automatically: ```py >>> from transformers import AutoProcessor, Blip2ForConditionalGeneration >>> import torch
86_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#zero-shot-vqa
.md
```py >>> from transformers import AutoProcessor, Blip2ForConditionalGeneration >>> import torch >>> from accelerate.test_utils.testing import get_backend
86_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#zero-shot-vqa
.md
>>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.) >>> model.to(device) ``` The model takes image and text as input, so let's use the exact same image/question pair from the first example in the VQA dataset: ```py >>> example = dataset[0]
86_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#zero-shot-vqa
.md
```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] ``` To use BLIP-2 for the visual question answering task, the textual prompt has to follow a specific format: `Question: {} Answer:`. ```py >>> prompt = f"Question: {question} Answer:" ``` Now we need to preprocess the image/prompt with the model's processor, pass the processed input through the model, and decode the output: ```py
86_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#zero-shot-vqa
.md
```py >>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
86_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/visual_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/visual_question_answering/#zero-shot-vqa
.md
>>> generated_ids = model.generate(**inputs, max_new_tokens=10) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) "He is looking at the crowd" ``` As you can see, the model recognized the crowd and the direction of the face (looking down); however, it seems to miss the fact that the crowd is behind the skater. Still, in cases where acquiring human-annotated datasets is not feasible, this approach can quickly produce useful results.
86_7_6
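Because BLIP-2 treats VQA as text generation, you can also carry context across turns by appending earlier question/answer pairs to the prompt. A hedged sketch following the same `Question: {} Answer:` convention (the follow-up question is made up for illustration, and the generated text will vary): ```py >>> # hypothetical follow-up question, reusing the answer generated above >>> followup = f"Question: {question} Answer: {generated_text}. Question: Is he wearing a helmet? Answer:" >>> inputs = processor(image, text=followup, return_tensors="pt").to(device, torch.float16) >>> generated_ids = model.generate(**inputs, max_new_tokens=10) >>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()) ```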
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
87_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
87_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
[[open-in-colab]] Video-text-to-text models, also known as video language models or vision language models with video input, are language models that take video as input. These models can tackle various tasks, from video question answering to video captioning.
87_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
These models have nearly the same architecture as [image-text-to-text](../image_text_to_text.md) models except for some changes to accept video data, since video data is essentially image frames with temporal dependencies. Some image-text-to-text models take in multiple images, but this alone is inadequate for a model to accept videos. Moreover, video-text-to-text models are often trained with all vision modalities. Each example might have videos, multiple videos, images and multiple images. Some of these
87_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
trained with all vision modalities. Each example might have videos, multiple videos, images and multiple images. Some of these models can also take interleaved inputs. For example, you can refer to a specific video inside a string of text by adding a video token in text like "What is happening in this video? `<video>`".
87_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
In this guide, we provide a brief overview of video LMs and show how to use them with Transformers for inference. To begin with, there are multiple types of video LMs: - base models used for fine-tuning - chat fine-tuned models for conversation - instruction fine-tuned models
87_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
- base models used for fine-tuning - chat fine-tuned models for conversation - instruction fine-tuned models This guide focuses on inference with an instruction-tuned model, [llava-hf/llava-interleave-qwen-7b-hf](https://huggingface.co/llava-hf/llava-interleave-qwen-7b-hf) which can take in interleaved data. Alternatively, you can try [llava-interleave-qwen-0.5b-hf](https://huggingface.co/llava-hf/llava-interleave-qwen-0.5b-hf) if your hardware doesn't allow running a 7B model.
87_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
Let's begin by installing the dependencies. ```bash pip install -q transformers accelerate flash_attn ``` Let's initialize the model and the processor. ```python from transformers import LlavaProcessor, LlavaForConditionalGeneration import torch model_id = "llava-hf/llava-interleave-qwen-0.5b-hf"
87_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
processor = LlavaProcessor.from_pretrained(model_id)
87_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16) model.to("cuda") # can also be xpu, mps, npu etc. depending on your hardware accelerator ``` Some models directly consume the `<video>` token, and others accept `<image>` tokens equal to the number of sampled frames. This model handles videos in the latter fashion. We will write a simple utility to handle image tokens, and another utility to get a video from a URL and sample frames from it. ```python import uuid
87_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
```python import uuid import requests import cv2 from PIL import Image
87_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
def replace_video_with_images(text, frames): return text.replace("<video>", "<image>" * frames) def sample_frames(url, num_frames): response = requests.get(url) path_id = str(uuid.uuid4()) path = f"./{path_id}.mp4" with open(path, "wb") as f: f.write(response.content)
87_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
video = cv2.VideoCapture(path) total_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) interval = max(total_frames // num_frames, 1) frames = [] for i in range(total_frames): ret, frame = video.read() if not ret: continue if i % interval == 0: frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))) video.release() return frames[:num_frames] ``` Let's get our inputs. We will sample frames and concatenate them. ```python
87_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
video.release() return frames[:num_frames] ``` Let's get our inputs. We will sample frames and concatenate them. ```python video_1 = "https://huggingface.co/spaces/merve/llava-interleave/resolve/main/cats_1.mp4" video_2 = "https://huggingface.co/spaces/merve/llava-interleave/resolve/main/cats_2.mp4"
87_1_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
video_1 = sample_frames(video_1, 6) video_2 = sample_frames(video_2, 6) videos = video_1 + video_2 videos
87_1_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
# [<PIL.Image.Image image mode=RGB size=1920x1080>, # <PIL.Image.Image image mode=RGB size=1920x1080>, # <PIL.Image.Image image mode=RGB size=1920x1080>, ...] ``` Both videos have cats. <div class="container"> <div class="video-container"> <video width="400" controls> <source src="https://huggingface.co/spaces/merve/llava-interleave/resolve/main/cats_1.mp4" type="video/mp4"> </video> </div> <div class="video-container"> <video width="400" controls>
87_1_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
</video> </div> <div class="video-container"> <video width="400" controls> <source src="https://huggingface.co/spaces/merve/llava-interleave/resolve/main/cats_2.mp4" type="video/mp4"> </video> </div> </div> Now we can preprocess the inputs.
87_1_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
</video> </div> </div> Now we can preprocess the inputs. This model has a prompt template that looks like the following. First, we'll put all the sampled frames into one list. Since we have six frames in each video, we will insert 12 `<image>` tokens into our prompt. Add `assistant` at the end of the prompt to trigger the model to give answers. Then we can preprocess. ```python user_prompt = "Are these two cats in these two videos doing the same thing?" toks = "<image>" * 12
87_1_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
```python user_prompt = "Are these two cats in these two videos doing the same thing?" toks = "<image>" * 12 prompt = "<|im_start|>user" + toks + f"\n{user_prompt}<|im_end|><|im_start|>assistant" inputs = processor(text=prompt, images=videos, return_tensors="pt").to(model.device, model.dtype) ``` We can now call [`~GenerationMixin.generate`] for inference. The model echoes the question from our input along with the answer, so we only take the text that comes after the prompt and the `assistant` part of the model output.
87_1_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
```python output = model.generate(**inputs, max_new_tokens=100, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)[len(user_prompt)+10:])
87_1_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/video_text_to_text.md
https://huggingface.co/docs/transformers/en/tasks/video_text_to_text/#video-text-to-text
.md
# The first cat is shown in a relaxed state, with its eyes closed and a content expression, while the second cat is shown in a more active state, with its mouth open wide, possibly in a yawn or a vocalization. ``` And voila! To learn more about chat templates and token streaming for video-text-to-text models, refer to the [image-text-to-text](../tasks/image_text_to_text) task guide because these models work similarly.
87_1_18
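One caveat: the index-based slicing used earlier to strip the prompt from the decoded text is brittle. A slightly more robust sketch, assuming the same `<|im_start|>assistant` prompt format, splits the decoded text on the assistant marker instead: ```python # hedged alternative to the slice-based cleanup above full_text = processor.decode(output[0], skip_special_tokens=True) answer = full_text.split("assistant")[-1].strip() print(answer) ```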
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
88_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
88_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#image-classification
.md
[[open-in-colab]] <Youtube id="tjAIM7BOYhw"/> Image classification assigns a label or class to an image. Unlike text or audio classification, the inputs are the pixel values that comprise an image. There are many applications for image classification, such as detecting damage after a natural disaster, monitoring crop health, or helping screen medical images for signs of disease. This guide illustrates how to:
88_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#image-classification
.md
This guide illustrates how to: 1. Fine-tune [ViT](../model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image. 2. Use your fine-tuned model for inference. <Tip> To see all architectures and checkpoints compatible with this task, we recommend checking the [task page](https://huggingface.co/tasks/image-classification). </Tip> Before you begin, make sure you have all the necessary libraries installed: ```bash
88_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#image-classification
.md
</Tip> Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate accelerate pillow torchvision scikit-learn ``` We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in: ```py >>> from huggingface_hub import notebook_login
88_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#image-classification
.md
>>> notebook_login() ```
88_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#load-food-101-dataset
.md
Start by loading a smaller subset of the Food-101 dataset from the 🤗 Datasets library. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset. ```py >>> from datasets import load_dataset
88_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#load-food-101-dataset
.md
>>> food = load_dataset("food101", split="train[:5000]") ``` Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method: ```py >>> food = food.train_test_split(test_size=0.2) ``` Then take a look at an example: ```py >>> food["train"][0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>, 'label': 79} ``` Each example in the dataset has two fields: - `image`: a PIL image of the food item
88_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#load-food-101-dataset
.md
'label': 79} ``` Each example in the dataset has two fields: - `image`: a PIL image of the food item - `label`: the label class of the food item To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa: ```py >>> labels = food["train"].features["label"].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... id2label[str(i)] = label ```
88_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#load-food-101-dataset
.md
>>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... id2label[str(i)] = label ``` Now you can convert the label id to a label name: ```py >>> id2label[str(79)] 'prime_rib' ```
88_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#preprocess
.md
The next step is to load a ViT image processor to process the image into a tensor: ```py >>> from transformers import AutoImageProcessor
88_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#preprocess
.md
>>> checkpoint = "google/vit-base-patch16-224-in21k" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` <frameworkcontent> <pt> Apply some image transformations to the images to make the model more robust against overfitting. Here you'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module, but you can also use any image library you like.
88_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#preprocess
.md
Crop a random part of the image, resize it, and normalize it with the image mean and standard deviation: ```py >>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor
88_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#preprocess
.md
>>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std) >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize]) ``` Then create a preprocessing function to apply the transforms and return the `pixel_values` - the inputs to the model - of the image: ```py
88_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/image_classification.md
https://huggingface.co/docs/transformers/en/tasks/image_classification/#preprocess
.md
```py >>> def transforms(examples): ... examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]] ... del examples["image"] ... return examples ``` To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.with_transform`] method. The transforms are applied on the fly when you load an element of the dataset: ```py >>> food = food.with_transform(transforms) ```
88_3_4
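To verify that the transform pipeline works end to end, you can inspect a single element; `with_transform` runs the transforms lazily on access. A hedged check (the 224x224 size comes from this ViT checkpoint's image processor, so it may differ for other checkpoints): ```py >>> food["train"][0]["pixel_values"].shape torch.Size([3, 224, 224]) ```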