source (stringclasses 470 values) | url (stringlengths 49–167) | file_type (stringclasses 1 value) | chunk (stringlengths 1–512) | chunk_id (stringlengths 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | ... if match:
... # if match is found, use `token_type_ids` to find where words start in the encoding
... token_type_ids = encoding["token_type_ids"][i]
... token_start_index = 0
... while token_type_ids[token_start_index] != 1:
... token_start_index += 1
... token_end_index = len(encoding["input_ids"][i]) - 1
... while token_type_ids[token_end_index] != 1:
... token_end_index -= 1 | 73_5_20 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | ... word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1]
... start_position = cls_index
... end_position = cls_index | 73_5_21 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | ... # loop over word_ids and increase `token_start_index` until it matches the answer position in words
... # once it matches, save the `token_start_index` as the `start_position` of the answer in the encoding
... for id in word_ids:
... if id == word_idx_start:
... start_position = token_start_index
... else:
... token_start_index += 1 | 73_5_22 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | ... # similarly loop over `word_ids` starting from the end to find the `end_position` of the answer
... for id in word_ids[::-1]:
... if id == word_idx_end:
... end_position = token_end_index
... else:
... token_end_index -= 1
... start_positions.append(start_position)
... end_positions.append(end_position) | 73_5_23 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | ... start_positions.append(start_position)
... end_positions.append(end_position)
... else:
... start_positions.append(cls_index)
... end_positions.append(cls_index)
... encoding["image"] = examples["image"]
... encoding["start_positions"] = start_positions
... encoding["end_positions"] = end_positions | 73_5_24 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | ... return encoding
```
Now that we have this preprocessing function, we can encode the entire dataset:
```py
>>> encoded_train_dataset = dataset_with_ocr["train"].map(
... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names
... )
>>> encoded_test_dataset = dataset_with_ocr["test"].map(
... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names
... )
``` | 73_5_25 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names
... )
```
Let's check what the features of the encoded dataset look like:
```py
>>> encoded_train_dataset.features
{'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), | 73_5_26 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data | .md | 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None),
'start_positions': Value(dtype='int64', id=None),
'end_positions': Value(dtype='int64', id=None)}
``` | 73_5_27 |
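As an optional sanity check (not part of the original guide), you can decode the labeled answer span of one encoded example with the processor's tokenizer. This sketch assumes the `processor` created earlier in the guide is still in scope:
```py
>>> example = encoded_train_dataset[0]
>>> answer_ids = example["input_ids"][example["start_positions"] : example["end_positions"] + 1]
>>> # if both positions point at the CLS token, no answer match was found for this example
>>> print(processor.tokenizer.decode(answer_ids))
```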
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#evaluation | .md | Evaluation for document question answering requires a significant amount of postprocessing. To avoid taking up too much
of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so
you're not completely in the dark about your model's performance. Extractive question answering is typically evaluated using F1/exact match. | 73_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#evaluation | .md | If you'd like to implement it yourself, check out the [Question Answering chapter](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing)
of the Hugging Face course for inspiration. | 73_6_1 |
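If you do want a number, a lightweight option is the `squad` metric from the 🤗 Evaluate library, which computes exact match and F1 from decoded answer strings. The example below is only a sketch with made-up predictions and references; producing the decoded prediction strings is the postprocessing step described in the course chapter linked above:
```py
>>> import evaluate

>>> squad_metric = evaluate.load("squad")
>>> predictions = [{"id": "0", "prediction_text": "lee a. waller"}]
>>> references = [
...     {"id": "0", "answers": {"text": ["TRRF Vice President", "lee a. waller"], "answer_start": [0, 0]}}
... ]
>>> squad_metric.compute(predictions=predictions, references=references)
{'exact_match': 100.0, 'f1': 100.0}
```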
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | Congratulations! You've successfully navigated the toughest part of this guide and now you are ready to train your own model.
Training involves the following steps:
* Load the model with [`AutoModelForDocumentQuestionAnswering`] using the same checkpoint as in the preprocessing.
* Define your training hyperparameters in [`TrainingArguments`].
* Define a function to batch examples together; here the [`DefaultDataCollator`] will do just fine. | 73_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | * Define a function to batch examples together; here the [`DefaultDataCollator`] will do just fine.
* Pass the training arguments to [`Trainer`] along with the model, dataset, and data collator.
* Call [`~Trainer.train`] to finetune your model.
```py
>>> from transformers import AutoModelForDocumentQuestionAnswering | 73_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint)
```
In the [`TrainingArguments`] use `output_dir` to specify where to save your model, and configure hyperparameters as you see fit.
If you wish to share your model with the community, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model).
In this case the `output_dir` will also be the name of the repo where your model checkpoint will be pushed.
```py | 73_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | In this case the `output_dir` will also be the name of the repo where your model checkpoint will be pushed.
```py
>>> from transformers import TrainingArguments | 73_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | >>> # REPLACE THIS WITH YOUR REPO ID
>>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa" | 73_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | >>> training_args = TrainingArguments(
... output_dir=repo_id,
... per_device_train_batch_size=4,
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
... eval_strategy="steps",
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
... push_to_hub=True,
... )
```
Define a simple data collator to batch examples together.
```py
>>> from transformers import DefaultDataCollator | 73_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | >>> data_collator = DefaultDataCollator()
```
Finally, bring everything together, and call [`~Trainer.train`]:
```py
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=encoded_train_dataset,
... eval_dataset=encoded_test_dataset,
... processing_class=processor,
... ) | 73_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#train | .md | >>> trainer.train()
```
To add the final model to 🤗 Hub, create a model card and call `push_to_hub`:
```py
>>> trainer.create_model_card()
>>> trainer.push_to_hub()
``` | 73_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#inference | .md | Now that you have finetuned a LayoutLMv2 model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest
way to try out your finetuned model for inference is to use it in a [`Pipeline`].
Let's take an example:
```py
>>> example = dataset["test"][2]
>>> question = example["query"]["en"]
>>> image = example["image"]
>>> print(question)
>>> print(example["answers"])
'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?'
['TRRF Vice President', 'lee a. waller']
``` | 73_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#inference | .md | 'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?'
['TRRF Vice President', 'lee a. waller']
```
Next, instantiate a pipeline for
document question answering with your model, and pass the image + question combination to it.
```py
>>> from transformers import pipeline | 73_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#inference | .md | >>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> qa_pipeline(image, question)
[{'score': 0.9949808120727539,
'answer': 'Lee A. Waller',
'start': 55,
'end': 57}]
```
You can also manually replicate the results of the pipeline if you'd like:
1. Take an image and a question, prepare them for the model using the processor from your model.
2. Forward the result of preprocessing through the model. | 73_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#inference | .md | 2. Forward the result of preprocessing through the model.
3. The model returns `start_logits` and `end_logits`, which indicate which token is at the start of the answer and
which token is at the end of the answer. Both have shape (batch_size, sequence_length).
4. Take an argmax on the last dimension of both the `start_logits` and `end_logits` to get the predicted `start_idx` and `end_idx`.
5. Decode the answer with the tokenizer.
```py
>>> import torch
>>> from transformers import AutoProcessor | 73_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#inference | .md | 5. Decode the answer with the tokenizer.
```py
>>> import torch
>>> from transformers import AutoProcessor
>>> from transformers import AutoModelForDocumentQuestionAnswering | 73_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#inference | .md | >>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") | 73_8_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md | https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#inference | .md | >>> with torch.no_grad():
... encoding = processor(image.convert("RGB"), question, return_tensors="pt")
... outputs = model(**encoding)
... start_logits = outputs.start_logits
... end_logits = outputs.end_logits
... predicted_start_idx = start_logits.argmax(-1).item()
... predicted_end_idx = end_logits.argmax(-1).item()
>>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1])
'lee a. waller'
``` | 73_8_6 |
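If you also want a confidence value similar to the pipeline's `score`, one simple approximation (not the exact pipeline postprocessing) is to multiply the softmax probabilities of the chosen start and end positions:
```py
>>> start_probs = start_logits.softmax(-1)
>>> end_probs = end_logits.softmax(-1)
>>> # rough confidence: probability of the chosen start times probability of the chosen end
>>> score = (start_probs[0, predicted_start_idx] * end_probs[0, predicted_end_idx]).item()
>>> print(f"score: {score:.4f}")
```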
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 74_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 74_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-tasks-with-idefics | .md | [[open-in-colab]]
While individual tasks can be tackled by fine-tuning specialized models, an alternative approach
that has recently emerged and gained popularity is to use large models for a diverse set of tasks without fine-tuning.
For instance, large language models can handle such NLP tasks as summarization, translation, classification, and more.
This approach is no longer limited to a single modality, such as text, and in this guide, we will illustrate how you can | 74_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-tasks-with-idefics | .md | This approach is no longer limited to a single modality, such as text, and in this guide, we will illustrate how you can
solve image-text tasks with a large multimodal model called IDEFICS.
[IDEFICS](../model_doc/idefics) is an open-access vision and language model based on [Flamingo](https://huggingface.co/papers/2204.14198),
a state-of-the-art visual language model initially developed by DeepMind. The model accepts arbitrary sequences of image | 74_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-tasks-with-idefics | .md | a state-of-the-art visual language model initially developed by DeepMind. The model accepts arbitrary sequences of image
and text inputs and generates coherent text as output. It can answer questions about images, describe visual content,
create stories grounded in multiple images, and so on. IDEFICS comes in two variants - [80 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-80b) | 74_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-tasks-with-idefics | .md | and [9 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-9b), both of which are available on the 🤗 Hub. For each variant, you can also find fine-tuned instructed
versions of the model adapted for conversational use cases.
This model is exceptionally versatile and can be used for a wide range of image and multimodal tasks. However,
being a large model means it requires significant computational resources and infrastructure. It is up to you to decide whether | 74_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-tasks-with-idefics | .md | being a large model means it requires significant computational resources and infrastructure. It is up to you to decide whether
this approach suits your use case better than fine-tuning specialized models for each individual task.
In this guide, you'll learn how to:
- [Load IDEFICS](#loading-the-model) and [load the quantized version of the model](#quantized-model)
- Use IDEFICS for:
- [Image captioning](#image-captioning)
- [Prompted image captioning](#prompted-image-captioning) | 74_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-tasks-with-idefics | .md | - Use IDEFICS for:
- [Image captioning](#image-captioning)
- [Prompted image captioning](#prompted-image-captioning)
- [Few-shot prompting](#few-shot-prompting)
- [Visual question answering](#visual-question-answering)
- [Image classification](#image-classification)
- [Image-guided text generation](#image-guided-text-generation)
- [Run inference in batch mode](#running-inference-in-batch-mode)
- [Run IDEFICS instruct for conversational use](#idefics-instruct-for-conversational-use) | 74_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-tasks-with-idefics | .md | - [Run IDEFICS instruct for conversational use](#idefics-instruct-for-conversational-use)
Before you begin, make sure you have all the necessary libraries installed.
```bash
pip install -q bitsandbytes sentencepiece accelerate transformers
```
<Tip>
To run the following examples with a non-quantized version of the model checkpoint you will need at least 20GB of GPU memory.
</Tip> | 74_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#loading-the-model | .md | Let's start by loading the model's 9 billion parameters checkpoint:
```py
>>> checkpoint = "HuggingFaceM4/idefics-9b"
```
Just like for other Transformers models, you need to load a processor and the model itself from the checkpoint.
The IDEFICS processor wraps a [`LlamaTokenizer`] and IDEFICS image processor into a single processor to take care of
preparing text and image inputs for the model.
```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor | 74_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#loading-the-model | .md | >>> from transformers import IdeficsForVisionText2Text, AutoProcessor
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
```
Setting `device_map` to `"auto"` will automatically determine how to load and store the model weights in the most optimized
manner given existing devices. | 74_2_1 |
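As a quick optional check, and assuming the model was loaded with `device_map="auto"` as above, you can inspect where Accelerate placed each submodule:
```py
>>> # mapping from module names to the device (GPU index, "cpu" or "disk") each part was dispatched to
>>> print(model.hf_device_map)
```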
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#quantized-model | .md | If high-memory GPU availability is an issue, you can load the quantized version of the model. To load the model and the
processor in 4bit precision, pass a `BitsAndBytesConfig` to the `from_pretrained` method and the model will be compressed
on the fly while loading.
```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig
>>> quantization_config = BitsAndBytesConfig(
... load_in_4bit=True,
... bnb_4bit_compute_dtype=torch.float16,
... ) | 74_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#quantized-model | .md | >>> quantization_config = BitsAndBytesConfig(
... load_in_4bit=True,
... bnb_4bit_compute_dtype=torch.float16,
... )
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = IdeficsForVisionText2Text.from_pretrained(
... checkpoint,
... quantization_config=quantization_config,
... device_map="auto"
... )
```
Now that you have the model loaded in one of the suggested ways, let's move on to exploring tasks that you can use IDEFICS for. | 74_3_1 |
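To see what 4-bit loading buys you, you can compare the model's reported memory footprint with and without quantization. A small sketch, using the quantized model loaded above:
```py
>>> # memory taken by the model's parameters and buffers, in GiB
>>> print(f"{model.get_memory_footprint() / 1024**3:.1f} GiB")
```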
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-captioning | .md | Image captioning is the task of predicting a caption for a given image. A common application is to help visually impaired
people navigate different situations, for instance, by exploring image content online.
To illustrate the task, get an image to be captioned, e.g.:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-im-captioning.jpg" alt="Image of a puppy in a flower bed"/>
</div> | 74_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-captioning | .md | </div>
Photo by [Hendo Wang](https://unsplash.com/@hendoo).
IDEFICS accepts text and image prompts. However, to caption an image, you do not have to provide a text prompt to the
model, only the preprocessed input image. Without a text prompt, the model will start generating text from the
BOS (beginning-of-sequence) token thus creating a caption.
As image input to the model, you can use either an image object (`PIL.Image`) or a url from which the image can be retrieved.
```py
>>> prompt = [ | 74_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-captioning | .md | ```py
>>> prompt = [
... "https://images.unsplash.com/photo-1583160247711-2191776b4b91?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3542&q=80",
... ] | 74_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-captioning | .md | >>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids | 74_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-captioning | .md | >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
A puppy in a flower bed
```
<Tip>
It is a good idea to include the `bad_words_ids` in the call to `generate` to avoid errors arising when increasing
the `max_new_tokens`: the model will want to generate a new `<image>` or `<fake_token_around_image>` token when there | 74_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-captioning | .md | the `max_new_tokens`: the model will want to generate a new `<image>` or `<fake_token_around_image>` token when there
is no image being generated by the model.
You can set it on-the-fly as in this guide, or store it in the `GenerationConfig` as described in the [Text generation strategies](../generation_strategies) guide.
</Tip> | 74_4_5 |
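For instance, to avoid passing `bad_words_ids` on every call, you could attach it once to the model's generation configuration. This is a small sketch reusing the `processor`, `model`, and `inputs` from above; later `generate` calls then pick it up as a default:
```py
>>> bad_words_ids = processor.tokenizer(
...     ["<image>", "<fake_token_around_image>"], add_special_tokens=False
... ).input_ids
>>> model.generation_config.bad_words_ids = bad_words_ids

>>> # later calls no longer need to pass bad_words_ids explicitly
>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
```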
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#prompted-image-captioning | .md | You can extend image captioning by providing a text prompt, which the model will continue given the image. Let's take
another image to illustrate:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-prompted-im-captioning.jpg" alt="Image of the Eiffel Tower at night"/>
</div>
Photo by [Denys Nevozhai](https://unsplash.com/@dnevozhai). | 74_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#prompted-image-captioning | .md | </div>
Photo by [Denys Nevozhai](https://unsplash.com/@dnevozhai).
Textual and image prompts can be passed to the model's processor as a single list to create appropriate inputs.
```py
>>> prompt = [
... "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
... "This is an image of ",
... ] | 74_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#prompted-image-captioning | .md | >>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
This is an image of the Eiffel Tower in Paris, France.
``` | 74_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#few-shot-prompting | .md | While IDEFICS demonstrates great zero-shot results, your task may require a certain format of the caption, or come with
other restrictions or requirements that increase the task's complexity. Few-shot prompting can be used to enable in-context learning.
By providing examples in the prompt, you can steer the model to generate results that mimic the format of given examples.
Let's use the previous image of the Eiffel Tower as an example for the model and build a prompt that demonstrates to the model | 74_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#few-shot-prompting | .md | Let's use the previous image of the Eiffel Tower as an example for the model and build a prompt that demonstrates to the model
that in addition to learning what the object in an image is, we would also like to get some interesting information about it.
Then, let's see if we can get the same response format for an image of the Statue of Liberty:
<div class="flex justify-center"> | 74_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#few-shot-prompting | .md | <div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg" alt="Image of the Statue of Liberty"/>
</div>
Photo by [Juan Mayobre](https://unsplash.com/@jmayobres).
```py
>>> prompt = ["User:",
... "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", | 74_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#few-shot-prompting | .md | ... "Describe this image.\nAssistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.\n",
... "User:",
... "https://images.unsplash.com/photo-1524099163253-32b7f0256868?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3387&q=80",
... "Describe this image.\nAssistant:"
... ] | 74_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#few-shot-prompting | .md | >>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids | 74_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#few-shot-prompting | .md | >>> generated_ids = model.generate(**inputs, max_new_tokens=30, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
User: Describe this image.
Assistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.
User: Describe this image.
Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall.
``` | 74_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#few-shot-prompting | .md | Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall.
```
Notice that just from a single example (i.e., 1-shot) the model has learned how to perform the task. For more complex tasks,
feel free to experiment with a larger number of examples (e.g., 3-shot, 5-shot, etc.). | 74_6_6 |
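To go beyond 1-shot, it can help to assemble the prompt programmatically from a list of demonstrations before appending the query. This is a minimal sketch with placeholder image paths and captions (not data from the guide); it only builds the prompt list, which you would then pass to the processor as before:
```py
>>> # hypothetical demonstrations: (image, caption) pairs; images can be URLs, local paths, or PIL images
>>> demonstrations = [
...     ("eiffel_tower.jpg", "An image of the Eiffel Tower at night. Fun fact: it is the same height as an 81-storey building."),
...     ("statue_of_liberty.jpg", "An image of the Statue of Liberty. Fun fact: the statue is 151 feet tall."),
... ]
>>> query_image = "query.jpg"  # the image you actually want described

>>> prompt = []
>>> for image, caption in demonstrations:
...     prompt += ["User:", image, f"Describe this image.\nAssistant: {caption}\n"]
>>> prompt += ["User:", query_image, "Describe this image.\nAssistant:"]
```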
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#visual-question-answering | .md | Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. Similar to image
captioning it can be used in accessibility applications, but also in education (reasoning about visual materials), customer
service (questions about products based on images), and image retrieval.
Let's get a new image for this task:
<div class="flex justify-center"> | 74_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#visual-question-answering | .md | Let's get a new image for this task:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-vqa.jpg" alt="Image of a couple having a picnic"/>
</div>
Photo by [Jarritos Mexican Soda](https://unsplash.com/@jarritos).
You can steer the model from image captioning to visual question answering by prompting it with appropriate instructions:
```py
>>> prompt = [ | 74_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#visual-question-answering | .md | ```py
>>> prompt = [
... "Instruction: Provide an answer to the question. Use the image to answer.\n",
... "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "Question: Where are these people and what's the weather like? Answer:"
... ] | 74_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#visual-question-answering | .md | >>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids | 74_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#visual-question-answering | .md | >>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Provide an answer to the question. Use the image to answer.
Question: Where are these people and what's the weather like? Answer: They're in a park in New York City, and it's a beautiful day.
``` | 74_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-classification | .md | IDEFICS is capable of classifying images into different categories without being explicitly trained on data containing
labeled examples from those specific categories. Given a list of categories and using its image and text understanding
capabilities, the model can infer which category the image likely belongs to.
Say, we have this image of a vegetable stand:
<div class="flex justify-center"> | 74_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-classification | .md | Say, we have this image of a vegetable stand:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-classification.jpg" alt="Image of a vegetable stand"/>
</div>
Photo by [Peter Wendt](https://unsplash.com/@peterwendt).
We can instruct the model to classify the image into one of the categories that we have:
```py
>>> categories = ['animals','vegetables', 'city landscape', 'cars', 'office'] | 74_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-classification | .md | ```py
>>> categories = ['animals','vegetables', 'city landscape', 'cars', 'office']
>>> prompt = [f"Instruction: Classify the following image into a single category from the following list: {categories}.\n",
... "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "Category: "
... ] | 74_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-classification | .md | >>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids | 74_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-classification | .md | >>> generated_ids = model.generate(**inputs, max_new_tokens=6, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Classify the following image into a single category from the following list: ['animals', 'vegetables', 'city landscape', 'cars', 'office'].
Category: Vegetables
``` | 74_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-classification | .md | Category: Vegetables
```
In the example above we instruct the model to classify the image into a single category, however, you can also prompt the model to do rank classification. | 74_8_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-guided-text-generation | .md | For more creative applications, you can use image-guided text generation to generate text based on an image. This can be
useful to create descriptions of products, ads, descriptions of a scene, etc.
Let's prompt IDEFICS to write a story based on a simple image of a red door:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-story-generation.jpg" alt="Image of a red door with a pumpkin on the steps"/> | 74_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-guided-text-generation | .md | </div>
Photo by [Craig Tidball](https://unsplash.com/@devonshiremedia).
```py
>>> prompt = ["Instruction: Use the image to write a story. \n",
... "https://images.unsplash.com/photo-1517086822157-2b0358e7684a?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2203&q=80",
... "Story: \n"] | 74_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-guided-text-generation | .md | >>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids | 74_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-guided-text-generation | .md | >>> generated_ids = model.generate(**inputs, num_beams=2, max_new_tokens=200, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Use the image to write a story.
Story:
Once upon a time, there was a little girl who lived in a house with a red door. She loved her red door. It was the prettiest door in the whole world. | 74_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-guided-text-generation | .md | One day, the little girl was playing in her yard when she noticed a man standing on her doorstep. He was wearing a long black coat and a top hat.
The little girl ran inside and told her mother about the man.
Her mother said, “Don’t worry, honey. He’s just a friendly ghost.”
The little girl wasn’t sure if she believed her mother, but she went outside anyway.
When she got to the door, the man was gone. | 74_9_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-guided-text-generation | .md | When she got to the door, the man was gone.
The next day, the little girl was playing in her yard again when she noticed the man standing on her doorstep.
He was wearing a long black coat and a top hat. | 74_9_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#image-guided-text-generation | .md | He was wearing a long black coat and a top hat.
The little girl ran
```
Looks like IDEFICS noticed the pumpkin on the doorstep and went with a spooky Halloween story about a ghost.
<Tip>
For longer outputs like this, you will greatly benefit from tweaking the text generation strategy. This can help
you significantly improve the quality of the generated output. Check out [Text generation strategies](../generation_strategies)
to learn more.
</Tip> | 74_9_6 |
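For example, switching from beam search to sampling usually produces more varied stories. The values below are illustrative rather than recommendations from the guide, and the snippet reuses `inputs` and `bad_words_ids` from the story example above:
```py
>>> generated_ids = model.generate(
...     **inputs,
...     do_sample=True,          # sample instead of beam search
...     temperature=0.8,         # soften the next-token distribution slightly
...     top_p=0.95,              # nucleus sampling
...     repetition_penalty=1.2,  # discourage the repetition seen above
...     max_new_tokens=200,
...     bad_words_ids=bad_words_ids,
... )
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```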
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#running-inference-in-batch-mode | .md | All of the earlier sections illustrated IDEFICS for a single example. In a very similar fashion, you can run inference
for a batch of examples by passing a list of prompts:
```py
>>> prompts = [
... [ "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
... "This is an image of ",
... ], | 74_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#running-inference-in-batch-mode | .md | ... "This is an image of ",
... ],
... [ "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "This is an image of ",
... ],
... [ "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "This is an image of ",
... ],
... ] | 74_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#running-inference-in-batch-mode | .md | >>> inputs = processor(prompts, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> for i,t in enumerate(generated_text):
... print(f"{i}:\n{t}\n")
0:
This is an image of the Eiffel Tower in Paris, France. | 74_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#running-inference-in-batch-mode | .md | 1:
This is an image of a couple on a picnic blanket.
2:
This is an image of a vegetable stand.
``` | 74_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#idefics-instruct-for-conversational-use | .md | For conversational use cases, you can find fine-tuned instructed versions of the model on the 🤗 Hub:
`HuggingFaceM4/idefics-80b-instruct` and `HuggingFaceM4/idefics-9b-instruct`.
These checkpoints are the result of fine-tuning the respective base models on a mixture of supervised and instruction
fine-tuning datasets, which boosts the downstream performance while making the models more usable in conversational settings. | 74_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#idefics-instruct-for-conversational-use | .md | fine-tuning datasets, which boosts the downstream performance while making the models more usable in conversational settings.
Usage and prompting for the conversational models are very similar to using the base models:
```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor
>>> from accelerate.test_utils.testing import get_backend | 74_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#idefics-instruct-for-conversational-use | .md | >>> device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
>>> checkpoint = "HuggingFaceM4/idefics-9b-instruct"
>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)
>>> processor = AutoProcessor.from_pretrained(checkpoint) | 74_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#idefics-instruct-for-conversational-use | .md | >>> prompts = [
... [
... "User: What is in this image?",
... "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
... "<end_of_utterance>",
... "\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.<end_of_utterance>", | 74_11_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#idefics-instruct-for-conversational-use | .md | ... "\nUser:",
... "https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052",
... "And who is that?<end_of_utterance>",
... "\nAssistant:",
... ],
... ]
>>> # --batched mode
>>> inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device)
>>> # --single sample mode
>>> # inputs = processor(prompts[0], return_tensors="pt").to(device) | 74_11_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#idefics-instruct-for-conversational-use | .md | >>> # Generation args
>>> exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids | 74_11_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/idefics.md | https://huggingface.co/docs/transformers/en/tasks/idefics/#idefics-instruct-for-conversational-use | .md | >>> generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> for i, t in enumerate(generated_text):
... print(f"{i}:\n{t}\n")
``` | 74_11_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 75_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 75_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#summarization | .md | [[open-in-colab]]
<Youtube id="yHnr5Dk2zCI"/>
Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be:
- Extractive: extract the most relevant information from a document.
- Abstractive: generate new text that captures the most relevant information.
This guide will show you how to: | 75_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#summarization | .md | - Abstractive: generate new text that captures the most relevant information.
This guide will show you how to:
1. Finetune [T5](https://huggingface.co/google-t5/t5-small) on the California state bill subset of the [BillSum](https://huggingface.co/datasets/billsum) dataset for abstractive summarization.
2. Use your finetuned model for inference.
<Tip> | 75_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#summarization | .md | 2. Use your finetuned model for inference.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/summarization)
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate rouge_score
``` | 75_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#summarization | .md | ```bash
pip install transformers datasets evaluate rouge_score
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login | 75_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#summarization | .md | >>> notebook_login()
``` | 75_1_4 |
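If you are running this in a plain Python script rather than a notebook, you can use the non-notebook login helper from the same library instead:
```py
>>> from huggingface_hub import login

>>> login()  # prompts for your access token, or pass token="hf_..." directly
```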
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | Start by loading the smaller California state bill subset of the BillSum dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset | 75_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | >>> billsum = load_dataset("billsum", split="ca_test")
```
Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> billsum = billsum.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> billsum["train"][0] | 75_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | {'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the | 75_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions | 75_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe | 75_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', | 75_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employee’s or dependent’s actual or perceived | 75_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | in the provision of benefits, discriminates between employees on the basis of an employee’s or dependent’s actual or perceived gender identity, including, but not limited to, the employee’s or dependent’s identification as transgender.\n(2) For purposes of this section, “contract” includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not | 75_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractor’s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right | 75_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/summarization.md | https://huggingface.co/docs/transformers/en/tasks/summarization/#load-billsum-dataset | .md | Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractor’s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractor’s insurance provider, any request by an employee or | 75_2_9 |