```py
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_swag["validation"],
... shuffle=False,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
```py
>>> model.compile(optimizer=optimizer) # No loss argument!
```
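If you prefer to be explicit, you can still pass a loss yourself. The snippet below is only a sketch of what that might look like for a multiple choice head, whose logits are scored against the index of the correct candidate; it is not required for this guide:
```py
>>> import tensorflow as tf

>>> # Optional: roughly what the built-in default loss does for this task head
>>> model.compile(
...     optimizer=optimizer,
...     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
... )
```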
The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```
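The `compute_metrics` function referenced here is the one defined earlier in the guide. If you are jumping in at this step, a minimal version could look like the sketch below, assuming the 🤗 Evaluate accuracy metric:
```py
>>> import evaluate
>>> import numpy as np

>>> accuracy = evaluate.load("accuracy")

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=labels)
```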
Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_model",
... tokenizer=tokenizer,
... )
```
Then bundle your callbacks together:
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
</Tip>

## Inference

Great, now that you've finetuned a model, you can use it for inference!
Come up with some text and two candidate answers:
```py
>>> prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."
```
<frameworkcontent>
<pt>
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some `labels`:
```py
>>> import torch
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)
```
Pass your inputs and labels to the model and return the `logits`:
```py
>>> from transformers import AutoModelForMultipleChoice

>>> model = AutoModelForMultipleChoice.from_pretrained("username/my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits
```
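The `unsqueeze(0)` is there because multiple choice models expect inputs shaped `(batch_size, num_choices, sequence_length)`, while the tokenizer returned one row per candidate. A quick, optional sanity check:
```py
>>> inputs["input_ids"].shape               # (num_choices, sequence_length)
>>> inputs["input_ids"].unsqueeze(0).shape  # (1, num_choices, sequence_length)
```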
Get the class with the highest probability:
```py
>>> predicted_class = logits.argmax().item()
>>> predicted_class
0
```
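The index follows the order in which the pairs were passed to the tokenizer, so you can map it back to the candidate text:
```py
>>> [candidate1, candidate2][predicted_class]
'The law does not apply to croissants and brioche.'
```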
</pt>
<tf>
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:
```py
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```
Pass your inputs to the model and return the `logits`:
```py
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("username/my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```
Get the class with the highest probability:
```py
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
0
```
</tf>
</frameworkcontent>

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Keypoint Detection

[[open-in-colab]]
Keypoint detection identifies and locates specific points of interest within an image. These keypoints, also known as landmarks, represent meaningful features of objects, such as facial features or object parts. These models take an image input and return the following outputs:
- **Keypoints and Scores**: Points of interest and their confidence scores.
- **Descriptors**: A representation of the image region surrounding each keypoint, capturing its texture, gradient, orientation and other properties.
In this guide, we will show how to extract keypoints from images.
For this tutorial, we will use [SuperPoint](./model_doc/superpoint.md), a foundation model for keypoint detection.
```python
from transformers import AutoImageProcessor, SuperPointForKeypointDetection

processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")
```
Let's test the model on the images below.
<div style="display: flex; align-items: center;">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
alt="Bee"
style="height: 200px; object-fit: contain; margin-right: 10px;">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png"
alt="Cats"
style="height: 200px; object-fit: contain;">
</div>
```python
import torch
from PIL import Image
import requests

url_image_1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
image_1 = Image.open(requests.get(url_image_1, stream=True).raw)
url_image_2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png"
image_2 = Image.open(requests.get(url_image_2, stream=True).raw)

images = [image_1, image_2]
```
We can now process our inputs and infer.
```python
inputs = processor(images, return_tensors="pt").to(model.device, model.dtype)
outputs = model(**inputs)
```
The model output has relative keypoints, descriptors, masks and scores for each item in the batch. The mask highlights areas of the image where keypoints are present.
```python
SuperPointKeypointDescriptionOutput(loss=None, keypoints=tensor([[[0.0437, 0.0167],
[0.0688, 0.0167],
[0.0172, 0.0188],
...,
[0.5984, 0.9812],
[0.6953, 0.9812]]]),
scores=tensor([[0.0056, 0.0053, 0.0079, ..., 0.0125, 0.0539, 0.0377],
[0.0206, 0.0058, 0.0065, ..., 0.0000, 0.0000, 0.0000]],
grad_fn=<CopySlices>), descriptors=tensor([[[-0.0807, 0.0114, -0.1210, ..., -0.1122, 0.0899, 0.0357],
[-0.0807, 0.0114, -0.1210, ..., -0.1122, 0.0899, 0.0357],
...],
grad_fn=<CopySlices>), mask=tensor([[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 0, 0, 0]], dtype=torch.int32), hidden_states=None)
```
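Since the two images contain different numbers of keypoints, the batched tensors are padded and the `mask` marks the valid entries. As a quick sketch, you could keep only the real keypoints of the second image like this (variable names here are just for illustration):
```python
# mask is 1 for real keypoints and 0 for padding
valid = outputs.mask[1].bool()
keypoints_second_image = outputs.keypoints[1][valid]
scores_second_image = outputs.scores[1][valid]
```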
To plot actual keypoints in the image, we need to postprocess the output. To do so, we have to pass the actual image sizes to `post_process_keypoint_detection` along with outputs.
```python
image_sizes = [(image.size[1], image.size[0]) for image in images]
outputs = processor.post_process_keypoint_detection(outputs, image_sizes)
```
The outputs are now a list of dictionaries where each dictionary is a processed output of keypoints, scores and descriptors.
```python
[{'keypoints': tensor([[ 226, 57],
[ 356, 57],
[ 89, 64],
...,
[3604, 3391]], dtype=torch.int32),
'scores': tensor([0.0056, 0.0053, ...], grad_fn=<IndexBackward0>),
'descriptors': tensor([[-0.0807, 0.0114, -0.1210, ..., -0.1122, 0.0899, 0.0357],
[-0.0807, 0.0114, -0.1210, ..., -0.1122, 0.0899, 0.0357]],
grad_fn=<IndexBackward0>)},
{'keypoints': tensor([[ 46, 6],
[ 78, 6],
[422, 6],
[206, 404]], dtype=torch.int32),
'scores': tensor([0.0206, 0.0058, 0.0065, 0.0053, 0.0070, ...], grad_fn=<IndexBackward0>),
'descriptors': tensor([[-0.0525, 0.0726, 0.0270, ..., 0.0389, -0.0189, -0.0211],
[-0.0525, 0.0726, 0.0270, ..., 0.0389, -0.0189, -0.0211]], grad_fn=<IndexBackward0>)}]
```
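The descriptors are what you would use to match keypoints across images, for example with a simple nearest-neighbour search over descriptor distances. A quick sketch of one way to consume the post-processed outputs:
```python
import torch

# Pairwise distances between the descriptors of the two images
desc_bee = outputs[0]["descriptors"].detach()
desc_cats = outputs[1]["descriptors"].detach()
distances = torch.cdist(desc_bee, desc_cats)

# For every keypoint in the bee image, the index of its closest keypoint in the cats image
nearest_match = distances.argmin(dim=1)
```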
We can use these to plot the keypoints.
```python
import matplotlib.pyplot as plt
import torch

for i in range(len(images)):
    keypoints = outputs[i]["keypoints"].detach().numpy()
    scores = outputs[i]["scores"].detach().numpy()
    descriptors = outputs[i]["descriptors"]
    image = images[i]
    image_width, image_height = image.size

    plt.axis('off')
    plt.imshow(image)
    plt.scatter(
        keypoints[:, 0],
        keypoints[:, 1],
        s=scores * 100,
        c='cyan',
        alpha=0.4
    )
    plt.show()
```
Below you can see the outputs.
<div style="display: flex; align-items: center;">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee_keypoint.png"
alt="Bee"
style="height: 200px; object-fit: contain; margin-right: 10px;">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats_keypoint.png"
alt="Cats"
style="height: 200px; object-fit: contain;">
</div>

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Question answering

[[open-in-colab]]
<Youtube id="ajPx5LwJD-I"/>
Question answering tasks return an answer given a question. If you've ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you've used a question answering model before. There are two common types of question answering tasks:
- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.
This guide will show you how to:
1. Finetune [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering.
2. Use your finetuned model for inference.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/question-answering)
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SQuAD dataset

Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset

>>> squad = load_dataset("squad", split="train[:5000]")
```
Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> squad = squad.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
'id': '5733be284776f41900661182',
'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
'title': 'University_of_Notre_Dame'
}
```
There are several important fields here:
- `answers`: the starting character position of the answer in the `context` and the answer text (see the quick check below).
- `context`: background information from which the model needs to extract the answer.
- `question`: the question a model should answer.
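The `answer_start` value is a character offset into `context`; you can convince yourself of this by slicing the context with it (variable names here are just for illustration):
```py
>>> example = squad["train"][0]
>>> start = example["answers"]["answer_start"][0]
>>> answer = example["answers"]["text"][0]
>>> example["context"][start : start + len(answer)] == answer
True
```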
## Preprocess

<Youtube id="qgaM0weJHpA"/>
The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields:
```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
```
There are a few preprocessing steps particular to question answering tasks you should be aware of:
1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation="only_second"`.
2. Next, map the start and end positions of the answer to the original `context` by setting
`return_offset_mapping=True`.
3. With the mapping in hand, now you can find the start and end tokens of the answer. Use the [`~tokenizers.Encoding.sequence_ids`] method to
find which part of the offset corresponds to the `question` and which corresponds to the `context` (see the short example below).
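To make steps 2 and 3 concrete, here is what the offset mapping and sequence ids look like for a single toy example (the question and context strings are made up for illustration):
```py
>>> encoded = tokenizer("Who wrote it?", "It was written by Jane.", return_offsets_mapping=True)
>>> encoded.sequence_ids()       # None for special tokens, 0 for question tokens, 1 for context tokens
>>> encoded["offset_mapping"]    # (start_char, end_char) of each token within its own sequence
```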
Here is how you can create a function to truncate and map the start and end tokens of the `answer` to the `context`:
```py
>>> def preprocess_function(examples):
...     questions = [q.strip() for q in examples["question"]]
...     inputs = tokenizer(
...         questions,
...         examples["context"],
...         max_length=384,
...         truncation="only_second",
...         return_offsets_mapping=True,
...         padding="max_length",
...     )

...     offset_mapping = inputs.pop("offset_mapping")
...     answers = examples["answers"]
...     start_positions = []
...     end_positions = []

...     for i, offset in enumerate(offset_mapping):
...         answer = answers[i]
...         start_char = answer["answer_start"][0]
...         end_char = answer["answer_start"][0] + len(answer["text"][0])
...         sequence_ids = inputs.sequence_ids(i)

...         # Find the start and end of the context
...         idx = 0
...         while sequence_ids[idx] != 1:
...             idx += 1
...         context_start = idx
...         while sequence_ids[idx] == 1:
...             idx += 1
...         context_end = idx - 1

...         # If the answer is not fully inside the context, label it (0, 0)
...         if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
...             start_positions.append(0)
...             end_positions.append(0)
...         else:
...             # Otherwise it's the start and end token positions
...             idx = context_start
...             while idx <= context_end and offset[idx][0] <= start_char:
...                 idx += 1
...             start_positions.append(idx - 1)

...             idx = context_end
...             while idx >= context_start and offset[idx][1] >= end_char:
...                 idx -= 1
...             end_positions.append(idx + 1)

...     inputs["start_positions"] = start_positions
...     inputs["end_positions"] = end_positions
...     return inputs
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don't need:
```py
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```
Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the [`DefaultDataCollator`] does not apply any additional preprocessing such as padding.
<frameworkcontent>
<pt>
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```
</pt>
<tf>
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>

## Train

<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load DistilBERT with [`AutoModelForQuestionAnswering`]:
```py
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_qa_model",
... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )

>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_squad["train"],
... eval_dataset=tokenized_squad["test"],
... processing_class=tokenizer,
... data_collator=data_collator,
... )

>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
... init_lr=2e-5,
... num_warmup_steps=0,
... num_train_steps=total_train_steps,
... )
```
Then you can load DistilBERT with [`TFAutoModelForQuestionAnswering`]:
```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_squad["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_squad["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):
```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```
The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
... output_dir="my_awesome_qa_model",
... tokenizer=tokenizer,
... )
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
</Tip>

## Evaluate

Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance.

If you have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#post-processing) chapter from the 🤗 Hugging Face Course!
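That said, once you have post-processed predictions as text, computing the official metrics is straightforward with the 🤗 Evaluate SQuAD metric. The example ID and answers below are made up only to show the expected input format:
```py
>>> import evaluate

>>> squad_metric = evaluate.load("squad")
>>> predictions = [{"id": "001", "prediction_text": "Saint Bernadette Soubirous"}]
>>> references = [{"id": "001", "answers": {"text": ["Saint Bernadette Soubirous"], "answer_start": [515]}}]
>>> squad_metric.compute(predictions=predictions, references=references)
{'exact_match': 100.0, 'f1': 100.0}
```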
## Inference

Great, now that you've finetuned a model, you can use it for inference!
Come up with a question and some context you'd like the model to predict:
```py
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for question answering with your model, and pass your text to it:
```py
>>> from transformers import pipeline

>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
>>> question_answerer(question=question, context=context)
{'score': 0.2058267742395401,
'start': 10,
'end': 95,
'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:
```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="pt")
```
Pass your inputs to the model and return the `logits`:
```py
>>> import torch
>>> from transformers import AutoModelForQuestionAnswering

>>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> with torch.no_grad():
... outputs = model(**inputs)
```
Get the highest probability from the model output for the start and end positions:
```py
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```
Decode the predicted tokens to get the answer:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:
```py
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```
Pass your inputs to the model and return the `logits`:
```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```
Get the highest probability from the model output for the start and end positions:
```py
>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```
Decode the predicted tokens to get the answer:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</tf>
</frameworkcontent>

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Masked language modeling

[[open-in-colab]]
<Youtube id="mqElG5QJWUg"/>
Masked language modeling predicts a masked token in a sequence, and the model can attend to tokens bidirectionally. This
means the model has full access to the tokens on the left and right. Masked language modeling is great for tasks that
require a good contextual understanding of an entire sequence. BERT is an example of a masked language model.
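If you want to see the task in action before finetuning anything, you can try the base checkpoint with the `fill-mask` pipeline (shown purely as an illustration; the exact predictions will vary):
```py
>>> from transformers import pipeline

>>> mask_filler = pipeline("fill-mask", model="distilbert/distilroberta-base")
>>> mask_filler("The Milky Way is a <mask> galaxy.")
```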
This guide will show you how to:
1. Finetune [DistilRoBERTa](https://huggingface.co/distilbert/distilroberta-base) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/fill-mask)
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load ELI5 dataset

Start by loading the first 5000 examples from the [ELI5-Category](https://huggingface.co/datasets/eli5_category) dataset with the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5_category", split="train[:5000]")
```
Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change will make certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.",
'None yet. It has to be reconciled with a vastly different house bill and then passed again.',
'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?',
'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'],
'score': [21, 19, 5, 3],
'text_urls': [[],
[],
[],
['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']]},
'title_urls': ['url'],
'selftext_urls': ['url']}
```
While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the masked tokens *are* the labels.

## Preprocess

<Youtube id="8PmhEIXhBvI"/>
For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the `text` subfield:
```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilroberta-base")
```
You'll notice from the example above, the `text` field is actually nested inside `answers`. This means you'll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process#flatten) method:
```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers.a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'answers.text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change will make certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.",
'None yet. It has to be reconciled with a vastly different house bill and then passed again.',
'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?',
'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'],
'answers.score': [21, 19, 5, 3],
'answers.text_urls': [[],
[],
[],
['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']],
'title_urls': ['url'],
'selftext_urls': ['url']}
```
Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.
Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
```py
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```
To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:
```py
>>> tokenized_eli5 = eli5.map(
... preprocess_function,
... batched=True,
... num_proc=4,
... remove_columns=eli5["train"].column_names,
... )
```
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.
You can now use a second preprocessing function to
- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.
```py
>>> block_size = 128

>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
...     # customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size