```py
>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:

```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions = np.argmax(eval_pred.predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
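If you'd like to sanity-check `compute_metrics` before training, you can call it on fake logits. The snippet below is a minimal sketch; `DummyEvalPred` is a hypothetical stand-in for the `EvalPrediction` object the [`Trainer`] passes in, and the logits and labels are made up:

```py
>>> from collections import namedtuple

>>> # hypothetical stand-in for transformers.EvalPrediction (illustrative only)
>>> DummyEvalPred = namedtuple("DummyEvalPred", ["predictions", "label_ids"])
>>> logits = np.array([[0.2, 0.8], [0.9, 0.1], [0.3, 0.7]])  # fake model outputs
>>> labels = np.array([1, 0, 0])  # two of the three argmax predictions match
>>> compute_metrics(DummyEvalPred(predictions=logits, label_ids=labels))
{'accuracy': 0.6666666666666666}
```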
## Train
<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForAudioClassification`] along with the number of expected labels, and the label mappings:

```py
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer

>>> num_labels = len(id2label)
>>> model = AutoModelForAudioClassification.from_pretrained(
...     "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
... )
```

At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to fine-tune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_mind_model",
...     eval_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=3e-5,
...     per_device_train_batch_size=32,
...     gradient_accumulation_steps=4,
...     per_device_eval_batch_size=32,
...     num_train_epochs=10,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
... )
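>>> # Illustrative aside, not part of the original recipe: gradient accumulation
>>> # multiplies the effective batch size, so each optimizer step above sees
>>> # per_device_train_batch_size * gradient_accumulation_steps examples per device.
>>> 32 * 4
128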
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     processing_class=feature_extractor,
...     compute_metrics=compute_metrics,
... )
>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```

</pt>
</frameworkcontent>

<Tip>

For a more in-depth example of how to fine-tune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).

</Tip>
## Inference
Great, now that you've fine-tuned a model, you can use it for inference!

Load an audio file for inference. Remember to resample the audio file to match the model's sampling rate, if necessary.

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for audio classification with your model, and pass your audio file to it:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
>>> classifier(audio_file)
[
    {'score': 0.09766869246959686, 'label': 'cash_deposit'},
    {'score': 0.07998877018690109, 'label': 'app_error'},
    {'score': 0.0781070664525032, 'label': 'joint_account'},
    {'score': 0.07667109370231628, 'label': 'pay_bill'},
    {'score': 0.0755252093076706, 'label': 'balance'}
]
```
You can also manually replicate the results of the `pipeline` if you'd like:

<frameworkcontent>
<pt>
Load a feature extractor to preprocess the audio file and return the inputs as PyTorch tensors:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```py
>>> from transformers import AutoModelForAudioClassification

>>> import torch

>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a label:

```py
>>> predicted_class_ids = torch.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'cash_deposit'
```

</pt>
</frameworkcontent>
# Automatic speech recognition
[[open-in-colab]]

<Youtube id="TksaY_FDgnk"/>

Automatic speech recognition (ASR) converts a speech signal to text, mapping a sequence of audio inputs to text outputs. Virtual assistants like Siri and Alexa use ASR models to help users every day, and there are many other useful user-facing applications like live captioning and note-taking during meetings.

This guide will show you how to:
1. Fine-tune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to transcribe audio to text.
2. Use your fine-tuned model for inference.

<Tip>

To see all architectures and checkpoints compatible with this task, we recommend checking the [task page](https://huggingface.co/tasks/automatic-speech-recognition).

</Tip>
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate jiwer
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
## Load MInDS-14 dataset
Start by loading a smaller subset of the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset from the 🤗 Datasets library. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset, Audio

>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```

Split the dataset's `train` split into a train and test set with the [`~Dataset.train_test_split`] method:

```py
>>> minds = minds.train_test_split(test_size=0.2)
```

Then take a look at the dataset:

```py
>>> minds
DatasetDict({
    train: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 16
    })
    test: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 4
    })
})
```

While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, this guide focuses on the `audio` and `transcription` columns. Remove the other columns with the [`~datasets.Dataset.remove_columns`] method:

```py
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```
Review the example again:

```py
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414,  0.        ,  0.        , ...,  0.00024414,
        0.00024414,  0.00024414], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 8000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```

There are two fields:
- `audio`: a 1-dimensional `array` of the speech signal; the `audio` column must be accessed to load and resample the audio file.
- `transcription`: the target text.
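Accessing the `audio` column is what triggers decoding and resampling, so a quick way to verify the audio loads correctly is to index into it. This is a minimal check (illustrative, assuming the `minds` dataset from above):

```py
>>> sample = minds["train"][0]["audio"]  # decoding happens on access
>>> print(sample["array"].dtype, sample["sampling_rate"])  # e.g. float32 8000
```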
## Preprocess
The next step is to load a Wav2Vec2 processor to process the audio signal:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```

The MInDS-14 dataset has a sampling rate of 8000Hz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16000Hz to use the pretrained Wav2Vec2 model:

```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
         2.78103951e-04,  2.38446111e-04,  1.18740834e-04], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 16000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
As you can see in the `transcription` above, the text contains a mix of uppercase and lowercase characters. The Wav2Vec2 tokenizer is only trained on uppercase characters, so you'll need to make sure the text matches the tokenizer's vocabulary:

```py
>>> def uppercase(example):
...     return {"transcription": example["transcription"].upper()}

>>> minds = minds.map(uppercase)
```

Now create a preprocessing function that:

1. Calls the `audio` column to load and resample the audio file.
2. Extracts the `input_values` from the audio file and tokenizes the `transcription` column with the processor.

```py
>>> def prepare_dataset(batch):
...     audio = batch["audio"]
...     batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"])
...     batch["input_length"] = len(batch["input_values"][0])
...     return batch
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by increasing the number of processes with the `num_proc` parameter. Remove the columns you don't need with the `remove_columns` parameter:

```py
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```
🤗 Transformers doesn't have a data collator for ASR, so you'll need to adapt the [`DataCollatorWithPadding`] to create a batch of examples. It'll also dynamically pad your text and labels to the length of the longest element in its batch (instead of the entire dataset) so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting `padding=True`, dynamic padding is more efficient.
Unlike other data collators, this specific data collator needs to apply a different padding method to `input_values` and `labels`:

```py
>>> import torch

>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union


>>> @dataclass
... class DataCollatorCTCWithPadding:
...     processor: AutoProcessor
...     padding: Union[bool, str] = "longest"
...     def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
...         # split inputs and labels since they have to be of different lengths and need
...         # different padding methods
...         input_features = [{"input_values": feature["input_values"][0]} for feature in features]
...         label_features = [{"input_ids": feature["labels"]} for feature in features]
...         batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
...         labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")
...         # replace padding with -100 to ignore loss correctly
...         labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
...         batch["labels"] = labels
...         return batch
```

Now instantiate your `DataCollatorCTCWithPadding`:

```py
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```
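To see what the collator produces, you can call it on a couple of fake examples of different lengths. This is a minimal sketch: the raw values and label ids are made up, and the shapes follow from padding to the longest element in the batch:

```py
>>> # two illustrative examples: input_values as [list-of-floats] (matching
>>> # prepare_dataset's batched output) and labels as token id lists
>>> features = [
...     {"input_values": [[0.1] * 10], "labels": [5, 12, 7]},
...     {"input_values": [[0.2] * 6], "labels": [3, 9]},
... ]
>>> batch = data_collator(features)
>>> batch["input_values"].shape  # padded to the longest input in the batch
torch.Size([2, 10])
>>> batch["labels"]  # padded label positions are masked with -100
tensor([[   5,   12,    7],
        [   3,    9, -100]])
```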
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [word error rate](https://huggingface.co/spaces/evaluate-metric/wer) (WER) metric (refer to the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about loading and computing metrics):

```py
>>> import evaluate

>>> wer = evaluate.load("wer")
```

Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the WER:

```py
>>> import numpy as np


>>> def compute_metrics(pred):
...     pred_logits = pred.predictions
...     pred_ids = np.argmax(pred_logits, axis=-1)
...     pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
...     pred_str = processor.batch_decode(pred_ids)
...     label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
...     wer_score = wer.compute(predictions=pred_str, references=label_str)  # avoid shadowing the `wer` metric
...     return {"wer": wer_score}
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
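If you want a quick feel for the metric itself, you can call `wer.compute` directly on toy strings (illustrative values only):

```py
>>> # one substituted word out of two reference words -> WER of 0.5
>>> wer.compute(predictions=["HELLO WORLD"], references=["HELLO THERE"])
0.5
```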
## Train
<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

</Tip>

You are now ready to start training your model! Load Wav2Vec2 with [`AutoModelForCTC`]. Specify the reduction to apply with the `ctc_loss_reduction` parameter. It is often better to use the average instead of the default summation:

```py
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer

>>> model = AutoModelForCTC.from_pretrained(
...     "facebook/wav2vec2-base",
...     ctc_loss_reduction="mean",
...     pad_token_id=processor.tokenizer.pad_token_id,
... )
```

At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the WER and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to fine-tune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_asr_mind_model",
...     per_device_train_batch_size=8,
...     gradient_accumulation_steps=2,
...     learning_rate=1e-5,
...     warmup_steps=500,
...     max_steps=2000,
...     gradient_checkpointing=True,
...     fp16=True,
...     group_by_length=True,
...     eval_strategy="steps",
...     per_device_eval_batch_size=8,
...     save_steps=1000,
...     eval_steps=1000,
...     logging_steps=25,
...     load_best_model_at_end=True,
...     metric_for_best_model="wer",
...     greater_is_better=False,
...     push_to_hub=True,
... )
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     processing_class=processor,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )
>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so it's accessible to everyone:

```py
>>> trainer.push_to_hub()
```

</pt>
</frameworkcontent>

<Tip>
For a more in-depth example of how to fine-tune a model for automatic speech recognition, take a look at this blog [post](https://huggingface.co/blog/fine-tune-wav2vec2-english) for English ASR and this [post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for multilingual ASR.

</Tip>
## Inference
Great, now that you've fine-tuned a model, you can use it for inference!

Load an audio file you'd like to run inference on. Remember to resample the audio file to match the model's sampling rate, if necessary!

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for automatic speech recognition with your model, and pass your audio file to it:

```py
>>> from transformers import pipeline

>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```

<Tip>

The transcription is decent, but it could be better! Try finetuning your model on more examples to get even better results!

</Tip>
You can also manually replicate the results of the `pipeline` if you'd like:

<frameworkcontent>
<pt>
Load a processor to preprocess the audio file and transcription and return the inputs as PyTorch tensors:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```py
>>> from transformers import AutoModelForCTC

>>> import torch

>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the predicted `input_ids` with the highest probability, and use the processor to decode the predicted `input_ids` back into text:

```py
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```

</pt>
</frameworkcontent>
# Causal language modeling
[[open-in-colab]]

There are two types of language modeling, causal and masked. This guide illustrates causal language modeling. Causal language models are frequently used for text generation. You can use these models for creative applications like a choose-your-own text adventure, or an intelligent coding assistant like Copilot or CodeParrot.

<Youtube id="Vpjb1lu0MDk"/>
Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model.
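"Attending only to tokens on the left" corresponds to a lower-triangular attention mask. The short sketch below (illustrative, not part of the original guide) shows the mask for a 4-token sequence, where row *i* marks the positions token *i* can see:

```py
>>> import torch

>>> # causal attention mask: position i may only attend to positions <= i
>>> torch.tril(torch.ones(4, 4))
tensor([[1., 0., 0., 0.],
        [1., 1., 0., 0.],
        [1., 1., 1., 0.],
        [1., 1., 1., 1.]])
```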
This guide will show you how to:

1. Finetune [DistilGPT2](https://huggingface.co/distilbert/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

<Tip>

To see all architectures and checkpoints compatible with this task, we recommend checking the [task page](https://huggingface.co/tasks/text-generation).

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
## Load ELI5 dataset
Start by loading the first 5000 examples from the [ELI5-Category](https://huggingface.co/datasets/eli5_category) dataset with the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5_category", split="train[:5000]")
```

Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:

```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> eli5["train"][0]
{'q_id': '7h191n',
 'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
 'selftext': '',
 'category': 'Economics',
 'subreddit': 'explainlikeimfive',
 'answers': {'a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
  'text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change will make
certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.",
   'None yet. It has to be reconciled with a vastly different house bill and then passed again.',
   'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?',
   'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'],
  'score': [21, 19, 5, 3],
  'text_urls': [[], [], [], ['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']]},
 'title_urls': ['url'],
 'selftext_urls': ['url']}
```
While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is that you don't need labels (also known as an unsupervised task) because the next word *is* the label.
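To make that concrete, here is a small illustration (not from the original guide) of how every prefix of a sentence pairs with the word that follows it as its label:

```py
>>> words = "the cat sat on".split()
>>> # each prefix is an input; the word that follows it is the label
>>> [(words[:i], words[i]) for i in range(1, len(words))]
[(['the'], 'cat'), (['the', 'cat'], 'sat'), (['the', 'cat', 'sat'], 'on')]
```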
## Preprocess
<Youtube id="ma1TrR7gE7I"/>

The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
```

You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process#flatten) method:
```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'q_id': '7h191n',
 'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
 'selftext': '',
 'category': 'Economics',
 'subreddit': 'explainlikeimfive',
 'answers.a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
 'answers.text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change
will make certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.",
   'None yet. It has to be reconciled with a vastly different house bill and then passed again.',
   'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?',
   'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'],
 'answers.score': [21, 19, 5, 3],
 'answers.text_urls': [[], [], [], ['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']],
 'title_urls': ['url'],
 'selftext_urls': ['url']}
```
Each subfield is now a separate column, as indicated by the `answers` prefix, and the `text` field is now a list. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
```py
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:
```py
>>> tokenized_eli5 = eli5.map(
...     preprocess_function,
...     batched=True,
...     num_proc=4,
...     remove_columns=eli5["train"].column_names,
... )
```

This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.
You can now use a second preprocessing function to

- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.

```py
>>> block_size = 128

>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
...     # customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split by chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     result["labels"] = result["input_ids"].copy()
...     return result
```
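To see the chunking behavior concretely, here's a minimal sketch with toy values (not from the original guide): two "examples" of 150 tokens each concatenate to 300 tokens, and since `300 // 128 * 128 = 256`, you get two full blocks of 128 while the 44-token remainder is dropped:

```py
>>> toy = {"input_ids": [list(range(150)), list(range(150))]}
>>> out = group_texts(toy)
>>> [len(chunk) for chunk in out["input_ids"]]
[128, 128]
>>> out["labels"] == out["input_ids"]  # labels are a copy of the inputs
True
```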
Apply the `group_texts` function over the entire dataset:

```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```

Now create a batch of examples using [`DataCollatorForLanguageModeling`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
<frameworkcontent>
<pt>
Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
</pt>
<tf>
Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
```
</tf>
</frameworkcontent>
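If you're curious what the collator produces, you can call it on a couple of examples of different lengths. This is a minimal sketch with made-up token ids, assuming the PyTorch collator from above: padded positions get the pad token (here the GPT-2 end-of-sequence id, 50256) in `input_ids`, and `-100` in `labels` so they're ignored by the loss:

```py
>>> batch = data_collator([{"input_ids": [464, 3290, 318]}, {"input_ids": [464, 3290]}])
>>> batch["input_ids"]
tensor([[  464,  3290,   318],
        [  464,  3290, 50256]])
>>> batch["labels"]
tensor([[ 464, 3290,  318],
        [ 464, 3290, -100]])
```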
## Train
<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the [basic tutorial](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load DistilGPT2 with [`AutoModelForCausalLM`]:

```py
>>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator.
3. Call [`~Trainer.train`] to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_clm-model",
...     eval_strategy="epoch",
...     learning_rate=2e-5,
...     weight_decay=0.01,
...     push_to_hub=True,
... )
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
...     tokenizer=tokenizer,
... )

>>> trainer.train()
```

Once training is completed, use the [`~transformers.Trainer.evaluate`] method to evaluate your model and get its perplexity:

```py
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
```

Perplexity is the exponential of the cross-entropy loss, so lower is better. Then share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

If you aren't familiar with finetuning a model with Keras, take a look at the [basic tutorial](../training#train-a-tensorflow-model-with-keras)!

</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load DistilGPT2 with [`TFAutoModelForCausalLM`]:

```py
>>> from transformers import TFAutoModelForCausalLM

>>> model = TFAutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
```

Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     lm_dataset["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
...     lm_dataset["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)  # No loss argument!
```

You can push your model to the Hub during training by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_eli5_clm-model",
...     tokenizer=tokenizer,
... )
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).

</Tip>