Dataset columns: source (string, 470 distinct values) · url (string, 49–167 characters) · file_type (string, 1 distinct value) · chunk (string, 1–512 characters) · chunk_id (string, 5–9 characters)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#add-a-new-adapter
.md
# use adapter_2 model.set_adapter("adapter_2") output_enabled = model.generate(**inputs) print(tokenizer.decode(output_enabled[0], skip_special_tokens=True)) ```
71_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#enable-and-disable-adapters
.md
Once you've added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module: ```py from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import PeftConfig model_id = "facebook/opt-350m" adapter_model_id = "ybelkada/opt-350m-lora" tokenizer = AutoTokenizer.from_pretrained(model_id) text = "Hello" inputs = tokenizer(text, return_tensors="pt")
71_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#enable-and-disable-adapters
.md
model = AutoModelForCausalLM.from_pretrained(model_id) peft_config = PeftConfig.from_pretrained(adapter_model_id) # to initiate with random weights peft_config.init_lora_weights = False model.add_adapter(peft_config) model.enable_adapters() output = model.generate(**inputs) ``` To disable the adapter module: ```py model.disable_adapters() output = model.generate(**inputs) ```
71_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#train-a-peft-adapter
.md
PEFT adapters are supported by the [`Trainer`] class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter: <Tip> If you aren't familiar with fine-tuning a model with [`Trainer`], take a look at the [Fine-tune a pretrained model](training) tutorial. </Tip>
71_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#train-a-peft-adapter
.md
</Tip> 1. Define your adapter configuration with the task type and hyperparameters (see [`~peft.LoraConfig`] for more details about what the hyperparameters do). ```py from peft import LoraConfig
71_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#train-a-peft-adapter
.md
peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias="none", task_type="CAUSAL_LM", ) ``` 2. Add adapter to the model. ```py model.add_adapter(peft_config) ``` 3. Now you can pass the model to [`Trainer`]! ```py trainer = Trainer(model=model, ...) trainer.train() ``` To save your trained adapter and load it back: ```py model.save_pretrained(save_dir) model = AutoModelForCausalLM.from_pretrained(save_dir) ```
71_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#add-additional-trainable-layers-to-a-peft-adapter
.md
You can also fine-tune additional trainable adapters on top of a model that has adapters attached by passing `modules_to_save` in your PEFT config. For example, if you want to also fine-tune the lm_head on top of a model with a LoRA adapter: ```py from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import LoraConfig model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id)
71_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#add-additional-trainable-layers-to-a-peft-adapter
.md
model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id) lora_config = LoraConfig( target_modules=["q_proj", "k_proj"], modules_to_save=["lm_head"], ) model.add_adapter(lora_config) ```
71_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#api-docs
.md
integrations.PeftAdapterMixin A class containing all functions for loading and using adapter weights that are supported in the PEFT library. For more details about adapters and injecting them into a transformer-based model, check out the documentation of the PEFT library: https://huggingface.co/docs/peft/index Currently supported PEFT methods are all non-prefix tuning methods. Below is the list of supported PEFT methods that anyone can load, train and run with this mixin class:
71_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#api-docs
.md
that anyone can load, train and run with this mixin class: - Low Rank Adapters (LoRA): https://huggingface.co/docs/peft/conceptual_guides/lora - IA3: https://huggingface.co/docs/peft/conceptual_guides/ia3 - AdaLora: https://arxiv.org/abs/2303.10512 Other PEFT methods such as prompt tuning and prompt learning are out of scope, as these adapters are not "injectable" into a torch module. To use these methods, please refer to the usage guide of the PEFT library.
71_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#api-docs
.md
into a torch module. To use these methods, please refer to the usage guide of the PEFT library. With this mixin, if the correct PEFT version is installed, it is possible to: - Load an adapter stored on a local path or in a remote Hub repository, and inject it into the model - Attach new adapters to the model and train them with Trainer or on your own. - Attach multiple adapters and iteratively activate / deactivate them - Activate / deactivate all adapters from the model.
71_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/peft.md
https://huggingface.co/docs/transformers/en/peft/#api-docs
.md
- Attach multiple adapters and iteratively activate / deactivate them - Activate / deactivate all adapters from the model. - Get the `state_dict` of the active adapter. - load_adapter - add_adapter - set_adapter - disable_adapters - enable_adapters - active_adapters - get_adapter_state_dict <!-- TODO: (@younesbelkada @stevhliu) - Link to PEFT docs for further details - Trainer - 8-bit / 4-bit examples ? -->
71_10_3
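The mixin methods listed in the chunk above can be combined into a short end-to-end workflow. Below is a minimal sketch (an illustration, not taken from the docs themselves) that assumes the `facebook/opt-350m` base model and the `ybelkada/opt-350m-lora` adapter used earlier on this page, plus a recent `peft` installation:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-350m"
adapter_id = "ybelkada/opt-350m-lora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Hello", return_tensors="pt")

# load an adapter from the Hub and give it a name
model.load_adapter(adapter_id, adapter_name="lora_1")

# make it the active adapter and generate with it
model.set_adapter("lora_1")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))

# temporarily fall back to the base model, then re-enable the adapter
model.disable_adapters()
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
model.enable_adapters()

# inspect the active adapters and grab the active adapter's weights
print(model.active_adapters())
adapter_state_dict = model.get_adapter_state_dict()
```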
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
72_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
72_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#translation
.md
[[open-in-colab]] <Youtube id="1JvfrvZgi6c"/> Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used for translation between texts in different languages, but they can also be used for speech, or for some combination of the two, like text-to-speech or speech-to-text.
72_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#translation
.md
This guide will show you how to: 1. Finetune [T5](https://huggingface.co/google-t5/t5-small) on the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset to translate English text to French. 2. Use your finetuned model for inference. <Tip> To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/translation). </Tip>
72_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#translation
.md
</Tip> Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate sacrebleu ``` We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: ```py >>> from huggingface_hub import notebook_login
72_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#translation
.md
>>> notebook_login() ```
72_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#load-opus-books-dataset
.md
Start by loading the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset from the 🤗 Datasets library: ```py >>> from datasets import load_dataset
72_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#load-opus-books-dataset
.md
>>> books = load_dataset("opus_books", "en-fr") ``` Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method: ```py >>> books = books["train"].train_test_split(test_size=0.2) ``` Then take a look at an example: ```py >>> books["train"][0] {'id': '90560', 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
72_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#load-opus-books-dataset
.md
{'id': '90560', 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.', 'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}} ``` `translation`: an English and French translation of the text.
72_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#preprocess
.md
<Youtube id="XAR8jnZZuUs"/> The next step is to load a T5 tokenizer to process the English-French language pairs: ```py >>> from transformers import AutoTokenizer
72_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#preprocess
.md
>>> checkpoint = "google-t5/t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) ``` The preprocessing function you want to create needs to: 1. Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.
72_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#preprocess
.md
2. Set the target language (French) in the `text_target` parameter to ensure the tokenizer processes the target text correctly. If you don't set `text_target`, the tokenizer processes the target text as English. 3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter. ```py >>> source_lang = "en" >>> target_lang = "fr" >>> prefix = "translate English to French: "
72_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#preprocess
.md
>>> def preprocess_function(examples): ... inputs = [prefix + example[source_lang] for example in examples["translation"]] ... targets = [example[target_lang] for example in examples["translation"]] ... model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True) ... return model_inputs ```
72_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#preprocess
.md
... return model_inputs ``` To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once: ```py >>> tokenized_books = books.map(preprocess_function, batched=True) ```
72_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#preprocess
.md
```py >>> tokenized_books = books.map(preprocess_function, batched=True) ``` Now create a batch of examples using [`DataCollatorForSeq2Seq`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForSeq2Seq
72_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#preprocess
.md
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf") ``` </tf> </frameworkcontent>
72_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#evaluate
.md
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): ```py >>> import evaluate
72_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#evaluate
.md
>>> metric = evaluate.load("sacrebleu") ``` Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the SacreBLEU score: ```py >>> import numpy as np >>> def postprocess_text(preds, labels): ... preds = [pred.strip() for pred in preds] ... labels = [[label.strip()] for label in labels] ... return preds, labels
72_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#evaluate
.md
... return preds, labels >>> def compute_metrics(eval_preds): ... preds, labels = eval_preds ... if isinstance(preds, tuple): ... preds = preds[0] ... decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) ... labels = np.where(labels != -100, labels, tokenizer.pad_token_id) ... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) ... decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
72_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#evaluate
.md
... decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) ... result = metric.compute(predictions=decoded_preds, references=decoded_labels) ... result = {"bleu": result["score"]}
72_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#evaluate
.md
... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] ... result["gen_len"] = np.mean(prediction_lens) ... result = {k: round(v, 4) for k, v in result.items()} ... return result ``` Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
72_4_4
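As a quick sanity check (not part of the original guide), you can call the loaded SacreBLEU metric directly on a made-up prediction/reference pair to see the kind of dictionary `compute_metrics` builds its `"bleu"` value from; the strings below are purely illustrative:

```py
>>> toy_preds = ["Les légumes partagent des ressources avec des bactéries."]
>>> toy_refs = [["Les légumes partagent des ressources avec des bactéries fixatrices d'azote."]]
>>> result = metric.compute(predictions=toy_preds, references=toy_refs)
>>> round(result["score"], 4)  # SacreBLEU reports a score between 0 and 100
```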
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
<frameworkcontent> <pt> <Tip> If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! </Tip> You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]: ```py >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
72_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` At this point, only three steps remain: 1. Define your training hyperparameters in [`Seq2SeqTrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the SacreBLEU metric and save the training checkpoint.
72_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
2. Pass the training arguments to [`Seq2SeqTrainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. ```py >>> training_args = Seq2SeqTrainingArguments( ... output_dir="my_awesome_opus_books_model", ... eval_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3,
72_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3, ... num_train_epochs=2, ... predict_with_generate=True, ... fp16=True, #change to bf16=True for XPU ... push_to_hub=True, ... )
72_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> trainer = Seq2SeqTrainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_books["train"], ... eval_dataset=tokenized_books["test"], ... processing_class=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... )
72_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> trainer.train() ``` Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! </Tip>
72_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
</Tip> To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: ```py >>> from transformers import AdamWeightDecay
72_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` Then you can load T5 with [`TFAutoModelForSeq2SeqLM`]: ```py >>> from transformers import TFAutoModelForSeq2SeqLM
72_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_books["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... )
72_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> tf_test_set = model.prepare_tf_dataset( ... tokenized_books["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: ```py >>> import tensorflow as tf
72_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> model.compile(optimizer=optimizer) # No loss argument! ``` The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]: ```py >>> from transformers.keras_callbacks import KerasMetricCallback
72_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set) ``` Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: ```py >>> from transformers.keras_callbacks import PushToHubCallback
72_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
>>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_opus_books_model", ... tokenizer=tokenizer, ... ) ``` Then bundle your callbacks together: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: ```py
72_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) ``` Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! </tf> </frameworkcontent> <Tip> For a more in-depth example of how to finetune a model for translation, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)
72_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#train
.md
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb). </Tip>
72_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like to translate to another language. For T5, you need to prefix your input depending on the task you're working on. For translation from English to French, you should prefix your input as shown below: ```py >>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria." ```
72_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
```py >>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria." ``` The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for translation with your model, and pass your text to it: ```py >>> from transformers import pipeline
72_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
# Change `xx` to the language of the input and `yy` to the language of the desired output. # Examples: "en" for English, "fr" for French, "de" for German, "es" for Spanish, "zh" for Chinese, etc; translation_en_to_fr translates English to French # You can view all the lists of languages here - https://huggingface.co/languages >>> translator = pipeline("translation_xx_to_yy", model="username/my_awesome_opus_books_model") >>> translator(text)
72_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
>>> translator = pipeline("translation_xx_to_yy", model="username/my_awesome_opus_books_model") >>> translator(text) [{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}] ``` You can also manually replicate the results of the `pipeline` if you'd like: <frameworkcontent> <pt> Tokenize the text and return the `input_ids` as PyTorch tensors: ```py >>> from transformers import AutoTokenizer
72_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
>>> tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_opus_books_model") >>> inputs = tokenizer(text, return_tensors="pt").input_ids ``` Use the [`~generation.GenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. ```py >>> from transformers import AutoModelForSeq2SeqLM
72_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
>>> model = AutoModelForSeq2SeqLM.from_pretrained("username/my_awesome_opus_books_model") >>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95) ``` Decode the generated token ids back into text: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Les lignées partagent des ressources avec des bactéries enfixant l'azote.' ``` </pt> <tf> Tokenize the text and return the `input_ids` as TensorFlow tensors: ```py
72_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
``` </pt> <tf> Tokenize the text and return the `input_ids` as TensorFlow tensors: ```py >>> from transformers import AutoTokenizer
72_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
>>> tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_opus_books_model") >>> inputs = tokenizer(text, return_tensors="tf").input_ids ``` Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. ```py >>> from transformers import TFAutoModelForSeq2SeqLM
72_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/translation.md
https://huggingface.co/docs/transformers/en/tasks/translation/#inference
.md
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("username/my_awesome_opus_books_model") >>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95) ``` Decode the generated token ids back into text: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.' ``` </tf> </frameworkcontent>
72_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
73_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
73_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#document-question-answering
.md
[[open-in-colab]] Document Question Answering, also referred to as Document Visual Question Answering, is a task that involves providing answers to questions posed about document images. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. These models utilize multiple modalities, including text, the positions of words (bounding boxes), and the image itself. This guide illustrates how to:
73_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#document-question-answering
.md
text, the positions of words (bounding boxes), and the image itself. This guide illustrates how to: - Fine-tune [LayoutLMv2](../model_doc/layoutlmv2) on the [DocVQA dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut). - Use your fine-tuned model for inference. <Tip> To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/image-to-text) </Tip>
73_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#document-question-answering
.md
</Tip> LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden states of the tokens, to predict the positions of the start and end tokens of the answer. In other words, the problem is treated as extractive question answering: given the context, extract which piece of information answers the question. The context comes from the output of an OCR engine, here it is Google's Tesseract.
73_1_2
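To make the extractive formulation above concrete, here is a minimal sketch (an illustration, not part of the original guide) of how a question-answering head's start/end logits are turned into an answer span. It reuses the dataset loaded later in this guide; note that the base checkpoint's QA head is randomly initialized, so swap in your fine-tuned checkpoint to get meaningful answers:

```py
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import AutoProcessor, LayoutLMv2ForQuestionAnswering

>>> example = load_dataset("nielsr/docvqa_1200_examples", split="test")[0]
>>> image = example["image"].convert("RGB")
>>> question = example["query"]["en"]

>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
>>> model = LayoutLMv2ForQuestionAnswering.from_pretrained("microsoft/layoutlmv2-base-uncased")

>>> # the processor runs OCR with tesseract and builds the multimodal inputs
>>> encoding = processor(image, question, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**encoding)

>>> # extractive QA: pick the most likely start and end tokens and decode that span
>>> start_idx = outputs.start_logits.argmax(-1).item()
>>> end_idx = outputs.end_logits.argmax(-1).item()
>>> print(processor.tokenizer.decode(encoding["input_ids"][0][start_idx : end_idx + 1]))
```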
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#document-question-answering
.md
of information answers the question. The context comes from the output of an OCR engine, here it is Google's Tesseract. Before you begin, make sure you have all the necessary libraries installed. LayoutLMv2 depends on detectron2, torchvision and tesseract. ```bash pip install -q transformers datasets ``` ```bash pip install 'git+https://github.com/facebookresearch/detectron2.git' pip install torchvision ``` ```bash sudo apt install tesseract-ocr pip install -q pytesseract ```
73_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#document-question-answering
.md
pip install torchvision ``` ```bash sudo apt install tesseract-ocr pip install -q pytesseract ``` Once you have installed all of the dependencies, restart your runtime. We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in: ```py >>> from huggingface_hub import notebook_login
73_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#document-question-answering
.md
>>> notebook_login() ``` Let's define some global variables. ```py >>> model_checkpoint = "microsoft/layoutlmv2-base-uncased" >>> batch_size = 4 ```
73_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
In this guide we use a small sample of preprocessed DocVQA that you can find on the 🤗 Hub. If you'd like to use the full DocVQA dataset, you can register and download it on the [DocVQA homepage](https://rrc.cvc.uab.es/?ch=17). If you do so, check out [how to load files into a 🤗 dataset](https://huggingface.co/docs/datasets/loading#local-and-remote-files) to proceed with this guide. ```py >>> from datasets import load_dataset
73_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
>>> dataset = load_dataset("nielsr/docvqa_1200_examples") >>> dataset DatasetDict({ train: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 1000 }) test: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 200 }) }) ``` As you can see, the dataset is split into train and test sets already. Take a look at a random example to familiarize yourself with the features. ```py
73_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
yourself with the features. ```py >>> dataset["train"].features ``` Here's what the individual fields represent: * `id`: the example's id * `image`: a PIL.Image.Image object containing the document image * `query`: the question string - a natural language question, available in several languages * `answers`: a list of correct answers provided by human annotators * `words` and `bounding_boxes`: the results of OCR, which we will not use here
73_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
* `words` and `bounding_boxes`: the results of OCR, which we will not use here * `answer`: an answer matched by a different model which we will not use here Let's leave only English questions, and drop the `answer` feature which appears to contain predictions by another model. We'll also take the first of the answers from the set provided by the annotators. Alternatively, you can randomly sample it. ```py
73_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
```py >>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"]) >>> updated_dataset = updated_dataset.map( ... lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"] ... ) ``` Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with `max_position_embeddings = 512` (you can
73_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with `max_position_embeddings = 512` (you can find this information in the [checkpoint's `config.json` file](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)). We could truncate the examples, but to avoid a situation where the answer sits at the end of a long document and ends up truncated, here we'll remove the few examples where the embedding is likely to end up longer than 512.
73_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
here we'll remove the few examples where the embedding is likely to end up longer than 512. If most of the documents in your dataset are long, you can implement a sliding window strategy - check out [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) for details. ```py >>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512) ```
73_2_6
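The sliding-window strategy mentioned above is not implemented in this guide, but as a rough sketch of the general pattern (shown here on plain text with a generic fast tokenizer rather than the LayoutLMv2 processor), the tokenizer's `stride` and `return_overflowing_tokens` options split one long input into several overlapping windows:

```py
>>> from transformers import AutoTokenizer

>>> tok = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> question = "Who is in cc in this letter?"
>>> long_document = "word " * 1000  # stand-in for a long OCR'd document

>>> windows = tok(
...     question,
...     long_document,
...     max_length=512,
...     truncation="only_second",  # only the document gets truncated, never the question
...     stride=128,  # overlap between consecutive windows
...     return_overflowing_tokens=True,
... )
>>> len(windows["input_ids"])  # several overlapping 512-token windows instead of one truncated input
```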
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
```py >>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512) ``` At this point let's also remove the OCR features from this dataset. These are a result of OCR for fine-tuning a different model. They would still require some processing if we wanted to use them, as they do not match the input requirements of the model we use in this guide. Instead, we can use the [`LayoutLMv2Processor`] on the original data for both OCR and
73_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
of the model we use in this guide. Instead, we can use the [`LayoutLMv2Processor`] on the original data for both OCR and tokenization. This way we'll get inputs that match the model's expected input. If you want to process images manually, check out the [`LayoutLMv2` model documentation](../model_doc/layoutlmv2) to learn what input format the model expects. ```py >>> updated_dataset = updated_dataset.remove_columns("words") >>> updated_dataset = updated_dataset.remove_columns("bounding_boxes") ```
73_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#load-the-data
.md
>>> updated_dataset = updated_dataset.remove_columns("bounding_boxes") ``` Finally, the data exploration won't be complete if we don't peek at an image example. ```py >>> updated_dataset["train"][11]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg" alt="DocVQA Image Example"/> </div>
73_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocess-the-data
.md
The Document Question Answering task is a multimodal task, and you need to make sure that the inputs from each modality are preprocessed according to the model's expectations. Let's start by loading the [`LayoutLMv2Processor`], which internally combines an image processor that can handle image data and a tokenizer that can encode text data. ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(model_checkpoint) ```
73_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-document-images
.md
First, let's prepare the document images for the model with the help of the `image_processor` from the processor. By default, the image processor resizes the images to 224x224, makes sure they have the correct order of color channels, and applies OCR with tesseract to get words and normalized bounding boxes. In this tutorial, all of these defaults are exactly what we need. Write a function that applies the default image processing to a batch of images and returns the results of OCR. ```py
73_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-document-images
.md
Write a function that applies the default image processing to a batch of images and returns the results of OCR. ```py >>> image_processor = processor.image_processor
73_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-document-images
.md
>>> def get_ocr_words_and_boxes(examples): ... images = [image.convert("RGB") for image in examples["image"]] ... encoded_inputs = image_processor(images) ... examples["image"] = encoded_inputs.pixel_values ... examples["words"] = encoded_inputs.words ... examples["boxes"] = encoded_inputs.boxes
73_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-document-images
.md
... return examples ``` To apply this preprocessing to the entire dataset in a fast way, use [`~datasets.Dataset.map`]. ```py >>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2) ```
73_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model. This involves converting the words and boxes that we got in the previous step to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. For preprocessing text, we'll need the `tokenizer` from the processor. ```py >>> tokenizer = processor.tokenizer ```
73_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
```py >>> tokenizer = processor.tokenizer ``` On top of the preprocessing mentioned above, we also need to add the labels for the model. For `xxxForQuestionAnswering` models in 🤗 Transformers, the labels consist of the `start_positions` and `end_positions`, indicating which token is at the start and which token is at the end of the answer. Let's start with that. Define a helper function that can find a sublist (the answer split into words) in a larger list (the words list).
73_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
This function takes two lists as input, `words_list` and `answer_list`. It then iterates over `words_list` and checks whether the current word in `words_list` (`words_list[i]`) is equal to the first word of `answer_list` (`answer_list[0]`), and whether the sublist of `words_list` starting from the current word and of the same length as `answer_list` is equal to `answer_list`. If this condition is true, it means that a match has been found, and the function will record the match, its starting index (`idx`),
73_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
and its ending index (`idx + len(answer_list) - 1`). If more than one match is found, the function returns only the first one. If no match is found, the function returns (`None`, 0, 0). ```py >>> def subfinder(words_list, answer_list): ... matches = [] ... start_indices = [] ... end_indices = [] ... for idx, i in enumerate(range(len(words_list))): ... if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list:
73_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
... if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list: ... matches.append(answer_list) ... start_indices.append(idx) ... end_indices.append(idx + len(answer_list) - 1) ... if matches: ... return matches[0], start_indices[0], end_indices[0] ... else: ... return None, 0, 0 ``` To illustrate how this function finds the position of the answer, let's use it on an example: ```py
73_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
``` To illustrate how this function finds the position of the answer, let's use it on an example: ```py >>> example = dataset_with_ocr["train"][1] >>> words = [word.lower() for word in example["words"]] >>> match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split()) >>> print("Question: ", example["question"]) >>> print("Words:", words) >>> print("Answer: ", example["answer"]) >>> print("start_index", word_idx_start) >>> print("end_index", word_idx_end)
73_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
>>> print("Answer: ", example["answer"]) >>> print("start_index", word_idx_start) >>> print("end_index", word_idx_end) Question: Who is in cc in this letter?
73_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to',
73_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate',
73_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major',
73_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and',
73_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.',
73_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', '«short', 'cigarette,', 'tobacco', 'section', '30', 'mm.', '«extremely', 'fast', 'buming', 'cigarette.', '«novel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while',
73_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
'«novel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', '«more', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes',
73_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
'*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498']
73_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
Answer: T.F. Riehl start_index 17 end_index 18 ``` Once examples are encoded, however, they will look like this: ```py >>> encoding = tokenizer(example["question"], example["words"], example["boxes"]) >>> tokenizer.decode(encoding["input_ids"]) [CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ... ``` We'll need to find the position of the answer in the encoded input.
73_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
``` We'll need to find the position of the answer in the encoded input. * `token_type_ids` tells us which tokens are part of the question, and which ones are part of the document's words. * `tokenizer.cls_token_id` will help find the special token at the beginning of the input. * `word_ids` will help match the answer found in the original `words` to the same answer in the full encoded input and determine the start/end position of the answer in the encoded input.
73_5_16
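As a rough illustration of the `word_ids` idea (a sketch with assumed names, not the guide's exact implementation), you can map the word-level match found by `subfinder` to token-level positions like this, reusing `example`, `word_idx_start` and `word_idx_end` from the snippet above:

```py
>>> encoding = tokenizer(example["question"], example["words"], example["boxes"])
>>> word_ids = encoding.word_ids()  # word index each token came from (None for special tokens)
>>> sequence_ids = encoding.sequence_ids()  # 0 = question tokens, 1 = document tokens, None = special tokens

>>> # answer tokens are the document tokens whose word index falls inside the matched word span
>>> answer_token_positions = [
...     idx
...     for idx, (seq_id, word_id) in enumerate(zip(sequence_ids, word_ids))
...     if seq_id == 1 and word_idx_start <= word_id <= word_idx_end
... ]
>>> start_position, end_position = answer_token_positions[0], answer_token_positions[-1]
```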
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
the start/end position of the answer in the encoded input. With that in mind, let's create a function to encode a batch of examples in the dataset: ```py >>> def encode_dataset(examples, max_length=512): ... questions = examples["question"] ... words = examples["words"] ... boxes = examples["boxes"] ... answers = examples["answer"]
73_5_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
... # encode the batch of examples and initialize the start_positions and end_positions ... encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True) ... start_positions = [] ... end_positions = [] ... # loop through the examples in the batch ... for i in range(len(questions)): ... cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id)
73_5_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks/document_question_answering.md
https://huggingface.co/docs/transformers/en/tasks/document_question_answering/#preprocessing-text-data
.md
... # find the position of the answer in example's words ... words_example = [word.lower() for word in words[i]] ... answer = answers[i] ... match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split())
73_5_19