1. SQA: if you're interested in asking follow-up questions related to a table, in a conversational set-up. For example, if you first ask "what's the name of the first actor?" you can then ask a follow-up question such as "how old is he?". Here, questions do not involve any aggregation (all questions are cell selection questions).
2. WTQ: if you're not interested in asking questions in a conversational set-up, but rather just in asking questions related to a table, which might involve aggregation, such as counting the number of rows, summing cell values or averaging cell values. You can then for example ask "what's the total number of goals Cristiano Ronaldo scored in his career?". This case is also called **weak supervision**, since the model itself must learn the appropriate aggregation operator (SUM/COUNT/AVERAGE/NONE) given only the answer to the question as supervision.
3. WikiSQL-supervised: this dataset is based on WikiSQL, with the model being given the ground truth aggregation operator during training. This is also called **strong supervision**. Here, learning the appropriate aggregation operator is much easier.

To summarize:

| **Task**                            | **Example dataset** | **Description**                                                                                          |
|-------------------------------------|---------------------|----------------------------------------------------------------------------------------------------------|
| Conversational                      | SQA                 | Conversational, only cell selection questions                                                            |
| Weak supervision for aggregation    | WTQ                 | Questions might involve aggregation, and the model must learn this given only the answer as supervision  |
| Strong supervision for aggregation  | WikiSQL-supervised  | Questions might involve aggregation, and the model must learn this given the gold aggregation operator   |

<frameworkcontent>
<pt>
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below.

```py
>>> from transformers import TapasConfig, TapasForQuestionAnswering

>>> # for example, the base sized model with default SQA configuration
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base")

>>> # or, the base sized model with WTQ configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

>>> # or, the base sized model with WikiSQL configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```

Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [`TapasConfig`], and then create a [`TapasForQuestionAnswering`] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, you can do it this way:

```py
>>> from transformers import TapasConfig, TapasForQuestionAnswering

>>> # you can initialize the classification heads any way you want (see docs of TapasConfig)
>>> config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
>>> # initializing the pre-trained base sized model with our custom classification heads
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
</pt>
<tf>
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. Be sure to have installed the [tensorflow_probability](https://github.com/tensorflow/probability) dependency:

```py
>>> from transformers import TapasConfig, TFTapasForQuestionAnswering

>>> # for example, the base sized model with default SQA configuration
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base")

>>> # or, the base sized model with WTQ configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

>>> # or, the base sized model with WikiSQL configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```

Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [`TapasConfig`], and then create a [`TFTapasForQuestionAnswering`] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, you can do it this way:

```py
>>> from transformers import TapasConfig, TFTapasForQuestionAnswering

>>> # you can initialize the classification heads any way you want (see docs of TapasConfig)
>>> config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
>>> # initializing the pre-trained base sized model with our custom classification heads
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
</tf>
</frameworkcontent>

What you can also do is start from an already fine-tuned checkpoint. Note that the checkpoint fine-tuned on WTQ has some issues due to the L2 loss, which is somewhat brittle. See [here](https://github.com/google-research/tapas/issues/91#issuecomment-735719340) for more info.
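For example, a minimal sketch of loading such a fine-tuned checkpoint (here the SQA one, which is referenced elsewhere on this page):

```py
>>> from transformers import TapasForQuestionAnswering

>>> # start from a checkpoint that was already fine-tuned on SQA
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-sqa")
```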
For a list of all pre-trained and fine-tuned TAPAS checkpoints available on HuggingFace's hub, see [here](https://huggingface.co/models?search=tapas).

**STEP 2: Prepare your data in the SQA format**

Second, no matter what you picked above, you should prepare your dataset in the [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) format. This format is a TSV/CSV file with the following columns:

- `id`: optional, id of the table-question pair, for bookkeeping purposes.
- `annotator`: optional, id of the person who annotated the table-question pair, for bookkeeping purposes.
- `position`: integer indicating if the question is the first, second, third,... related to the table. Only required in case of a conversational setup (SQA). You don't need this column if you're going for WTQ/WikiSQL-supervised.
- `question`: string
- `table_file`: string, name of a csv file containing the tabular data
- `answer_coordinates`: list of one or more tuples (each tuple being a cell coordinate, i.e. a row, column pair that is part of the answer)
- `answer_text`: list of one or more strings (each string being a cell value that is part of the answer)
- `aggregation_label`: index of the aggregation operator. Only required in case of strong supervision for aggregation (the WikiSQL-supervised case)
- `float_answer`: the float answer to the question, if there is one (np.nan if there isn't). Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL)
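To make this format concrete, here is a minimal sketch of writing a single training example in the weak supervision case. The file names and values are made up for illustration:

```py
>>> import pandas as pd

>>> # one (hypothetical) table-question pair in the SQA format
>>> row = {
...     "id": "example-0",
...     "annotator": 0,
...     "position": 0,
...     "question": "What is the total number of movies?",
...     "table_file": "table_0.csv",
...     "answer_coordinates": [(0, 1), (1, 1), (2, 1)],
...     "answer_text": ["87", "53", "69"],
...     "float_answer": 209.0,
... }
>>> pd.DataFrame([row]).to_csv("train.tsv", sep="\t", index=False)
```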

The tables themselves should be present in a folder, each table being a separate csv file. Note that the authors of the TAPAS algorithm used conversion scripts with some automated logic to convert the other datasets (WTQ, WikiSQL) into the SQA format. The author explains this [here](https://github.com/google-research/tapas/issues/50#issuecomment-705465960). A conversion of this script that works with HuggingFace's implementation can be found [here](https://github.com/NielsRogge/tapas_utils). Interestingly, these conversion scripts are not perfect (the `answer_coordinates` and `float_answer` fields are populated based on the `answer_text`), meaning that WTQ and WikiSQL results could actually be improved.

**STEP 3: Convert your data into tensors using TapasTokenizer**

<frameworkcontent>
<pt>
Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [`TapasTokenizer`] to convert table-question pairs into `input_ids`, `attention_mask`, `token_type_ids` and so on. Again, based on which of the three cases you picked above, [`TapasForQuestionAnswering`] requires different inputs to be fine-tuned:

| **Task**                            | **Required inputs**                                                                                                   |
|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| Conversational                      | `input_ids`, `attention_mask`, `token_type_ids`, `labels`                                                             |
| Weak supervision for aggregation    | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `numeric_values`, `numeric_values_scale`, `float_answer`   |
| Strong supervision for aggregation  | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `aggregation_labels`                                       |

[`TapasTokenizer`] creates the `labels`, `numeric_values` and `numeric_values_scale` based on the `answer_coordinates` and `answer_text` columns of the TSV file. The `float_answer` and `aggregation_labels` are already in the TSV file of step 2. Here's an example:

```py
>>> from transformers import TapasTokenizer
>>> import pandas as pd

>>> model_name = "google/tapas-base"
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> queries = [
...     "What is the name of the first actor?",
...     "How many movies has George Clooney played in?",
...     "What is the total number of movies?",
... ]
>>> answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
>>> answer_text = [["Brad Pitt"], ["69"], ["209"]]
>>> table = pd.DataFrame.from_dict(data)
>>> inputs = tokenizer(
...     table=table,
...     queries=queries,
...     answer_coordinates=answer_coordinates,
...     answer_text=answer_text,
...     padding="max_length",
...     return_tensors="pt",
... )
>>> inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[ ... ]]), 'token_type_ids': tensor([[[ ... ]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
```

Note that [`TapasTokenizer`] expects the data of the table to be **text-only**. You can use `.astype(str)` on a dataframe to turn it into text-only data. Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:

```py
>>> import torch
>>> import pandas as pd

>>> tsv_path = "your_path_to_the_tsv_file"
>>> table_csv_path = "your_path_to_a_directory_containing_all_csv_files"


>>> class TableDataset(torch.utils.data.Dataset):
...     def __init__(self, data, tokenizer):
...         self.data = data
...         self.tokenizer = tokenizer

...     def __getitem__(self, idx):
...         item = self.data.iloc[idx]
...         table = pd.read_csv(table_csv_path + item.table_file).astype(
...             str
...         )  # be sure to make your table data text only
...         encoding = self.tokenizer(
...             table=table,
...             queries=item.question,
...             answer_coordinates=item.answer_coordinates,
...             answer_text=item.answer_text,
...             truncation=True,
...             padding="max_length",
...             return_tensors="pt",
...         )
...         # remove the batch dimension which the tokenizer adds by default
...         encoding = {key: val.squeeze(0) for key, val in encoding.items()}
...         # add the float_answer which is also required (weak supervision for aggregation case)
...         encoding["float_answer"] = torch.tensor(item.float_answer)
...         return encoding

...     def __len__(self):
...         return len(self.data)

>>> data = pd.read_csv(tsv_path, sep="\t")
>>> train_dataset = TableDataset(data, tokenizer)
>>> train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
```
</pt>
<tf>
Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [`TapasTokenizer`] to convert table-question pairs into `input_ids`, `attention_mask`, `token_type_ids` and so on. Again, based on which of the three cases you picked above, [`TFTapasForQuestionAnswering`] requires different inputs to be fine-tuned:

| **Task**                            | **Required inputs**                                                                                                   |
|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| Conversational                      | `input_ids`, `attention_mask`, `token_type_ids`, `labels`                                                             |
| Weak supervision for aggregation    | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `numeric_values`, `numeric_values_scale`, `float_answer`   |
| Strong supervision for aggregation  | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `aggregation_labels`                                       |

[`TapasTokenizer`] creates the `labels`, `numeric_values` and `numeric_values_scale` based on the `answer_coordinates` and `answer_text` columns of the TSV file. The `float_answer` and `aggregation_labels` are already in the TSV file of step 2. Here's an example:

```py
>>> from transformers import TapasTokenizer
>>> import pandas as pd

>>> model_name = "google/tapas-base"
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> queries = [
...     "What is the name of the first actor?",
...     "How many movies has George Clooney played in?",
...     "What is the total number of movies?",
... ]
>>> answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
>>> answer_text = [["Brad Pitt"], ["69"], ["209"]]
>>> table = pd.DataFrame.from_dict(data)
>>> inputs = tokenizer(
...     table=table,
...     queries=queries,
...     answer_coordinates=answer_coordinates,
...     answer_text=answer_text,
...     padding="max_length",
...     return_tensors="tf",
... )
>>> inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[ ... ]]), 'token_type_ids': tensor([[[ ... ]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
```

Note that [`TapasTokenizer`] expects the data of the table to be **text-only**. You can use `.astype(str)` on a dataframe to turn it into text-only data. Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:

```py
>>> import tensorflow as tf
>>> import pandas as pd

>>> tsv_path = "your_path_to_the_tsv_file"
>>> table_csv_path = "your_path_to_a_directory_containing_all_csv_files"


>>> class TableDataset:
...     def __init__(self, data, tokenizer):
...         self.data = data
...         self.tokenizer = tokenizer

...     def __iter__(self):
...         for idx in range(self.__len__()):
...             item = self.data.iloc[idx]
...             table = pd.read_csv(table_csv_path + item.table_file).astype(
...                 str
...             )  # be sure to make your table data text only
...             encoding = self.tokenizer(
...                 table=table,
...                 queries=item.question,
...                 answer_coordinates=item.answer_coordinates,
...                 answer_text=item.answer_text,
...                 truncation=True,
...                 padding="max_length",
...                 return_tensors="tf",
...             )
...             # remove the batch dimension which the tokenizer adds by default
...             encoding = {key: tf.squeeze(val, 0) for key, val in encoding.items()}
...             # add the float_answer which is also required (weak supervision for aggregation case)
...             encoding["float_answer"] = tf.convert_to_tensor(item.float_answer, dtype=tf.float32)
...             yield encoding["input_ids"], encoding["attention_mask"], encoding["numeric_values"], encoding[
...                 "numeric_values_scale"
...             ], encoding["token_type_ids"], encoding["labels"], encoding["float_answer"]

...     def __len__(self):
...         return len(self.data)

>>> data = pd.read_csv(tsv_path, sep="\t")
>>> train_dataset = TableDataset(data, tokenizer)
>>> output_signature = (
...     tf.TensorSpec(shape=(512,), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.float32),
...     tf.TensorSpec(shape=(512,), dtype=tf.float32),
...     tf.TensorSpec(shape=(512, 7), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.float32),
... )
>>> train_dataloader = tf.data.Dataset.from_generator(train_dataset, output_signature=output_signature).batch(32)
```
</tf>
</frameworkcontent>

Note that here, we encode each table-question pair independently. This is fine as long as your dataset is **not conversational**. If your dataset involves conversational questions (such as in SQA), you should first group together the `queries`, `answer_coordinates` and `answer_text` per table (in the order of their `position` index) and batch encode each table with its questions, as sketched after this paragraph. This will make sure that the `prev_labels` token types (see the docs of [`TapasTokenizer`]) are set correctly. See [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) for more info, and [this notebook](https://github.com/kamalkraj/Tapas-Tutorial/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) for more info regarding the TensorFlow model.
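
As an illustration, here is a minimal sketch of that grouping step (shown in PyTorch), assuming a DataFrame with the step-2 columns in which `answer_coordinates` and `answer_text` have already been parsed from their TSV string representation into Python lists:

```py
>>> import pandas as pd
>>> from transformers import TapasTokenizer

>>> tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
>>> data = pd.read_csv("train.tsv", sep="\t")

>>> # group the questions per table, in the order of their position index,
>>> # so that the tokenizer can fill in the prev_labels token types correctly
>>> for table_file, group in data.groupby("table_file"):
...     group = group.sort_values("position")
...     table = pd.read_csv(table_file).astype(str)  # table data must be text-only
...     encoding = tokenizer(
...         table=table,
...         queries=list(group["question"]),
...         answer_coordinates=list(group["answer_coordinates"]),
...         answer_text=list(group["answer_text"]),
...         padding="max_length",
...         return_tensors="pt",
...     )
```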
**STEP 4: Train (fine-tune) the model**

<frameworkcontent>
<pt>
You can then fine-tune [`TapasForQuestionAnswering`] as follows (shown here for the weak supervision for aggregation case):

```py
>>> from torch.optim import AdamW
>>> from transformers import TapasConfig, TapasForQuestionAnswering

>>> # this is the default WTQ configuration
>>> config = TapasConfig(
...     num_aggregation_labels=4,
...     use_answer_as_supervision=True,
...     answer_loss_cutoff=0.664694,
...     cell_selection_preference=0.207951,
...     huber_loss_delta=0.121194,
...     init_cell_selection_weights_to_zero=True,
...     select_one_column=True,
...     allow_empty_column_selection=False,
...     temperature=0.0352513,
... )
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
>>> optimizer = AdamW(model.parameters(), lr=5e-5)

>>> model.train()
>>> for epoch in range(2):  # loop over the dataset multiple times
...     for batch in train_dataloader:
...         # get the inputs
...         input_ids = batch["input_ids"]
...         attention_mask = batch["attention_mask"]
...         token_type_ids = batch["token_type_ids"]
...         labels = batch["labels"]
...         numeric_values = batch["numeric_values"]
...         numeric_values_scale = batch["numeric_values_scale"]
...         float_answer = batch["float_answer"]

...         # zero the parameter gradients
...         optimizer.zero_grad()

...         # forward + backward + optimize
...         outputs = model(
...             input_ids=input_ids,
...             attention_mask=attention_mask,
...             token_type_ids=token_type_ids,
...             labels=labels,
...             numeric_values=numeric_values,
...             numeric_values_scale=numeric_values_scale,
...             float_answer=float_answer,
...         )
...         loss = outputs.loss
...         loss.backward()
...         optimizer.step()
```
</pt>
<tf>
You can then fine-tune [`TFTapasForQuestionAnswering`] as follows (shown here for the weak supervision for aggregation case):

```py
>>> import tensorflow as tf
>>> from transformers import TapasConfig, TFTapasForQuestionAnswering

>>> # this is the default WTQ configuration
>>> config = TapasConfig(
...     num_aggregation_labels=4,
...     use_answer_as_supervision=True,
...     answer_loss_cutoff=0.664694,
...     cell_selection_preference=0.207951,
...     huber_loss_delta=0.121194,
...     init_cell_selection_weights_to_zero=True,
...     select_one_column=True,
...     allow_empty_column_selection=False,
...     temperature=0.0352513,
... )
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

>>> optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)

>>> for epoch in range(2):  # loop over the dataset multiple times
...     for batch in train_dataloader:
...         # get the inputs
...         input_ids = batch[0]
...         attention_mask = batch[1]
...         token_type_ids = batch[4]
...         labels = batch[5]
...         numeric_values = batch[2]
...         numeric_values_scale = batch[3]
...         float_answer = batch[6]

...         # forward + backward + optimize
...         with tf.GradientTape() as tape:
...             outputs = model(
...                 input_ids=input_ids,
...                 attention_mask=attention_mask,
...                 token_type_ids=token_type_ids,
...                 labels=labels,
...                 numeric_values=numeric_values,
...                 numeric_values_scale=numeric_values_scale,
...                 float_answer=float_answer,
...             )
...         grads = tape.gradient(outputs.loss, model.trainable_weights)
...         optimizer.apply_gradients(zip(grads, model.trainable_weights))
```
</tf>
</frameworkcontent>

<frameworkcontent>
<pt>
Here we explain how you can use [`TapasForQuestionAnswering`] or [`TFTapasForQuestionAnswering`] for inference (i.e. making predictions on new data). For inference, only `input_ids`, `attention_mask` and `token_type_ids` (which you can obtain using [`TapasTokenizer`]) have to be provided to the model to obtain the logits. Next, you can use the handy [`~models.tapas.tokenization_tapas.convert_logits_to_predictions`] method to convert these into predicted coordinates and optional aggregation indices.

However, note that inference is **different** depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that:

```py
>>> from transformers import TapasTokenizer, TapasForQuestionAnswering
>>> import pandas as pd

>>> model_name = "google/tapas-base-finetuned-wtq"
>>> model = TapasForQuestionAnswering.from_pretrained(model_name)
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> queries = [
...     "What is the name of the first actor?",
...     "How many movies has George Clooney played in?",
...     "What is the total number of movies?",
... ]
>>> table = pd.DataFrame.from_dict(data)
>>> inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
>>> outputs = model(**inputs)
>>> predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
...     inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
... )

>>> # let's print out the results:
>>> id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
>>> aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]

>>> answers = []
>>> for coordinates in predicted_answer_coordinates:
...     if len(coordinates) == 1:
...         # only a single cell:
...         answers.append(table.iat[coordinates[0]])
...     else:
...         # multiple cells
...         cell_values = []
...         for coordinate in coordinates:
...             cell_values.append(table.iat[coordinate])
...         answers.append(", ".join(cell_values))

>>> display(table)
>>> print("")
>>> for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
...     print(query)
...     if predicted_agg == "NONE":
...         print("Predicted answer: " + answer)
...     else:
...         print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
```
</pt>
<tf>
Here we explain how you can use [`TFTapasForQuestionAnswering`] for inference (i.e. making predictions on new data). For inference, only `input_ids`, `attention_mask` and `token_type_ids` (which you can obtain using [`TapasTokenizer`]) have to be provided to the model to obtain the logits. Next, you can use the handy [`~models.tapas.tokenization_tapas.convert_logits_to_predictions`] method to convert these into predicted coordinates and optional aggregation indices.

However, note that inference is **different** depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that:

```py
>>> from transformers import TapasTokenizer, TFTapasForQuestionAnswering
>>> import pandas as pd

>>> model_name = "google/tapas-base-finetuned-wtq"
>>> model = TFTapasForQuestionAnswering.from_pretrained(model_name)
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> queries = [
...     "What is the name of the first actor?",
...     "How many movies has George Clooney played in?",
...     "What is the total number of movies?",
... ]
>>> table = pd.DataFrame.from_dict(data)
>>> inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
>>> outputs = model(**inputs)
>>> predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
...     inputs, outputs.logits, outputs.logits_aggregation
... )

>>> # let's print out the results:
>>> id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
>>> aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]

>>> answers = []
>>> for coordinates in predicted_answer_coordinates:
...     if len(coordinates) == 1:
...         # only a single cell:
...         answers.append(table.iat[coordinates[0]])
...     else:
...         # multiple cells
...         cell_values = []
...         for coordinate in coordinates:
...             cell_values.append(table.iat[coordinate])
...         answers.append(", ".join(cell_values))

>>> display(table)
>>> print("")
>>> for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
...     print(query)
...     if predicted_agg == "NONE":
...         print("Predicted answer: " + answer)
...     else:
...         print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
```
</tf>
</frameworkcontent>

In case of a conversational set-up, each table-question pair must be provided **sequentially** to the model, such that the `prev_labels` token types can be overwritten by the predicted `labels` of the previous table-question pair, as sketched below. Again, more info can be found in [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) (for PyTorch) and [this notebook](https://github.com/kamalkraj/Tapas-Tutorial/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) (for TensorFlow).
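
To make the sequential procedure concrete, here is a minimal PyTorch sketch of that loop. The `prev_labels` token type sits at index 3 of the `token_type_ids` produced by [`TapasTokenizer`]. As a simplification, this sketch derives the previous labels by thresholding the cell selection logits at zero (i.e. probability 0.5); the notebooks linked above derive them from the predicted answer coordinates instead:

```py
>>> import pandas as pd
>>> from transformers import TapasTokenizer, TapasForQuestionAnswering

>>> model_name = "google/tapas-base-finetuned-sqa"
>>> model = TapasForQuestionAnswering.from_pretrained(model_name)
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> table = pd.DataFrame.from_dict(data)
>>> queries = ["What is the name of the first actor?", "How many movies has he played in?"]

>>> # encode all questions of the conversation against the same table at once
>>> inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")

>>> prev_labels = None
>>> for idx in range(len(queries)):
...     token_type_ids = inputs["token_type_ids"][idx : idx + 1].clone()
...     if prev_labels is not None:
...         # overwrite the prev_labels token types (index 3) with the labels
...         # predicted for the previous question of the conversation
...         token_type_ids[:, :, 3] = prev_labels
...     outputs = model(
...         input_ids=inputs["input_ids"][idx : idx + 1],
...         attention_mask=inputs["attention_mask"][idx : idx + 1],
...         token_type_ids=token_type_ids,
...     )
...     prev_labels = (outputs.logits > 0).long() * inputs["attention_mask"][idx : idx + 1]
```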
- [Text classification task guide](../tasks/sequence_classification)
- [Masked language modeling task guide](../tasks/masked_language_modeling)

models.tapas.modeling_tapas.TableQuestionAnsweringOutput

Output type of [`TapasForQuestionAnswering`].

Args:
    loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` (and possibly `answer`, `aggregation_labels`, `numeric_values` and `numeric_values_scale`) are provided):
        Total loss as the sum of the hierarchical cell selection log-likelihood loss and (optionally) the semi-supervised regression loss and (optionally) supervised loss for aggregations.
    logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
        Prediction scores of the cell selection head, for every token.
    logits_aggregation (`torch.FloatTensor`, *optional*, of shape `(batch_size, num_aggregation_labels)`):
        Prediction scores of the aggregation head, for every aggregation operator.
    hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
        Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
    attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
        Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
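
As a quick illustration of these fields, here is a minimal sketch of a forward pass that reads them out, reusing the WTQ checkpoint from the inference example above:

```py
>>> import pandas as pd
>>> from transformers import TapasTokenizer, TapasForQuestionAnswering

>>> model_name = "google/tapas-base-finetuned-wtq"
>>> model = TapasForQuestionAnswering.from_pretrained(model_name)
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> table = pd.DataFrame({"Actors": ["Brad Pitt"], "Number of movies": ["87"]}).astype(str)
>>> inputs = tokenizer(table=table, queries=["How many movies has Brad Pitt played in?"], return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits  # shape (batch_size, sequence_length): cell selection scores, one per token
>>> logits_aggregation = outputs.logits_aggregation  # shape (batch_size, num_aggregation_labels)
>>> loss = outputs.loss  # None here, since no labels were provided
```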

This is the configuration class to store the configuration of a [`TapasModel`]. It is used to instantiate a TAPAS model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the TAPAS [google/tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Hyperparameters additional to BERT are taken from run_task_main.py and hparam_utils.py of the original implementation. The original implementation is available at https://github.com/google-research/tapas/tree/master.

Args:
    vocab_size (`int`, *optional*, defaults to 30522):
        Vocabulary size of the TAPAS model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`TapasModel`].
    hidden_size (`int`, *optional*, defaults to 768):
        Dimensionality of the encoder layers and the pooler layer.
    num_hidden_layers (`int`, *optional*, defaults to 12):
        Number of hidden layers in the Transformer encoder.
    num_attention_heads (`int`, *optional*, defaults to 12):
        Number of attention heads for each attention layer in the Transformer encoder.
    intermediate_size (`int`, *optional*, defaults to 3072):
        Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
    hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
        The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"swish"` and `"gelu_new"` are supported.
    hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
        The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
    attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
        The dropout ratio for the attention probabilities.
    max_position_embeddings (`int`, *optional*, defaults to 1024):
        The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).