source (stringclasses, 470 values)
url (stringlengths, 49-167)
file_type (stringclasses, 1 value)
chunk (stringlengths, 1-512)
chunk_id (stringlengths, 5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> outputs = model.generate(
...     pixel_values.to(device),
...     decoder_input_ids=decoder_input_ids.to(device),
...     max_length=model.decoder.config.max_position_embeddings,
...     pad_token_id=processor.tokenizer.pad_token_id,
...     eos_token_id=processor.tokenizer.eos_token_id,
...     use_cache=True,
...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
...     return_dict_in_generate=True,
... )
345_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
>>> print(processor.token2json(sequence))
{'class': 'advertisement'}
```

- Step-by-step Document Parsing

```py
>>> import re
345_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)  # doctest: +IGNORE_RESULT
345_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[2]["image"] >>> # prepare decoder inputs >>> task_prompt = "<s_cord-v2>" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values
345_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> outputs = model.generate(
...     pixel_values.to(device),
...     decoder_input_ids=decoder_input_ids.to(device),
...     max_length=model.decoder.config.max_position_embeddings,
...     pad_token_id=processor.tokenizer.pad_token_id,
...     eos_token_id=processor.tokenizer.eos_token_id,
...     use_cache=True,
...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
...     return_dict_in_generate=True,
... )
345_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
>>> print(processor.token2json(sequence))
345_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> print(processor.token2json(sequence))
{'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}}
```

- Step-by-step Document Visual Question Answering (DocVQA)

```py
>>> import re
345_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)  # doctest: +IGNORE_RESULT
345_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image from the DocVQA dataset >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[0]["image"]
345_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> # prepare decoder inputs
>>> task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>"
>>> question = "When is the coffee break?"
>>> prompt = task_prompt.replace("{user_input}", question)
>>> decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
345_3_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> outputs = model.generate(
...     pixel_values.to(device),
...     decoder_input_ids=decoder_input_ids.to(device),
...     max_length=model.decoder.config.max_position_embeddings,
...     pad_token_id=processor.tokenizer.pad_token_id,
...     eos_token_id=processor.tokenizer.eos_token_id,
...     use_cache=True,
...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
...     return_dict_in_generate=True,
... )
345_3_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#inference-examples
.md
>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
>>> print(processor.token2json(sequence))
{'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'}
```

See the [model hub](https://huggingface.co/models?filter=donut) to look for Donut checkpoints.
345_3_15
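The step-by-step examples above can also be condensed with the `document-question-answering` pipeline, which wraps the same preprocessing, generation and decoding. Below is a minimal sketch using the same DocVQA checkpoint and test image; the exact output format may differ slightly across library versions.

```python
from datasets import load_dataset
from transformers import pipeline

# Document question answering with the Donut DocVQA checkpoint via the pipeline API
doc_qa = pipeline("document-question-answering", model="naver-clova-ix/donut-base-finetuned-docvqa")

dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]

print(doc_qa(image=image, question="When is the coffee break?"))
# expected: a list with one dict containing the predicted 'answer'
```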
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#training
.md
We refer to the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Donut).
345_4_0
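The notebooks linked above cover fine-tuning end to end; the key point is that [`VisionEncoderDecoderModel`] computes a loss when `labels` (the target token sequence) are passed alongside `pixel_values`. The sketch below illustrates that setup under assumptions: the task token `<s_my-task>`, the field tokens and the target string are hypothetical placeholders that would normally come from your annotated dataset.

```python
from datasets import load_dataset
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

# Donut fine-tuning first adds the task prompt and field tokens to the tokenizer
processor.tokenizer.add_tokens(["<s_my-task>", "<s_total>", "</s_total>"])  # hypothetical tokens
model.decoder.resize_token_embeddings(len(processor.tokenizer))
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.decoder_start_token_id = processor.tokenizer.convert_tokens_to_ids("<s_my-task>")

dataset = load_dataset("hf-internal-testing/example-documents", split="test")
pixel_values = processor(dataset[1]["image"], return_tensors="pt").pixel_values

# the ground-truth JSON is flattened into a token sequence and used as labels
target = "<s_my-task><s_total>17,000</s_total></s>"  # illustrative target sequence
labels = processor.tokenizer(target, add_special_tokens=False, return_tensors="pt").input_ids

loss = model(pixel_values=pixel_values, labels=labels).loss
```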
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
This is the configuration class to store the configuration of a [`DonutSwinModel`]. It is used to instantiate a Donut model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Donut [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
345_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 4): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. embed_dim (`int`, *optional*, defaults to 96): Dimensionality of patch embedding.
345_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
The number of input channels. embed_dim (`int`, *optional*, defaults to 96): Dimensionality of patch embedding. depths (`list(int)`, *optional*, defaults to `[2, 2, 6, 2]`): Depth of each layer in the Transformer encoder. num_heads (`list(int)`, *optional*, defaults to `[3, 6, 12, 24]`): Number of attention heads in each layer of the Transformer encoder. window_size (`int`, *optional*, defaults to 7): Size of windows. mlp_ratio (`float`, *optional*, defaults to 4.0):
345_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
window_size (`int`, *optional*, defaults to 7): Size of windows. mlp_ratio (`float`, *optional*, defaults to 4.0): Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings and encoder.
345_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. drop_path_rate (`float`, *optional*, defaults to 0.1): Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
345_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
`"selu"` and `"gelu_new"` are supported. use_absolute_embeddings (`bool`, *optional*, defaults to `False`): Whether or not to add absolute position embeddings to the patch embeddings. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. Example: ```python
345_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
The epsilon used by the layer normalization layers. Example:

```python
>>> from transformers import DonutSwinConfig, DonutSwinModel
345_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinconfig
.md
>>> # Initializing a Donut naver-clova-ix/donut-base style configuration
>>> configuration = DonutSwinConfig()
>>> # Randomly initializing a model from the naver-clova-ix/donut-base style configuration
>>> model = DonutSwinModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
345_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutimageprocessor
.md
Constructs a Donut image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`): Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
345_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutimageprocessor
.md
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. do_thumbnail (`bool`, *optional*, defaults to `True`):
345_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutimageprocessor
.md
do_thumbnail (`bool`, *optional*, defaults to `True`): Whether to resize the image using thumbnail method. do_align_long_axis (`bool`, *optional*, defaults to `False`): Whether to align the long axis of the image with the long axis of `size` by rotating by 90 degrees. do_pad (`bool`, *optional*, defaults to `True`): Whether to pad the image. If `random_padding` is set to `True` in `preprocess`, each image is padded with a
345_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutimageprocessor
.md
Whether to pad the image. If `random_padding` is set to `True` in `preprocess`, each image is padded with a random amount of padding on each side, up to the largest image size in the batch. Otherwise, all images are padded to the largest image size in the batch. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method.
345_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutimageprocessor
.md
the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
345_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutimageprocessor
.md
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Image standard deviation. Methods: preprocess
345_6_5
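For reference, a short sketch of `DonutImageProcessor` used in isolation; the checkpoint and the example shape in the comment are assumptions based on the donut-base configuration.

```python
from datasets import load_dataset
from transformers import DonutImageProcessor

image_processor = DonutImageProcessor.from_pretrained("naver-clova-ix/donut-base")

dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]

# resizing, thumbnailing, padding, rescaling and normalization all happen in one call
inputs = image_processor(image, return_tensors="pt")
print(inputs.pixel_values.shape)  # e.g. torch.Size([1, 3, 2560, 1920]) for donut-base
```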
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutfeatureextractor
.md
No docstring available for DonutFeatureExtractor Methods: __call__
345_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutprocessor
.md
Constructs a Donut processor which wraps a Donut image processor and an XLMRoBERTa tokenizer into a single processor. [`DonutProcessor`] offers all the functionalities of [`DonutImageProcessor`] and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`]. See the [`~DonutProcessor.__call__`] and [`~DonutProcessor.decode`] for more information. Args: image_processor ([`DonutImageProcessor`], *optional*): An instance of [`DonutImageProcessor`]. The image processor is a required input.
345_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutprocessor
.md
An instance of [`DonutImageProcessor`]. The image processor is a required input. tokenizer ([`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`], *optional*): An instance of [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`]. The tokenizer is a required input. Methods: __call__ - from_pretrained - save_pretrained - batch_decode - decode
345_8_1
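A minimal sketch of the processor's two roles, image preprocessing and token handling; the `token2json` input string is an illustrative fragment, not real model output.

```python
from transformers import DonutProcessor

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")

# text goes through the XLM-RoBERTa tokenizer (images would go through DonutImageProcessor)
decoder_input_ids = processor.tokenizer("<s_cord-v2>", add_special_tokens=False, return_tensors="pt").input_ids

# token2json converts a generated token sequence back into a Python dict
print(processor.token2json("<s_nm>CINNAMON SUGAR</s_nm>"))  # expected: {'nm': 'CINNAMON SUGAR'}
```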
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinmodel
.md
The bare Donut Swin Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DonutSwinConfig`]): Model configuration class with all the parameters of the model.
345_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/donut.md
https://huggingface.co/docs/transformers/en/model_doc/donut/#donutswinmodel
.md
behavior. Parameters: config ([`DonutSwinConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
345_9_1
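A minimal forward-pass sketch with a randomly initialized `DonutSwinModel` (default, donut-base style configuration), purely to show the input and output shapes.

```python
import torch
from transformers import DonutSwinConfig, DonutSwinModel

config = DonutSwinConfig()      # donut-base style defaults
model = DonutSwinModel(config)  # randomly initialized, for illustration only

pixel_values = torch.randn(1, config.num_channels, config.image_size, config.image_size)
with torch.no_grad():
    outputs = model(pixel_values)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```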
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
346_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
346_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#overview
.md
The mLUKE model was proposed in [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension of the [LUKE model](https://arxiv.org/abs/2010.01057) trained on the basis of XLM-RoBERTa: it adds entity embeddings, which help improve performance on various downstream tasks
346_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#overview
.md
It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks involving reasoning about entities such as named entity recognition, extractive question answering, relation classification, and cloze-style knowledge completion. The abstract from the paper is the following: *Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual
346_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#overview
.md
*Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages
346_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#overview
.md
entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages with entity representations and show the model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a
346_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#overview
.md
representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual knowledge more likely than using only word representations.* This model was contributed by [ryo0634](https://huggingface.co/ryo0634). The original code can be found [here](https://github.com/studio-ousia/luke).
346_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#usage-tips
.md
One can directly plug in the weights of mLUKE into a LUKE model, like so:

```python
from transformers import LukeModel

model = LukeModel.from_pretrained("studio-ousia/mluke-base")
```

Note that mLUKE has its own tokenizer, [`MLukeTokenizer`]. You can initialize it as follows:

```python
from transformers import MLukeTokenizer
346_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#usage-tips
.md
tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
```

<Tip>

As mLUKE's architecture is equivalent to that of LUKE, one can refer to [LUKE's documentation page](luke) for all tips, code examples and notebooks.

</Tip>
346_2_1
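Building on the tip above, a short sketch of the usual mLUKE workflow: tokenizing text together with entity spans and feeding the result to a [`LukeModel`] loaded with mLUKE weights. The sentence and character spans are illustrative.

```python
from transformers import MLukeTokenizer, LukeModel

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeModel.from_pretrained("studio-ousia/mluke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character spans of "Beyoncé" and "Los Angeles"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

word_states = outputs.last_hidden_state           # contextualized word tokens
entity_states = outputs.entity_last_hidden_state  # one hidden state per entity span
```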
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
Adapted from [`XLMRobertaTokenizer`] and [`LukeTokenizer`]. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. entity_vocab_file (`str`): Path to the entity vocabulary file. bos_token (`str`, *optional*, defaults to `"<s>"`):
346_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
entity_vocab_file (`str`): Path to the entity vocabulary file. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip>
346_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
</Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
346_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
346_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`):
346_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. task (`str`, *optional*): Task for which you want to prepare sequences. One of `"entity_classification"`,
346_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
task (`str`, *optional*): Task for which you want to prepare sequences. One of `"entity_classification"`, `"entity_pair_classification"`, or `"entity_span_classification"`. If you specify this argument, the entity sequence is automatically created based on the given entity span(s). max_entity_length (`int`, *optional*, defaults to 32): The maximum length of `entity_ids`. max_mention_length (`int`, *optional*, defaults to 30): The maximum number of tokens inside an entity span.
346_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
max_mention_length (`int`, *optional*, defaults to 30): The maximum number of tokens inside an entity span. entity_token_1 (`str`, *optional*, defaults to `<ent>`): The special token used to represent an entity span in a word token sequence. This token is only used when `task` is set to `"entity_classification"` or `"entity_pair_classification"`. entity_token_2 (`str`, *optional*, defaults to `<ent2>`):
346_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
entity_token_2 (`str`, *optional*, defaults to `<ent2>`): The special token used to represent an entity span in a word token sequence. This token is only used when `task` is set to `"entity_pair_classification"`. additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`): Additional special tokens used by the tokenizer. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
346_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results.
346_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
- `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`):
346_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mluke.md
https://huggingface.co/docs/transformers/en/model_doc/mluke/#mluketokenizer
.md
BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). Methods: __call__ - save_vocabulary
346_3_11
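A brief sketch of the `task` argument described above; with `task="entity_classification"` the tokenizer inserts the `<ent>` markers around the single provided span automatically (sentence and span are illustrative).

```python
from transformers import MLukeTokenizer

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base", task="entity_classification")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # exactly one span is expected for entity classification

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
print(sorted(inputs.keys()))  # includes entity_ids, entity_attention_mask, entity_position_ids, ...
```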
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/
.md
<!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
347_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
347_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbert
.md
<Tip warning={true}> This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2. You can do so by running the following command: `pip install -U transformers==4.40.2`. </Tip>
347_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#overview
.md
The QDQBERT model can be referenced in [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. The abstract from the paper is the following: *Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by
347_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#overview
.md
*Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration
347_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#overview
.md
domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are more difficult to quantize, such as MobileNets and BERT-large.* This model was contributed by [shangz](https://huggingface.co/shangz).
347_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#usage-tips
.md
- The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to (i) linear layer inputs and weights, (ii) matmul inputs, and (iii) residual add inputs in the BERT model. - QDQBERT depends on the [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). To install it, run `pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com`
347_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#usage-tips
.md
- The QDQBERT model can be loaded from any HuggingFace BERT checkpoint (for example *google-bert/bert-base-uncased*) and used to perform Quantization Aware Training or Post Training Quantization. - A complete example of using the QDQBERT model to perform Quantization Aware Training and Post Training Quantization on the SQuAD task can be found at [transformers/examples/research_projects/quantization-qdqbert/](examples/research_projects/quantization-qdqbert/).
347_3_1
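A minimal loading sketch based on the tips above. It assumes `transformers<=4.40.2` (see the maintenance note) and an installed `pytorch-quantization`; the default quantizers are configured first, as explained in the next section.

```python
import pytorch_quantization.nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor
from transformers import AutoTokenizer, QDQBertModel

# default quantizers must be set before the model is created (see "Set default quantizers")
quant_nn.QuantLinear.set_default_quant_desc_input(QuantDescriptor(num_bits=8, calib_method="max"))
quant_nn.QuantLinear.set_default_quant_desc_weight(QuantDescriptor(num_bits=8, axis=((0,))))

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = QDQBertModel.from_pretrained("google-bert/bert-base-uncased")  # BERT weights in fake-quantized modules

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
```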
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#set-default-quantizers
.md
QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to BERT by `TensorQuantizer` in [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). `TensorQuantizer` is the module for quantizing tensors, with `QuantDescriptor` defining how the tensor should be quantized. Refer to [Pytorch
347_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#set-default-quantizers
.md
for quantizing tensors, with `QuantDescriptor` defining how the tensor should be quantized. Refer to the [Pytorch Quantization Toolkit user guide](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/userguide.html) for more details. Before creating the QDQBERT model, one has to set the default `QuantDescriptor` defining the default tensor quantizers. Example:

```python
>>> import pytorch_quantization.nn as quant_nn
>>> from pytorch_quantization.tensor_quant import QuantDescriptor
347_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#set-default-quantizers
.md
>>> # The default tensor quantizer is set to use Max calibration method
>>> input_desc = QuantDescriptor(num_bits=8, calib_method="max")
>>> # The default tensor quantizer is set to be per-channel quantization for weights
>>> weight_desc = QuantDescriptor(num_bits=8, axis=((0,)))
>>> quant_nn.QuantLinear.set_default_quant_desc_input(input_desc)
>>> quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc)
```
347_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#calibration
.md
Calibration is the process of passing data samples to the quantizer and deciding the best scaling factors for tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model:

```python
>>> # Find the TensorQuantizer and enable calibration
>>> for name, module in model.named_modules():
...     if name.endswith("_input_quantizer"):
...         module.enable_calib()
...         module.disable_quant()  # Use full precision data to calibrate
347_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#calibration
.md
>>> # Feeding data samples
>>> model(x)
>>> # ...
>>> # Finalize calibration
>>> for name, module in model.named_modules():
...     if name.endswith("_input_quantizer"):
...         module.load_calib_amax()
...         module.enable_quant()
>>> # If running on GPU, it needs to call .cuda() again because new tensors will be created by calibration process
>>> model.cuda()
>>> # Keep running the quantized model
>>> # ...
```
347_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#export-to-onnx
.md
The goal of exporting to ONNX is to deploy inference with [TensorRT](https://developer.nvidia.com/tensorrt). Fake quantization is broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting the static member of TensorQuantizer to use PyTorch's own fake quantization functions, the fake-quantized model can be exported to ONNX by following the instructions in [torch.onnx](https://pytorch.org/docs/stable/onnx.html). Example:

```python
>>> from pytorch_quantization.nn import TensorQuantizer
347_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#export-to-onnx
.md
>>> TensorQuantizer.use_fb_fake_quant = True
>>> # Load the calibrated model
>>> ...
>>> # ONNX export
>>> torch.onnx.export(...)
```
347_6_1
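The `torch.onnx.export(...)` call above is elided in the original snippet; below is a hedged sketch of how it might be filled in. Here `model` is assumed to be the calibrated QDQBERT model from the previous sections, and the input names, sequence length and opset version are illustrative choices.

```python
import torch
from pytorch_quantization.nn import TensorQuantizer

TensorQuantizer.use_fb_fake_quant = True  # use PyTorch's fake-quant ops so the export works

model.eval()
model.config.return_dict = False  # return tuples, which are easier to trace for ONNX export

# dummy inputs only fix the traced shapes; 128 is an arbitrary sequence length
dummy = {k: torch.ones(1, 128, dtype=torch.long) for k in ("input_ids", "attention_mask", "token_type_ids")}
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"], dummy["token_type_ids"]),
    "qdqbert.onnx",
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["last_hidden_state"],
    opset_version=13,
)
```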
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
347_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
This is the configuration class to store the configuration of a [`QDQBertModel`]. It is used to instantiate a QDQBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BERT [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
347_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the QDQBERT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`QDQBertModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer.
347_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
347_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
347_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2):
347_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`QDQBertModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. is_decoder (`bool`, *optional*, defaults to `False`):
347_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
The epsilon used by the layer normalization layers. is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. Examples:

```python
>>> from transformers import QDQBertModel, QDQBertConfig
347_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertconfig
.md
>>> # Initializing a QDQBERT google-bert/bert-base-uncased style configuration
>>> configuration = QDQBertConfig()
>>> # Initializing a model from the google-bert/bert-base-uncased style configuration
>>> model = QDQBertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
347_8_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertmodel
.md
The bare QDQBERT Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
347_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
347_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is
347_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertmodel
.md
cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
347_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertmodel
.md
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
347_9_4
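A configuration sketch for the decoder behavior described above; the checkpoint is illustrative, and the default quantizers are assumed to be configured beforehand as in the "Set default quantizers" section.

```python
from transformers import QDQBertConfig, QDQBertLMHeadModel

# decoder-style configuration: causal self-attention, plus cross-attention for Seq2Seq use
config = QDQBertConfig.from_pretrained("google-bert/bert-base-uncased")
config.is_decoder = True
config.add_cross_attention = True  # only needed when the model is used inside a Seq2Seq setup

model = QDQBertLMHeadModel.from_pretrained("google-bert/bert-base-uncased", config=config)
# with add_cross_attention=True, encoder_hidden_states can be passed to the forward pass
```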
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertlmheadmodel
.md
QDQBERT Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
347_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertlmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
347_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertlmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
347_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertformaskedlm
.md
QDQBERT Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
347_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertformaskedlm
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
347_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertformaskedlm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
347_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertforsequenceclassification
.md
QDQBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
347_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
347_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
347_12_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertfornextsentenceprediction
.md
QDQBERT Model with a `next sentence prediction (classification)` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
347_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertfornextsentenceprediction
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
347_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertfornextsentenceprediction
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
347_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertformultiplechoice
.md
QDQBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
347_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
347_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertformultiplechoice
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
347_14_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertfortokenclassification
.md
QDQBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
347_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`QDQBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
347_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qdqbert.md
https://huggingface.co/docs/transformers/en/model_doc/qdqbert/#qdqbertfortokenclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
347_15_2