Dataset columns: source (string, 470 distinct values) · url (string, 49–167 chars) · file_type (string, 1 distinct value) · chunk (string, 1–512 chars) · chunk_id (string, 5–9 chars)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/time_series_utils.md
https://huggingface.co/docs/transformers/en/internal/time_series_utils/#time-series-utilities
.md
This page lists all the utility functions and classes that can be used for Time Series based models. Most of those are only useful if you are studying the code of the time series models or you wish to add to the collection of distributional output classes.
428_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/time_series_utils.md
https://huggingface.co/docs/transformers/en/internal/time_series_utils/#distributional-output
.md
time_series_utils.NormalOutput Normal distribution output class. time_series_utils.StudentTOutput Student-T distribution output class. time_series_utils.NegativeBinomialOutput Negative Binomial distribution output class.
428_2_0
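These distributional output classes share a common `DistributionOutput` interface. Below is a minimal, hedged sketch (not from the docs), assuming the API of recent transformers versions with `get_parameter_projection` and `distribution`:

```python
import torch
from transformers.time_series_utils import StudentTOutput

# Project model hidden states to the parameters of a Student-T distribution.
output = StudentTOutput(dim=1)
projection = output.get_parameter_projection(in_features=32)  # maps features -> (df, loc, scale)
params = projection(torch.randn(8, 32))                       # batch of 8 hidden states
distribution = output.distribution(params)                    # a torch.distributions object
samples = distribution.sample()                               # one draw per batch element
```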
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
429_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
429_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#utilities-for-tokenizers
.md
This page lists all the utility functions used by the tokenizers, mainly the class [`~tokenization_utils_base.PreTrainedTokenizerBase`] that implements the common methods between [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] and the mixin [`~tokenization_utils_base.SpecialTokensMixin`]. Most of those are only useful if you are studying the code of the tokenizers in the library.
429_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
tokenization_utils_base.PreTrainedTokenizerBase Base class for [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`]. Handles shared (mostly boilerplate) methods for those two classes. Class attributes (overridden by derived classes) - **vocab_files_names** (`Dict[str, str]`) -- A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
429_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
vocabulary file required by the model, and as associated values, the filename for saving the associated file (string). - **pretrained_vocab_files_map** (`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level keys being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file.
429_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
associated pretrained vocabulary file. - **model_input_names** (`List[str]`) -- A list of inputs expected in the forward pass of the model. - **padding_side** (`str`) -- The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`. - **truncation_side** (`str`) -- The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`. Args: model_max_length (`int`, *optional*):
429_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
applied. Should be `'right'` or `'left'`. Args: model_max_length (`int`, *optional*): The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`). padding_side (`str`, *optional*):
429_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
default to VERY_LARGE_INTEGER (`int(1e30)`). padding_side (`str`, *optional*): The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. truncation_side (`str`, *optional*): The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. chat_template (`str`, *optional*):
429_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
Default value is picked from the class attribute of the same name. chat_template (`str`, *optional*): A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description. model_input_names (`List[string]`, *optional*): The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or `"attention_mask"`). Default value is picked from the class attribute of the same name.
429_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
`"attention_mask"`). Default value is picked from the class attribute of the same name. bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`. eos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the end of a sentence. Will be associated to `self.eos_token` and `self.eos_token_id`. unk_token (`str` or `tokenizers.AddedToken`, *optional*):
429_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
`self.eos_token_id`. unk_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`. sep_token (`str` or `tokenizers.AddedToken`, *optional*): A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to `self.sep_token` and `self.sep_token_id`. pad_token (`str` or `tokenizers.AddedToken`, *optional*):
429_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
associated to `self.sep_token` and `self.sep_token_id`. pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`. cls_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the class of the input (used by BERT for instance). Will be associated to
429_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
A special token representing the class of the input (used by BERT for instance). Will be associated to `self.cls_token` and `self.cls_token_id`. mask_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to `self.mask_token` and `self.mask_token_id`. additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*):
429_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding if `skip_special_tokens` is set to `True`. If they are not part of the vocabulary, they will be added at the end of the vocabulary. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not the model should clean up the spaces that were added when splitting the input text during the
429_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process. split_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the special tokens should be split during the tokenization process. Passing this will affect the internal state of the tokenizer. The default behavior is to not split special tokens. This means that if `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>']`. Otherwise, if
429_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
.md
`<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`. - __call__ - all
429_2_12
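As a short illustration of the `split_special_tokens` behavior described above (the checkpoint choice is ours, and the flag is assumed to be accepted as an init kwarg by `from_pretrained`, as in recent versions):

```python
from transformers import AutoTokenizer

# roberta-base uses <s> as its bos_token
tok = AutoTokenizer.from_pretrained("roberta-base")
print(tok.tokenize("<s>"))  # ['<s>'] -- the special token is kept intact by default

tok_split = AutoTokenizer.from_pretrained("roberta-base", split_special_tokens=True)
print(tok_split.tokenize("<s>"))  # split into ordinary pieces instead
```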
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#specialtokensmixin
.md
tokenization_utils_base.SpecialTokensMixin A mixin derived by [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] to handle specific behaviors related to special tokens. In particular, this class holds the attributes that can be used to directly access these special tokens in a model-independent manner and allows setting and updating the special tokens. Args: bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence.
429_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#specialtokensmixin
.md
Args: bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence. eos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the end of a sentence. unk_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing an out-of-vocabulary token. sep_token (`str` or `tokenizers.AddedToken`, *optional*): A special token separating two different sentences in the same input (used by BERT for instance).
429_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#specialtokensmixin
.md
A special token separating two different sentences in the same input (used by BERT for instance). pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. cls_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the class of the input (used by BERT for instance). mask_token (`str` or `tokenizers.AddedToken`, *optional*):
429_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#specialtokensmixin
.md
mask_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): A tuple or a list of additional tokens, which will be marked as `special`, meaning that they will be skipped when decoding if `skip_special_tokens` is set to `True`.
429_3_3
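A minimal sketch of the mixin in action via `add_special_tokens` (the token name is hypothetical):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
# register a new special token; returns the number of tokens added to the vocabulary
num_added = tok.add_special_tokens({"additional_special_tokens": ["<obs>"]})
print(num_added, tok.additional_special_tokens, tok.convert_tokens_to_ids("<obs>"))
```

If the tokenizer is paired with a model, remember to resize the model's token embeddings after adding tokens (e.g. `model.resize_token_embeddings(len(tok))`).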
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#enums-and-namedtuples
.md
tokenization_utils_base.TruncationStrategy Possible values for the `truncation` argument in [`PreTrainedTokenizerBase.__call__`]. Useful for tab-completion in an IDE. tokenization_utils_base.CharSpan Character span in the original string. Args: start (`int`): Index of the first character in the original string. end (`int`): Index of the character following the last character in the original string. tokenization_utils_base.TokenSpan Token span in an encoded string (list of tokens). Args:
429_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#enums-and-namedtuples
.md
tokenization_utils_base.TokenSpan Token span in an encoded string (list of tokens). Args: start (`int`): Index of the first token in the span. end (`int`): Index of the token following the last token in the span.
429_4_1
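`CharSpan` instances are returned, for example, by `BatchEncoding.token_to_chars` on fast tokenizers; a small sketch:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # a fast tokenizer
enc = tok("Hello world")
span = enc.token_to_chars(1)  # CharSpan for the first non-special token (index 0 is [CLS])
print(span.start, span.end)   # character indices of "Hello" in the original string
```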
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
430_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
430_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#utilities-for-image-processors
.md
This page lists all the utility functions used by the image processors, mainly the functional transformations used to process the images. Most of those are only useful if you are studying the code of the image processors in the library.
430_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
image_transforms.center_crop Crops the `image` to the specified `size` using a center crop. Note that if the image is too small to be cropped to the size given, it will be padded (so the returned result will always be of size `size`). Args: image (`np.ndarray`): The image to crop. size (`Tuple[int, int]`): The target size for the cropped image. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. Can be one of:
430_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. Can be one of:
430_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image. return_numpy (`bool`, *optional*):
430_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
If unset, will use the inferred format of the input image. return_numpy (`bool`, *optional*): Whether or not to return the cropped image as a numpy array. Used for backwards compatibility with the previous ImageFeatureExtractionMixin method. - Unset: will return the same type as the input image. - `True`: will return a numpy array. - `False`: will return a `PIL.Image.Image` object. Returns: `np.ndarray`: The cropped image. image_transforms.center_to_corners_format
430_2_3
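A quick, hedged usage sketch for `center_crop` (the random image is just a stand-in):

```python
import numpy as np
from transformers.image_transforms import center_crop

image = np.random.randint(0, 256, (3, 64, 64), dtype=np.uint8)  # channels-first input
cropped = center_crop(image, size=(32, 32))
print(cropped.shape)  # (3, 32, 32)
```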
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
Returns: `np.ndarray`: The cropped image. image_transforms.center_to_corners_format Converts bounding boxes from center format to corners format. center format: contains the coordinate for the center of the box and its width, height dimensions (center_x, center_y, width, height) corners format: contains the coordinates for the top-left and bottom-right corners of the box (top_left_x, top_left_y, bottom_right_x, bottom_right_y) image_transforms.corners_to_center_format
430_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
(top_left_x, top_left_y, bottom_right_x, bottom_right_y) image_transforms.corners_to_center_format Converts bounding boxes from corners format to center format. corners format: contains the coordinates for the top-left and bottom-right corners of the box (top_left_x, top_left_y, bottom_right_x, bottom_right_y) center format: contains the coordinate for the center of the box and its width and height dimensions (center_x, center_y, width, height) image_transforms.id_to_rgb
430_2_5
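The two box-format conversions are inverses of each other; a small sketch:

```python
import numpy as np
from transformers.image_transforms import center_to_corners_format, corners_to_center_format

boxes = np.array([[0.5, 0.5, 0.2, 0.4]])      # (center_x, center_y, width, height)
corners = center_to_corners_format(boxes)      # [[0.4, 0.3, 0.6, 0.7]]
roundtrip = corners_to_center_format(corners)  # back to the original center format
print(corners, roundtrip)
```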
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
(center_x, center_y, width, height) image_transforms.id_to_rgb Converts unique ID to RGB color. image_transforms.normalize Normalizes `image` using the mean and standard deviation specified by `mean` and `std`. image = (image - mean) / std Args: image (`np.ndarray`): The image to normalize. mean (`float` or `Iterable[float]`): The mean to use for normalization. std (`float` or `Iterable[float]`): The standard deviation to use for normalization. data_format (`ChannelDimension`, *optional*):
430_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
The standard deviation to use for normalization. data_format (`ChannelDimension`, *optional*): The channel dimension format of the output image. If unset, will use the inferred format from the input. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. image_transforms.pad Pads the `image` with the specified (height, width) `padding` and `mode`. Args: image (`np.ndarray`): The image to pad.
430_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
Pads the `image` with the specified (height, width) `padding` and `mode`. Args: image (`np.ndarray`): The image to pad. padding (`int` or `Tuple[int, int]` or `Iterable[Tuple[int, int]]`): Padding to apply to the edges of the height, width axes. Can be one of three formats: - `((before_height, after_height), (before_width, after_width))` unique pad widths for each axis. - `((before, after),)` yields same before and after pad for height and width.
430_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
- `((before, after),)` yields same before and after pad for height and width. - `(pad,)` or int is a shortcut for before = after = pad width for all axes. mode (`PaddingMode`): The padding mode to use. Can be one of: - `"constant"`: pads with a constant value. - `"reflect"`: pads with the reflection of the vector mirrored on the first and last values of the vector along each axis. - `"replicate"`: pads with the replication of the last value on the edge of the array along each axis.
430_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
vector along each axis. - `"replicate"`: pads with the replication of the last value on the edge of the array along each axis. - `"symmetric"`: pads with the reflection of the vector mirrored along the edge of the array. constant_values (`float` or `Iterable[float]`, *optional*): The value to use for the padding if `mode` is `"constant"`. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. Can be one of:
430_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the same format as the input image. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. Can be one of:
430_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image. Returns: `np.ndarray`: The padded image. image_transforms.rgb_to_id Converts RGB color to unique ID.
430_2_12
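A hedged sketch of `pad` with per-axis padding (assuming `PaddingMode` is importable from `transformers.image_transforms`, where it is defined):

```python
import numpy as np
from transformers.image_transforms import PaddingMode, pad

image = np.zeros((16, 16, 3), dtype=np.float32)  # channels-last input
padded = pad(image, padding=((2, 2), (4, 4)), mode=PaddingMode.CONSTANT, constant_values=0.0)
print(padded.shape)  # (20, 24, 3): 2 rows above/below, 4 columns left/right
```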
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
Returns: `np.ndarray`: The padded image. image_transforms.rgb_to_id Converts RGB color to unique ID. image_transforms.rescale Rescales `image` by `scale`. Args: image (`np.ndarray`): The image to rescale. scale (`float`): The scale to use for rescaling the image. data_format (`ChannelDimension`, *optional*): The channel dimension format of the image. If not provided, it will be the same as the input image. dtype (`np.dtype`, *optional*, defaults to `np.float32`):
430_2_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
dtype (`np.dtype`, *optional*, defaults to `np.float32`): The dtype of the output image. Defaults to `np.float32`. Used for backwards compatibility with feature extractors. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image. Returns: `np.ndarray`: The rescaled image. image_transforms.resize Resizes `image` to `(height, width)` specified by `size` using the PIL library. Args:
430_2_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
image_transforms.resize Resizes `image` to `(height, width)` specified by `size` using the PIL library. Args: image (`np.ndarray`): The image to resize. size (`Tuple[int, int]`): The size to use for resizing the image. resample (`int`, *optional*, defaults to `PILImageResampling.BILINEAR`): The filter to use for resampling. reducing_gap (`int`, *optional*): Apply optimization by resizing the image in two steps. The bigger `reducing_gap`, the closer the result to
430_2_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
Apply optimization by resizing the image in two steps. The bigger `reducing_gap`, the closer the result to the fair resampling. See corresponding Pillow documentation for more details. data_format (`ChannelDimension`, *optional*): The channel dimension format of the output image. If unset, will use the inferred format from the input. return_numpy (`bool`, *optional*, defaults to `True`): Whether or not to return the resized image as a numpy array. If False a `PIL.Image.Image` object is returned.
430_2_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
Whether or not to return the resized image as a numpy array. If False a `PIL.Image.Image` object is returned. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. Returns: `np.ndarray`: The resized image. image_transforms.to_pil_image Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if needed. Args:
430_2_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if needed. Args: image (`PIL.Image.Image` or `numpy.ndarray` or `torch.Tensor` or `tf.Tensor`): The image to convert to the `PIL.Image` format. do_rescale (`bool`, *optional*): Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will default to `True` if the image type is a floating type and casting to `int` would result in a loss of precision,
430_2_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
.md
to `True` if the image type is a floating type and casting to `int` would result in a loss of precision, and `False` otherwise. image_mode (`str`, *optional*): The mode to use for the PIL image. If unset, will use the default mode for the input image type. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. Returns: `PIL.Image.Image`: The converted image.
430_2_19
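Chaining the transforms above is a common pattern; a hedged sketch (the input image is a stand-in):

```python
import numpy as np
from transformers.image_transforms import rescale, resize, to_pil_image

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
scaled = rescale(image, scale=1 / 255)   # float32 pixel values in [0, 1]
resized = resize(scaled, size=(32, 32))  # PIL-backed resize, returns np.ndarray by default
pil_image = to_pil_image(resized)        # back to PIL.Image, rescaled to 0-255 as needed
```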
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#imageprocessingmixin
.md
image_processing_utils.ImageProcessingMixin This is an image processor mixin used to provide saving/loading functionality for sequential and image feature extractors.
430_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
431_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
431_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#utilities-for-trainer
.md
This page lists all the utility functions used by [`Trainer`]. Most of those are only useful if you are studying the code of the Trainer in the library.
431_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#utilities
.md
EvalPrediction Evaluation output (always contains labels), to be used to compute metrics. Parameters: predictions (`np.ndarray`): Predictions of the model. label_ids (`np.ndarray`): Targets to be matched. inputs (`np.ndarray`, *optional*): Input data passed to the model. losses (`np.ndarray`, *optional*): Loss values computed during evaluation. IntervalStrategy enable_full_determinism Helper function for reproducible behavior during distributed training. See - https://pytorch.org/docs/stable/notes/randomness.html for pytorch
431_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#utilities
.md
- https://pytorch.org/docs/stable/notes/randomness.html for pytorch - https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism for tensorflow set_seed Helper function for reproducible behavior to set the seed in `random`, `numpy`, `torch` and/or `tf` (if installed). Args: seed (`int`): The seed to set. deterministic (`bool`, *optional*, defaults to `False`): Whether to use deterministic algorithms where available. Can slow down training.
431_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#utilities
.md
Whether to use deterministic algorithms where available. Can slow down training. torch_distributed_zero_first Decorator to make all processes in distributed training wait for each local_master to do something. Args: local_rank (`int`): The rank of the local process.
431_2_2
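A one-line usage sketch of the seeding helper (`set_seed` is importable from the top-level `transformers` package; the `deterministic` flag is assumed to be available as in recent versions):

```python
from transformers import set_seed

set_seed(42)                      # seeds random, numpy, torch (and tf if installed)
set_seed(42, deterministic=True)  # also request deterministic algorithms; may slow training
```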
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#callbacks-internals
.md
trainer_callback.CallbackHandler Internal class that just calls the list of callbacks in order.
431_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#distributed-evaluation
.md
trainer_pt_utils.DistributedTensorGatherer A class responsible for properly gathering tensors (or nested list/tuple of tensors) on the CPU by chunks. If our dataset has 16 samples with a batch size of 2 on 3 processes and we gather then transfer on CPU at every step, our sampler will generate the following indices: `[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1]` to get something of size a multiple of 3 (so that each process gets the same dataset length). Then process 0, 1 and
431_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#distributed-evaluation
.md
to get something of size a multiple of 3 (so that each process gets the same dataset length). Then process 0, 1 and 2 will be responsible for making predictions for the following samples: - P0: `[0, 1, 2, 3, 4, 5]` - P1: `[6, 7, 8, 9, 10, 11]` - P2: `[12, 13, 14, 15, 0, 1]` The first batch treated on each process will be - P0: `[0, 1]` - P1: `[6, 7]` - P2: `[12, 13]` So if we gather at the end of the first batch, we will get a tensor (nested list/tuple of tensor) corresponding to
431_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#distributed-evaluation
.md
So if we gather at the end of the first batch, we will get a tensor (nested list/tuple of tensor) corresponding to the following indices: `[0, 1, 6, 7, 12, 13]` If we directly concatenate our results without taking any precautions, the user will then get the predictions for the indices in this order at the end of the prediction loop: `[0, 1, 6, 7, 12, 13, 2, 3, 8, 9, 14, 15, 4, 5, 10, 11, 0, 1]` For some reason, that's not going to float their boat. This class is there to solve that problem. Args:
431_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#distributed-evaluation
.md
For some reason, that's not going to roll their boat. This class is there to solve that problem. Args: world_size (`int`): The number of processes used in the distributed training. num_samples (`int`): The number of samples in our dataset. make_multiple_of (`int`, *optional*): If passed, the class assumes the datasets passed to each process are made to be a multiple of this argument (by adding samples). padding_index (`int`, *optional*, defaults to -100):
431_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#distributed-evaluation
.md
(by adding samples). padding_index (`int`, *optional*, defaults to -100): The padding index to use if the arrays don't all have the same sequence length.
431_4_4
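A hedged single-process sketch of the gathering pattern described above, mirroring the 16-samples/3-processes example (the random arrays stand in for per-step predictions that have already been gathered across processes):

```python
import numpy as np
from transformers.trainer_pt_utils import DistributedTensorGatherer

# 16 samples on 3 processes -> the sampler pads to 18 indices, 6 per gather step
gatherer = DistributedTensorGatherer(world_size=3, num_samples=16)
for _ in range(3):  # one call per step
    gatherer.add_arrays(np.random.rand(6, 2))
predictions = gatherer.finalize()  # reordered and truncated back to num_samples
print(predictions.shape)  # (16, 2)
```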
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#trainer-argument-parser
.md
hf_argparser.HfArgumentParser This subclass of `argparse.ArgumentParser` uses type hints on dataclasses to generate arguments. The class is designed to play well with the native argparse. In particular, you can add more (non-dataclass backed) arguments to the parser after initialization and you'll get the output back after parsing as an additional namespace. Optional: To create sub argument groups use the `_argument_group_name` attribute in the dataclass.
431_5_0
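A small sketch of the dataclass-driven parser plus one extra native argparse argument (the dataclass and argument names are hypothetical):

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class MyArguments:
    learning_rate: float = field(default=3e-5, metadata={"help": "Initial learning rate."})

parser = HfArgumentParser(MyArguments)
parser.add_argument("--run_name", type=str, default="test")  # extra, non-dataclass argument

# extra (non-dataclass) arguments come back as an additional namespace
my_args, extra = parser.parse_args_into_dataclasses(
    args=["--learning_rate", "1e-4", "--run_name", "demo"]
)
print(my_args.learning_rate, extra.run_name)
```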
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
debug_utils.DebugUnderflowOverflow This debug class helps detect and understand where the model starts getting very large or very small, and more importantly `nan` or `inf` weight and activation elements. There are 2 working modes: 1. Underflow/overflow detection (default) 2. Specific batch absolute min/max tracing without detection Mode 1: Underflow/overflow detection To activate the underflow/overflow detection, initialize the object with the model: ```python
431_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
To activate the underflow/overflow detection, initialize the object with the model : ```python debug_overflow = DebugUnderflowOverflow(model) ``` then run the training as normal and if `nan` or `inf` gets detected in at least one of the weight, input or output elements this module will throw an exception and will print `max_frames_to_save` frames that lead to this event, each frame reporting 1. the fully qualified module name plus the class name whose `forward` was run
431_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
each frame reporting 1. the fully qualified module name plus the class name whose `forward` was run 2. the absolute min and max value of all elements for each module's weights, and the inputs and output For example, here is the header and the last few frames of the detection report for `google/mt5-small` run in fp16 mixed precision: ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [...] encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight
431_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
abs min abs max metadata [...] encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0]
431_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` You can see here, that `T5DenseGatedGeluDense.forward` resulted in output activations, whose absolute max value was around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have `Dropout` which
431_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have `Dropout`, which renormalizes the weights after zeroing some of the elements, which pushes the absolute max value to more than 64K, and we get an overflow. As you can see, it's the previous frames that we need to look into when the numbers start getting very large for fp16. The tracking is done in a forward hook, which gets invoked immediately after `forward` has completed.
431_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
fp16 numbers. The tracking is done in a forward hook, which gets invoked immediately after `forward` has completed. By default the last 21 frames are printed. You can change the default to adjust for your needs. For example: ```python debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ``` To validate that you have set up this debugging feature correctly, and you intend to use it in a training that
431_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
``` To validate that you have set up this debugging feature correctly, and you intend to use it in a training that may take hours to complete, first run it with normal tracing enabled for one or a few batches, as explained in the next section. Mode 2. Specific batch absolute min/max tracing without detection The second work mode is per-batch tracing with the underflow/overflow detection feature turned off.
431_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
The second work mode is per-batch tracing with the underflow/overflow detection feature turned off. Let's say you want to watch the absolute min and max values for all the ingredients of each `forward` call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ``` And now full batches 1 and 3 will be traced using the same format as explained above. Batches are 0-indexed.
431_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
``` And now full batches 1 and 3 will be traced using the same format as explained above. Batches are 0-indexed. This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area. Early stopping: You can also specify the batch number after which to stop the training, with : ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) ```
431_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) ``` This feature is mainly useful in the tracing mode, but you can use it for any mode. **Performance**: As this module measures absolute `min`/`max` of each weight of the model on every forward it'll slow the training down. Therefore remember to turn it off once the debugging needs have been met. Args: model (`nn.Module`): The model to debug.
431_6_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
.md
Args: model (`nn.Module`): The model to debug. max_frames_to_save (`int`, *optional*, defaults to 21): How many frames back to record. trace_batch_nums (`List[int]`, *optional*, defaults to `[]`): Which batch numbers to trace (turns detection off). abort_after_batch_num (`int`, *optional*): Whether to abort after a certain batch number has finished.
431_6_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
432_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
432_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#utilities-for-pipelines
.md
This page lists all the utility functions the library provides for pipelines. Most of those are only useful if you are studying the code of the models in the library.
432_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#argument-handling
.md
pipelines.ArgumentHandler Base interface for handling arguments for each [`~pipelines.Pipeline`]. pipelines.ZeroShotClassificationArgumentHandler Handles arguments for zero-shot for text classification by turning each possible label into an NLI premise/hypothesis pair. pipelines.QuestionAnsweringArgumentHandler QuestionAnsweringPipeline requires the user to provide multiple arguments (i.e. question & context) to be mapped to internal [`SquadExample`].
432_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#argument-handling
.md
internal [`SquadExample`]. QuestionAnsweringArgumentHandler manages all the possible ways to create a [`SquadExample`] from the command-line supplied arguments.
432_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#data-format
.md
pipelines.PipelineDataFormat Base class for all the pipeline-supported data formats, both for reading and writing. Supported data formats currently include: - JSON - CSV - stdin/stdout (pipe) `PipelineDataFormat` also includes some utilities to work with multi-column data, like mapping from dataset columns to pipeline keyword arguments through the `dataset_kwarg_1=dataset_column_1` format. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data.
432_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#data-format
.md
Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`. pipelines.CsvPipelineDataFormat Support for pipelines using CSV data format. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read.
432_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#data-format
.md
input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`. pipelines.JsonPipelineDataFormat Support for pipelines using JSON file format. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`):
432_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#data-format
.md
column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`. pipelines.PipedPipelineDataFormat Read data from piped input to the python process. For multi-column data, columns should be separated by a tab (`\t`). If columns are provided, then the output will be a dictionary with `{column_x: value_x}`. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data.
432_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#data-format
.md
Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`.
432_3_4
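A hedged sketch of picking a concrete data format via the `from_str` factory (the file names are hypothetical, and `in.csv` is assumed to exist with a `text` column):

```python
from transformers.pipelines import PipelineDataFormat

# choose a subclass from a format name: "json", "csv", or "pipe"
fmt = PipelineDataFormat.from_str(
    "csv", output_path="out.csv", input_path="in.csv", column="text", overwrite=False
)
for item in fmt:  # yields the `text` field of each input row
    print(item)
```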
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#utilities
.md
pipelines.PipelineException Raised by a [`Pipeline`] when handling __call__. Args: task (`str`): The task of the pipeline. model (`str`): The model used by the pipeline. reason (`str`): The error message to display.
432_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
433_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
433_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#awq
.md
<Tip> Try AWQ quantization with this [notebook](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY)! </Tip> [Activation-aware Weight Quantization (AWQ)](https://hf.co/papers/2306.00978) doesn't quantize all the weights in a model, and instead, it preserves a small percentage of weights that are important for LLM performance. This significantly reduces quantization loss such that you can run models in 4-bit precision without experiencing any performance degradation.
433_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#awq
.md
There are several libraries for quantizing models with the AWQ algorithm, such as [llm-awq](https://github.com/mit-han-lab/llm-awq), [autoawq](https://github.com/casper-hansen/AutoAWQ) or [optimum-intel](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc). Transformers supports loading models quantized with the llm-awq and autoawq libraries. This guide will show you how to load models quantized with autoawq, but the process is similar for llm-awq quantized models.
433_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#awq
.md
Make sure you have autoawq installed: ```bash pip install autoawq ``` AWQ-quantized models can be identified by checking the `quantization_config` attribute in the model's [config.json](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ/blob/main/config.json) file: ```json { "_name_or_path": "/workspace/process/huggingfaceh4_zephyr-7b-alpha/source", "architectures": [ "MistralForCausalLM" ], ... ... ... "quantization_config": { "quant_method": "awq", "zero_point": true, "group_size": 128, "bits": 4,
433_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#awq
.md
], ... ... ... "quantization_config": { "quant_method": "awq", "zero_point": true, "group_size": 128, "bits": 4, "version": "gemm" } } ``` A quantized model is loaded with the [`~PreTrainedModel.from_pretrained`] method. If you loaded your model on the CPU, make sure to move it to a GPU device first. Use the `device_map` parameter to specify where to place the model: ```py from transformers import AutoModelForCausalLM, AutoTokenizer
433_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#awq
.md
model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0") ``` Loading an AWQ-quantized model automatically sets other weights to fp16 by default for performance reasons. If you want to load these other weights in a different format, use the `torch_dtype` parameter: ```py from transformers import AutoModelForCausalLM, AutoTokenizer
433_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#awq
.md
model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32) ``` AWQ quantization can also be combined with [FlashAttention-2](../perf_infer_gpu_one#flashattention-2) to further accelerate inference: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("TheBloke/zephyr-7B-alpha-AWQ", attn_implementation="flash_attention_2", device_map="cuda:0") ```
433_1_5
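A short usage sketch following the load above (the prompt and generation settings are ours, not the docs'):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/zephyr-7B-alpha-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")

inputs = tokenizer("What is AWQ quantization?", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```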
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
Fused modules offer improved accuracy and performance, and they are supported out-of-the-box for AWQ modules for [Llama](https://huggingface.co/meta-llama) and [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) architectures, but you can also fuse AWQ modules for unsupported architectures. <Tip warning={true}> Fused modules cannot be combined with other optimization techniques such as FlashAttention-2. </Tip> <hfoptions id="fuse"> <hfoption id="supported architectures">
433_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
</Tip> <hfoptions id="fuse"> <hfoption id="supported architectures"> To enable fused modules for supported architectures, create an [`AwqConfig`] and set the parameters `fuse_max_seq_len` and `do_fuse=True`. The `fuse_max_seq_len` parameter is the total sequence length and it should include the context length and the expected generation length. You can set it to a larger value to be safe.
433_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
For example, to fuse the AWQ modules of the [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) model: ```python import torch from transformers import AwqConfig, AutoModelForCausalLM
433_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ" quantization_config = AwqConfig( bits=4, fuse_max_seq_len=512, do_fuse=True, )
433_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0) ``` The [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) model was benchmarked with `batch_size=1` with and without fused modules. <figcaption class="text-center text-gray-500 text-lg">Unfused module</figcaption> | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
433_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
| Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) | |-------------:|-----------------:|----------------:|-------------------:|------------------:|:----------------| | 1 | 32 | 32 | 60.0984 | 38.4537 | 4.50 GB (5.68%) | | 1 | 64 | 64 | 1333.67 | 31.6604 | 4.50 GB (5.68%) |
433_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
| 1 | 64 | 64 | 1333.67 | 31.6604 | 4.50 GB (5.68%) | | 1 | 128 | 128 | 2434.06 | 31.6272 | 4.50 GB (5.68%) | | 1 | 256 | 256 | 3072.26 | 38.1731 | 4.50 GB (5.68%) | | 1 | 512 | 512 | 3184.74 | 31.6819 | 4.59 GB (5.80%) |
433_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
.md
| 1 | 512 | 512 | 3184.74 | 31.6819 | 4.59 GB (5.80%) | | 1 | 1024 | 1024 | 3148.18 | 36.8031 | 4.81 GB (6.07%) | | 1 | 2048 | 2048 | 2927.33 | 35.2676 | 5.73 GB (7.23%) | <figcaption class="text-center text-gray-500 text-lg">Fused module</figcaption>
433_2_7