source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md | https://huggingface.co/docs/transformers/en/internal/modeling_utils/#tensorflow-custom-layers | .md | [[autodoc]] modeling_tf_utils.TFConv1D
[[autodoc]] modeling_tf_utils.TFSequenceSummary | 424_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md | https://huggingface.co/docs/transformers/en/internal/modeling_utils/#tensorflow-loss-functions | .md | [[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss
[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss
[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss
[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss | 424_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md | https://huggingface.co/docs/transformers/en/internal/modeling_utils/#tensorflow-helper-functions | .md | [[autodoc]] modeling_tf_utils.get_initializer
[[autodoc]] modeling_tf_utils.keras_serializable
[[autodoc]] modeling_tf_utils.shape_list | 424_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md | https://huggingface.co/docs/transformers/en/internal/file_utils/ | .md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 425_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md | https://huggingface.co/docs/transformers/en/internal/file_utils/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 425_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md | https://huggingface.co/docs/transformers/en/internal/file_utils/#general-utilities | .md | This page lists all of Transformers' general utility functions that are found in the file `utils.py`.
Most of those are only useful if you are studying the general code in the library. | 425_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md | https://huggingface.co/docs/transformers/en/internal/file_utils/#enums-and-namedtuples | .md | utils.ExplicitEnum
Enum with a more explicit error message for missing values.
utils.PaddingStrategy
Possible values for the `padding` argument in [`PreTrainedTokenizerBase.__call__`]. Useful for tab-completion in an
IDE.
utils.TensorType
Possible values for the `return_tensors` argument in [`PreTrainedTokenizerBase.__call__`]. Useful for
tab-completion in an IDE. | 425_2_0 |
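For illustration, a minimal sketch of using these enums in place of the usual magic strings (the checkpoint name is only an example, and GPT-2 needs a pad token assigned before padding):
```python
from transformers import AutoTokenizer
from transformers.utils import PaddingStrategy, TensorType

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

# The enum members compare equal to their string values ("longest", "pt", ...),
# so an IDE can tab-complete them instead of you remembering magic strings.
batch = tokenizer(
    ["a short text", "a slightly longer text"],
    padding=PaddingStrategy.LONGEST,
    return_tensors=TensorType.PYTORCH,
)
print(batch["input_ids"].shape)
```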
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md | https://huggingface.co/docs/transformers/en/internal/file_utils/#special-decorators | .md | utils.add_start_docstrings
utils.add_start_docstrings_to_model_forward
utils.add_end_docstrings
utils.add_code_sample_docstrings
utils.replace_return_docstrings | 425_3_0 |
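A minimal sketch of the simplest of these, `add_start_docstrings`, which prepends the given strings to the decorated function's own docstring (the function and docstrings below are made up for illustration):
```python
from transformers.utils import add_start_docstrings

SHARED_ARGS_DOCSTRING = r"""
    input_ids (`torch.LongTensor`):
        Indices of input sequence tokens in the vocabulary.
"""

@add_start_docstrings("Shared introduction reused across many models.", SHARED_ARGS_DOCSTRING)
def forward(input_ids):
    """Details specific to this one function."""

print(forward.__doc__)  # shared intro + shared args + the function's own docstring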
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md | https://huggingface.co/docs/transformers/en/internal/file_utils/#special-properties | .md | utils.cached_property
Descriptor that mimics @property but caches output in a member variable.
Taken from tensorflow_datasets.
Built into `functools` from Python 3.8 onward. | 425_4_0 |
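A small usage sketch (the class is hypothetical): the wrapped method runs once, and later accesses read the stored result.
```python
from transformers.utils import cached_property

class Corpus:
    @cached_property
    def vocabulary_size(self):
        print("computing once...")  # executed only on first access
        return 50_257

corpus = Corpus()
corpus.vocabulary_size  # prints "computing once..." and caches the result
corpus.vocabulary_size  # served from the cached member variable, no print
```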
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md | https://huggingface.co/docs/transformers/en/internal/file_utils/#other-utilities | .md | utils._LazyModule
Module class that surfaces all objects but only performs associated imports when the objects are requested. | 425_5_0 |
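As a rough sketch of the pattern (simplified from the library's `__init__.py` files), a package maps public names to submodules and swaps itself for a `_LazyModule`, so importing the package stays cheap until a name is actually used:
```python
# Simplified sketch of a package __init__.py using _LazyModule; the submodule
# and names here are placeholders, not real transformers modules.
import sys
from transformers.utils import _LazyModule

_import_structure = {
    "my_submodule": ["MyClass", "my_function"],
}

sys.modules[__name__] = _LazyModule(
    __name__, globals()["__file__"], _import_structure, module_spec=__spec__
)
```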
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/audio_utils.md | https://huggingface.co/docs/transformers/en/internal/audio_utils/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 426_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/audio_utils.md | https://huggingface.co/docs/transformers/en/internal/audio_utils/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 426_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/audio_utils.md | https://huggingface.co/docs/transformers/en/internal/audio_utils/#utilities-for-featureextractors | .md | This page lists all the utility functions that can be used by the audio [`FeatureExtractor`] in order to compute special features from raw audio using common algorithms such as the *Short Time Fourier Transform* or the *log mel spectrogram*.
Most of those are only useful if you are studying the code of the audio processors in the library. | 426_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/audio_utils.md | https://huggingface.co/docs/transformers/en/internal/audio_utils/#audio-transformations | .md | audio_utils.hertz_to_mel
Convert frequency from hertz to mels.
Args:
freq (`float` or `np.ndarray`):
The frequency, or multiple frequencies, in hertz (Hz).
mel_scale (`str`, *optional*, defaults to `"htk"`):
The mel frequency scale to use, `"htk"`, `"kaldi"` or `"slaney"`.
Returns:
`float` or `np.ndarray`: The frequencies on the mel scale.
audio_utils.mel_to_hertz
Convert frequency from mels to hertz.
Args:
mels (`float` or `np.ndarray`):
The frequency, or multiple frequencies, in mels.
mel_scale (`str`, *optional*, defaults to `"htk"`):
The mel frequency scale to use, `"htk"`, `"kaldi"` or `"slaney"`.
Returns:
`float` or `np.ndarray`: The frequencies in hertz.
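For example, a quick round trip between the two scales (the frequency values are illustrative):
```python
import numpy as np
from transformers.audio_utils import hertz_to_mel, mel_to_hertz

freqs_hz = np.array([440.0, 1000.0, 4000.0])
mels = hertz_to_mel(freqs_hz, mel_scale="htk")
back = mel_to_hertz(mels, mel_scale="htk")
print(np.allclose(freqs_hz, back))  # True: the two conversions are inverses
```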
audio_utils.mel_filter_bank
Creates a frequency bin conversion matrix used to obtain a mel spectrogram. This is called a *mel filter bank*, and
various implementations exist, which differ in the number of filters, the shape of the filters, the way the filters
are spaced, the bandwidth of the filters, and the manner in which the spectrum is warped. The goal of these
features is to approximate the non-linear human perception of the variation in pitch with respect to the frequency.
Different banks of mel filters were introduced in the literature. The following variations are supported:
- MFCC FB-20: introduced in 1980 by Davis and Mermelstein, it assumes a sampling frequency of 10 kHz and a speech
bandwidth of `[0, 4600]` Hz.
- MFCC FB-24 HTK: from the Cambridge HMM Toolkit (HTK) (1995), uses a filter bank of 24 filters for a speech
bandwidth of `[0, 8000]` Hz. This assumes sampling rate ≥ 16 kHz.
- MFCC FB-40: from the Auditory Toolbox for MATLAB written by Slaney in 1998, assumes a sampling rate of 16 kHz and
speech bandwidth of `[133, 6854]` Hz. This version also includes area normalization.
- HFCC-E FB-29 (Human Factor Cepstral Coefficients) of Skowronski and Harris (2004), assumes a sampling rate of
12.5 kHz and speech bandwidth of `[0, 6250]` Hz.
This code is adapted from *torchaudio* and *librosa*. Note that the default parameters of torchaudio's
`melscale_fbanks` implement the `"htk"` filters while librosa uses the `"slaney"` implementation.
Args:
num_frequency_bins (`int`):
Number of frequencies used to compute the spectrogram (should be the same as in `stft`).
num_mel_filters (`int`):
Number of mel filters to generate.
min_frequency (`float`):
Lowest frequency of interest in Hz.
max_frequency (`float`):
Highest frequency of interest in Hz. This should not exceed `sampling_rate / 2`.
sampling_rate (`int`):
Sample rate of the audio waveform.
norm (`str`, *optional*):
If `"slaney"`, divide the triangular mel weights by the width of the mel band (area normalization).
mel_scale (`str`, *optional*, defaults to `"htk"`):
The mel frequency scale to use, `"htk"`, `"kaldi"` or `"slaney"`.
triangularize_in_mel_space (`bool`, *optional*, defaults to `False`):
If this option is enabled, the triangular filter is applied in mel space rather than frequency space. This
should be set to `true` in order to get the same results as `torchaudio` when computing mel filters.
Returns:
`np.ndarray` of shape (`num_frequency_bins`, `num_mel_filters`): Triangular filter bank matrix. This is a
projection matrix to go from a spectrogram to a mel spectrogram.
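For example, the following sketch builds the 80-filter bank a Whisper-style feature extractor would use for 16 kHz audio with a 400-point FFT (so `num_frequency_bins = 400 // 2 + 1 = 201`); the parameter values are illustrative:
```python
from transformers.audio_utils import mel_filter_bank

filters = mel_filter_bank(
    num_frequency_bins=201,  # fft_length // 2 + 1
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,    # Nyquist frequency for 16 kHz audio
    sampling_rate=16000,
    norm="slaney",           # area normalization, as in librosa
    mel_scale="slaney",
)
print(filters.shape)  # (201, 80)
```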
audio_utils.optimal_fft_length
Finds the best FFT input size for a given `window_length`. This function takes a given window length and, if not
already a power of two, rounds it up to the next power of two.
The FFT algorithm works fastest when the length of the input is a power of two, which may be larger than the size
of the window or analysis frame. For example, if the window is 400 samples, using an FFT input size of 512 samples
is more efficient than an FFT size of 400 samples. Using a larger FFT size does not affect the detected frequencies,
it simply gives a higher frequency resolution (i.e. the frequency bins are smaller).
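For example:
```python
from transformers.audio_utils import optimal_fft_length

print(optimal_fft_length(400))   # 512 -- rounded up to the next power of two
print(optimal_fft_length(1024))  # 1024 -- already a power of two
```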
audio_utils.window_function
Returns an array containing the specified window. This window is intended to be used with `stft`.
The following window types are supported:
- `"boxcar"`: a rectangular window
- `"hamming"`: the Hamming window
- `"hann"`: the Hann window
- `"povey"`: the Povey window
Args:
window_length (`int`):
The length of the window in samples.
name (`str`, *optional*, defaults to `"hann"`):
The name of the window function.
periodic (`bool`, *optional*, defaults to `True`):
Whether the window is periodic or symmetric.
frame_length (`int`, *optional*):
The length of the analysis frames in samples. Provide a value for `frame_length` if the window is smaller
than the frame length, so that it will be zero-padded.
center (`bool`, *optional*, defaults to `True`):
Whether to center the window inside the FFT buffer. Only used when `frame_length` is provided.
Returns:
`np.ndarray` of shape `(window_length,)` or `(frame_length,)` containing the window.
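For example, a 400-sample Hann window zero-padded to a 512-sample analysis frame (the sizes are illustrative):
```python
from transformers.audio_utils import window_function

window = window_function(window_length=400, name="hann", frame_length=512)
print(window.shape)  # (512,): 400 window samples, zero-padded to the frame length
```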
audio_utils.spectrogram
Calculates a spectrogram over one waveform using the Short-Time Fourier Transform.
This function can create the following kinds of spectrograms:
- amplitude spectrogram (`power = 1.0`)
- power spectrogram (`power = 2.0`)
- complex-valued spectrogram (`power = None`)
- log spectrogram (use `log_mel` argument)
- mel spectrogram (provide `mel_filters`)
- log-mel spectrogram (provide `mel_filters` and `log_mel`)
How this works:
1. The input waveform is split into frames of size `frame_length` that are partially overlapping by `frame_length - hop_length` samples.
2. Each frame is multiplied by the window and placed into a buffer of size `fft_length`.
3. The DFT is taken of each windowed frame.
4. The results are stacked into a spectrogram.
We make a distinction between the following "blocks" of sample data, each of which may have a different length:
- The analysis frame. This is the size of the time slices that the input waveform is split into.
- The window. Each analysis frame is multiplied by the window to avoid spectral leakage.
- The FFT input buffer. The length of this determines how many frequency bins are in the spectrogram.
In this implementation, the window is assumed to be zero-padded to have the same size as the analysis frame. A
padded window can be obtained from `window_function()`. The FFT input buffer may be larger than the analysis frame,
typically the next power of two.
Note: This function is not optimized for speed yet. It should be mostly compatible with `librosa.stft` and
`torchaudio.functional.transforms.Spectrogram`, although it is more flexible due to the different ways spectrograms
can be constructed.
Args:
waveform (`np.ndarray` of shape `(length,)`):
The input waveform. This must be a single real-valued, mono waveform.
window (`np.ndarray` of shape `(frame_length,)`):
The windowing function to apply, including zero-padding if necessary. The actual window length may be
shorter than `frame_length`, but we're assuming the array has already been zero-padded.
frame_length (`int`):
The length of the analysis frames in samples. With librosa this is always equal to `fft_length` but we also
allow smaller sizes.
hop_length (`int`):
The stride between successive analysis frames in samples.
fft_length (`int`, *optional*):
The size of the FFT buffer in samples. This determines how many frequency bins the spectrogram will have.
For optimal speed, this should be a power of two. If `None`, uses `frame_length`.
power (`float`, *optional*, defaults to 1.0):
If 1.0, returns the amplitude spectrogram. If 2.0, returns the power spectrogram. If `None`, returns
complex numbers.
center (`bool`, *optional*, defaults to `True`):
Whether to pad the waveform so that frame `t` is centered around time `t * hop_length`. If `False`, frame
`t` will start at time `t * hop_length`.
pad_mode (`str`, *optional*, defaults to `"reflect"`):
Padding mode used when `center` is `True`. Possible values are: `"constant"` (pad with zeros), `"edge"`
(pad with edge values), `"reflect"` (pads with mirrored values).
onesided (`bool`, *optional*, defaults to `True`):
If True, only computes the positive frequencies and returns a spectrogram containing `fft_length // 2 + 1`
frequency bins. If False, also computes the negative frequencies and returns `fft_length` frequency bins.
preemphasis (`float`, *optional*):
Coefficient for a low-pass filter that applies pre-emphasis before the DFT.
mel_filters (`np.ndarray` of shape `(num_freq_bins, num_mel_filters)`, *optional*):
The mel filter bank. If supplied, applies this filter bank to create a mel spectrogram.
mel_floor (`float`, *optional*, defaults to 1e-10):
Minimum value of mel frequency banks.
log_mel (`str`, *optional*):
How to convert the spectrogram to log scale. Possible options are: `None` (don't convert), `"log"` (take
the natural logarithm), `"log10"` (take the base-10 logarithm), `"dB"` (convert to decibels). Can only be
used when `power` is not `None`.
reference (`float`, *optional*, defaults to 1.0):
Sets the input spectrogram value that corresponds to 0 dB. For example, use `np.max(spectrogram)` to set
the loudest part to 0 dB. Must be greater than zero.
min_value (`float`, *optional*, defaults to `1e-10`):
The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking
`log(0)`. For a power spectrogram, the default of `1e-10` corresponds to a minimum of -100 dB. For an
amplitude spectrogram, the value `1e-5` corresponds to -100 dB. Must be greater than zero.
db_range (`float`, *optional*):
Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the difference between the
peak value and the smallest value will never be more than 80 dB. Must be greater than zero.
remove_dc_offset (`bool`, *optional*):
Subtract mean from waveform on each frame, applied before pre-emphasis. This should be set to `true` in
order to get the same results as `torchaudio.compliance.kaldi.fbank` when computing mel filters.
dtype (`np.dtype`, *optional*, defaults to `np.float32`):
Data type of the spectrogram tensor. If `power` is None, this argument is ignored and the dtype will be
`np.complex64`.
Returns:
`np.ndarray` containing a spectrogram of shape `(num_frequency_bins, length)` for a regular spectrogram or shape
`(num_mel_filters, length)` for a mel spectrogram.
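Putting the helpers above together, a sketch of a log-mel pipeline similar to what the Whisper feature extractor does (the random waveform stands in for real audio):
```python
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

waveform = np.random.randn(16000).astype(np.float32)  # stand-in for 1 s of 16 kHz audio

mel_filters = mel_filter_bank(
    num_frequency_bins=201,
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=16000,
    norm="slaney",
    mel_scale="slaney",
)
log_mel = spectrogram(
    waveform,
    window=window_function(400, "hann"),
    frame_length=400,
    hop_length=160,
    power=2.0,                # power spectrogram before the mel projection
    mel_filters=mel_filters,
    log_mel="log10",          # base-10 log, as in Whisper
)
print(log_mel.shape)  # (80, num_frames)
```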
audio_utils.power_to_db
Converts a power spectrogram to the decibel scale. This computes `10 * log10(spectrogram / reference)`, using basic
logarithm properties for numerical stability.
The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a
linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it.
This means that large variations in energy may not sound all that different if the sound is loud to begin with.
This compression operation makes the (mel) spectrogram features match more closely what humans actually hear.
Based on the implementation of `librosa.power_to_db`.
Args:
spectrogram (`np.ndarray`):
The input power (mel) spectrogram. Note that a power spectrogram has the amplitudes squared!
reference (`float`, *optional*, defaults to 1.0):
Sets the input spectrogram value that corresponds to 0 dB. For example, use `np.max(spectrogram)` to set
the loudest part to 0 dB. Must be greater than zero.
min_value (`float`, *optional*, defaults to `1e-10`):
The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking
`log(0)`. The default of `1e-10` corresponds to a minimum of -100 dB. Must be greater than zero.
db_range (`float`, *optional*):
Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the difference between the
peak value and the smallest value will never be more than 80 dB. Must be greater than zero.
Returns:
`np.ndarray`: the spectrogram in decibels
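For example (a random spectrogram stands in for real data):
```python
import numpy as np
from transformers.audio_utils import power_to_db

power_spec = np.random.rand(201, 100) + 1e-6  # stand-in power spectrogram
# Reference the loudest bin to 0 dB and keep at most 80 dB of dynamic range
db_spec = power_to_db(power_spec, reference=np.max(power_spec), db_range=80.0)
print(db_spec.max(), db_spec.min())  # 0.0 and a value no lower than -80.0
```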
audio_utils.amplitude_to_db
Converts an amplitude spectrogram to the decibel scale. This computes `20 * log10(spectrogram / reference)`, using
basic logarithm properties for numerical stability.
The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a
linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it.
This means that large variations in energy may not sound all that different if the sound is loud to begin with.
This compression operation makes the (mel) spectrogram features match more closely what humans actually hear.
Args:
spectrogram (`np.ndarray`):
The input amplitude (mel) spectrogram.
reference (`float`, *optional*, defaults to 1.0):
Sets the input spectrogram value that corresponds to 0 dB. For example, use `np.max(spectrogram)` to set
the loudest part to 0 dB. Must be greater than zero.
min_value (`float`, *optional*, defaults to `1e-5`):
The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking
`log(0)`. The default of `1e-5` corresponds to a minimum of -100 dB. Must be greater than zero.
db_range (`float`, *optional*):
Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the difference between the
peak value and the smallest value will never be more than 80 dB. Must be greater than zero.
Returns:
`np.ndarray`: the spectrogram in decibels | 426_2_35 |
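The two decibel conversions agree on the same signal, since `20 * log10(amplitude)` equals `10 * log10(amplitude**2)`; a quick sketch:
```python
import numpy as np
from transformers.audio_utils import amplitude_to_db, power_to_db

amplitude_spec = np.full((4, 4), 0.1)
print(np.allclose(amplitude_to_db(amplitude_spec), power_to_db(amplitude_spec**2)))  # True
```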
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 427_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 427_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#utilities-for-generation | .md | This page lists all the utility functions used by [`~generation.GenerationMixin.generate`]. | 427_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#generate-outputs | .md | The output of [`~generation.GenerationMixin.generate`] is an instance of a subclass of
[`~utils.ModelOutput`]. This output is a data structure containing all the information returned
by [`~generation.GenerationMixin.generate`], but that can also be used as a tuple or a dictionary.
Here's an example:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```
The `generation_output` object is a [`~generation.GenerateDecoderOnlyOutput`]. As we can
see in the documentation of that class below, this means it has the following attributes:
- `sequences`: the generated sequences of tokens
- `scores` (optional): the prediction scores of the language modeling head, for each generation step
- `hidden_states` (optional): the hidden states of the model, for each generation step
- `attentions` (optional): the attention weights of the model, for each generation step
Here we have the `scores` since we passed along `output_scores=True`, but we don't have `hidden_states` and
`attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
will get `None`. Here for instance `generation_output.scores` are all the generated prediction scores of the
language modeling head, and `generation_output.attentions` is `None`.
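Continuing the example above, a quick sketch of attribute access (the shapes depend on the prompt and the generated length):
```python
print(generation_output.sequences.shape)  # (batch_size, sequence_length)
print(generation_output.scores[0].shape)  # (batch_size, vocab_size), first generated token
print(generation_output.attentions)       # None, since output_attentions=True was not passed
```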
When using our `generation_output` object as a tuple, it only keeps the attributes that don't have `None` values.
Here, for instance, it has two elements, `sequences` then `scores`, so
```python
generation_output[:2]
```
will return the tuple `(generation_output.sequences, generation_output.scores)` for instance.
When using our `generation_output` object as a dictionary, it only keeps the attributes that don't have `None`
values. Here, for instance, it has two keys that are `sequences` and `scores`.
We document here all output types. | 427_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#pytorch | .md | generation.GenerateDecoderOnlyOutput
Outputs of decoder-only generation models, when using non-beam methods.
Args:
sequences (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter
if all batches finished early due to the `eos_token_id`.
scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`):
Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for
each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.
logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`):
Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for
each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.
attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.
hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size, generated_length, hidden_size)`.
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True`):
Returns the model cache, used to speed up decoding. Different models have a different cache format, check
the model's documentation. Usually, a [`~cache_utils.Cache`] instance.
generation.GenerateEncoderDecoderOutput
Outputs of encoder-decoder generation models, when using non-beam methods.
Args:
sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`):
The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter
if all batches finished early due to the `eos_token_id`.
scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`):
Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for
each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.
logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`):
Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for
each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer of the encoder) of shape `(batch_size, num_heads,
sequence_length, sequence_length)`.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
decoder_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.
cross_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.
decoder_hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size, generated_length, hidden_size)`.
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
Returns the model cache, used to speed up decoding. Different models have a different cache format, check
the model's documentation. Usually, a [`~cache_utils.Cache`] instance.
generation.GenerateBeamDecoderOnlyOutput
Outputs of decoder-only generation models, when using beam methods.
Args:
sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`):
The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter
if all batches finished early due to the `eos_token_id`.
sequences_scores (`torch.FloatTensor` of shape `(batch_size*num_return_sequences)`, *optional*, returned when `output_scores=True`):
Final beam scores of the generated `sequences`.
scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`):
Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting
of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam.
Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token),
with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.
logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`):
Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for
each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.
beam_indices (`torch.LongTensor`, *optional*, returned when `output_scores=True`):
Beam indices of generated token id at each generation step. `torch.LongTensor` of shape
`(batch_size*num_return_sequences, sequence_length)`.
attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size*num_beams, num_heads, generated_length, sequence_length)`.
hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`.
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True`):
Returns the model cache, used to speed up decoding. Different models have a different cache format, check
the model's documentation. Usually, a [`~cache_utils.Cache`] instance.
generation.GenerateBeamEncoderDecoderOutput
Outputs of encoder-decoder generation models, when using beam methods.
Args:
sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`):
The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter
if all batches finished early due to the `eos_token_id`.
sequences_scores (`torch.FloatTensor` of shape `(batch_size*num_return_sequences)`, *optional*, returned when `output_scores=True`):
Final beam scores of the generated `sequences`.
scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`):
Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting
of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam.
Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token),
with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.
logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`):
Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for
each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.
beam_indices (`torch.LongTensor`, *optional*, returned when `output_scores=True`):
Beam indices of generated token id at each generation step. `torch.LongTensor` of shape
`(batch_size*num_return_sequences, sequence_length)`.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer of the encoder) of shape `(batch_size, num_heads,
sequence_length, sequence_length)`.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size*num_beams*num_return_sequences, sequence_length, hidden_size)`.
decoder_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, num_heads, generated_length,
sequence_length)`.
cross_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.
decoder_hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`):
Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of
`torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`.
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True`):
Returns the model cache, used to speed up decoding. Different models have a different cache format, check
the model's documentation. Usually, a [`~cache_utils.Cache`] instance. | 427_3_28 |
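As an illustration of the beam-search variants above, a sketch that inspects the returned object (the checkpoint and sizes are only examples):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=2,
    num_return_sequences=2,
    max_new_tokens=5,
    return_dict_in_generate=True,
    output_scores=True,
)
print(type(out).__name__)          # GenerateBeamDecoderOnlyOutput
print(out.sequences.shape)         # (batch_size*num_return_sequences, sequence_length)
print(out.sequences_scores.shape)  # (batch_size*num_return_sequences,)
```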
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#tensorflow | .md | [[autodoc]] generation.TFGreedySearchEncoderDecoderOutput
[[autodoc]] generation.TFGreedySearchDecoderOnlyOutput
[[autodoc]] generation.TFSampleEncoderDecoderOutput
[[autodoc]] generation.TFSampleDecoderOnlyOutput
[[autodoc]] generation.TFBeamSearchEncoderDecoderOutput
[[autodoc]] generation.TFBeamSearchDecoderOnlyOutput
[[autodoc]] generation.TFBeamSampleEncoderDecoderOutput
[[autodoc]] generation.TFBeamSampleDecoderOnlyOutput
[[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput
[[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput | 427_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#flax | .md | [[autodoc]] generation.FlaxSampleOutput
[[autodoc]] generation.FlaxGreedySearchOutput
[[autodoc]] generation.FlaxBeamSearchOutput | 427_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#logitsprocessor | .md | A [`LogitsProcessor`] can be used to modify the prediction scores of a language model head for
generation. | 427_6_0 |
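As a sketch of the interface, a toy processor that rescales the scores before sampling (the class name and temperature value are made up for illustration):
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

class ScaleLogits(LogitsProcessor):
    """Toy processor: divide every score by a constant temperature."""

    def __init__(self, temperature: float):
        self.temperature = temperature

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        return scores / self.temperature

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, my dog is", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=10,
    logits_processor=LogitsProcessorList([ScaleLogits(0.7)]),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```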
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md | https://huggingface.co/docs/transformers/en/internal/generation_utils/#pytorch | .md | [`LogitsProcessor`] enforcing alternated generation between the two codebooks of Bark.
<Tip warning={true}>
This logits processor is exclusively compatible with
[Bark](https://huggingface.co/docs/transformers/en/model_doc/bark)'s fine submodel. See the model documentation
for examples.
</Tip>
Args:
input_start_len (`int`):
The length of the initial input sequence.
semantic_vocab_size (`int`):
Vocabulary size of the semantic part, i.e. the number of tokens associated to the semantic vocabulary.
codebook_size (`int`):
Number of tokens associated to the codebook.
- __call__
[`LogitsProcessor`] for classifier free guidance (CFG). The scores are split over the batch dimension,
where the first half corresponds to the conditional logits (predicted from the input prompt) and the second half
corresponds to the unconditional logits (predicted from an empty or 'null' prompt). The processor computes a
weighted average across the conditional and unconditional logits, parameterized by the `guidance_scale`.
See [the paper](https://arxiv.org/abs/2306.05284) for more information.
<Tip warning={true}>
This logits processor is exclusively compatible with
[MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen)
</Tip>
Args:
guidance_scale (`float`):
The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale > 1`.
Higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer quality.
Examples:
```python
>>> from transformers import AutoProcessor, MusicgenForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
>>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

>>> inputs = processor(
... text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
... padding=True,
... return_tensors="pt",
... )
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```
- __call__
[`LogitsProcessor`] that works similarly to [`NoRepeatNGramLogitsProcessor`], but applied exclusively to prevent
the repetition of n-grams present in the prompt. | 427_7_6 |