source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 455_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#models | .md | The base classes [`PreTrainedModel`], [`TFPreTrainedModel`], and
[`FlaxPreTrainedModel`] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).
[`PreTrainedModel`] and [`TFPreTrainedModel`] also implement a few methods which
are common among all the models to:
- resize the input token embeddings when new tokens are added to the vocabulary | 455_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#models | .md | are common among all the models to:
- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model.
The other methods that are common to each model are defined in [`~modeling_utils.ModuleUtilsMixin`]
(for the PyTorch models) and [`~modeling_tf_utils.TFModuleUtilsMixin`] (for the TensorFlow models) or
for text generation, [`~generation.GenerationMixin`] (for the PyTorch models),
[`~generation.TFGenerationMixin`] (for the TensorFlow models) and | 455_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#models | .md | [`~generation.TFGenerationMixin`] (for the TensorFlow models) and
[`~generation.FlaxGenerationMixin`] (for the Flax/JAX models). | 455_1_2 |
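For illustration, a minimal sketch (not part of the chunked docs above, assuming a standard BERT checkpoint) of the two common operations mentioned: resizing the input token embeddings after adding tokens, and pruning attention heads.
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Add new tokens to the vocabulary and resize the input embeddings to match
tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
model.resize_token_embeddings(len(tokenizer))

# Prune attention heads: {layer_index: [head indices to prune]}
model.prune_heads({0: [0, 2], 1: [1]})
```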
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#pretrainedmodel | .md | Base class for all models.
[`PreTrainedModel`] takes care of storing the configuration of the models and handles methods for loading,
downloading and saving models as well as a few methods common to all models to:
- resize the input embeddings,
- prune heads in the self-attention layers.
Class attributes (overridden by derived classes):
- **config_class** ([`PretrainedConfig`]) -- A subclass of [`PretrainedConfig`] to use as configuration class
for this model architecture. | 455_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#pretrainedmodel | .md | for this model architecture.
- **load_tf_weights** (`Callable`) -- A python *method* for loading a TensorFlow checkpoint in a PyTorch model,
taking as arguments:
- **model** ([`PreTrainedModel`]) -- An instance of the model on which to load the TensorFlow checkpoint.
- **config** ([`PretrainedConfig`]) -- An instance of the configuration associated with the model.
- **path** (`str`) -- A path to the TensorFlow checkpoint. | 455_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#pretrainedmodel | .md | - **path** (`str`) -- A path to the TensorFlow checkpoint.
- **base_model_prefix** (`str`) -- A string indicating the attribute associated with the base model in derived
classes of the same architecture adding modules on top of the base model.
- **is_parallelizable** (`bool`) -- A flag indicating whether this model supports model parallelization.
- **main_input_name** (`str`) -- The name of the principal input to the model (often `input_ids` for NLP | 455_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#pretrainedmodel | .md | - **main_input_name** (`str`) -- The name of the principal input to the model (often `input_ids` for NLP
models, `pixel_values` for vision models and `input_values` for speech models).
- push_to_hub
- all
Custom models should also define a `_supports_assign_param_buffer` attribute, which determines whether superfast init
can be applied to the particular model. A sign that your model needs this is a failing `test_save_and_load_from_pretrained`;
if so, set this attribute to `False`. | 455_2_3 |
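As a hedged sketch of how the class attributes above are typically overridden, here is a hypothetical derived model (`MyConfig` and `MyModel` are illustrative names, not library classes):
```python
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel


class MyConfig(PretrainedConfig):
    model_type = "my-model"

    def __init__(self, vocab_size=100, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size


class MyModel(PreTrainedModel):
    config_class = MyConfig          # configuration class for this architecture
    base_model_prefix = "my_model"   # attribute name of the base model in derived classes
    main_input_name = "input_ids"    # principal input of the model

    def __init__(self, config):
        super().__init__(config)
        self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size)

    def forward(self, input_ids):
        return self.embeddings(input_ids)
```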
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#moduleutilsmixin | .md | modeling_utils.ModuleUtilsMixin
A few utilities for `torch.nn.Modules`, to be used as a mixin. | 455_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#tfpretrainedmodel | .md | TFPreTrainedModel
- push_to_hub
- all | 455_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#tfmodelutilsmixin | .md | modeling_tf_utils.TFModelUtilsMixin | 455_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#flaxpretrainedmodel | .md | FlaxPreTrainedModel
- push_to_hub
- all | 455_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#pushing-to-the-hub | .md | utils.PushToHubMixin
A Mixin containing the functionality to push a model or tokenizer to the hub. | 455_7_0 |
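A minimal usage sketch (assuming you are logged in via `huggingface-cli login`; the repository name `"your-username/my-model"` is a placeholder):
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
# Upload the weights and configuration to the Hub
model.push_to_hub("your-username/my-model")
```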
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#sharded-checkpoints | .md | modeling_utils.load_sharded_checkpoint
This is the same as
[`torch.nn.Module.load_state_dict`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict)
but for a sharded checkpoint.
This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being
loaded in the model.
Args:
model (`torch.nn.Module`): The model in which to load the checkpoint. | 455_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#sharded-checkpoints | .md | loaded in the model.
Args:
model (`torch.nn.Module`): The model in which to load the checkpoint.
folder (`str` or `os.PathLike`): A path to a folder containing the sharded checkpoint.
strict (`bool`, *optional*, defaults to `True`):
Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint.
prefer_safe (`bool`, *optional*, defaults to `False`):
If both safetensors and PyTorch save files are present in checkpoint and `prefer_safe` is True, the | 455_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md | https://huggingface.co/docs/transformers/en/main_classes/model/#sharded-checkpoints | .md | If both safetensors and PyTorch save files are present in checkpoint and `prefer_safe` is True, the
safetensors files will be loaded. Otherwise, PyTorch files are always loaded when possible.
Returns:
`NamedTuple`: A named tuple with `missing_keys` and `unexpected_keys` fields
- `missing_keys` is a list of str containing the missing keys
- `unexpected_keys` is a list of str containing the unexpected keys | 455_8_2 |
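A hedged usage sketch of `load_sharded_checkpoint` ("path/to/sharded_checkpoint" is a placeholder for a folder produced by `save_pretrained` with sharded weights, assumed to match the model architecture):
```python
from transformers import AutoModelForCausalLM
from transformers.modeling_utils import load_sharded_checkpoint

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
# Load the checkpoint shard by shard, keeping only one shard in RAM at a time
result = load_sharded_checkpoint(model, "path/to/sharded_checkpoint", strict=True)
print(result.missing_keys, result.unexpected_keys)
```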
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 456_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 456_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipelines | .md | The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of
the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity
Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the
[task summary](../task_summary) for examples of use.
There are two categories of pipeline abstractions to be aware of: | 456_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipelines | .md | [task summary](../task_summary) for examples of use.
There are two categories of pipeline abstractions to be aware of:
- The [`pipeline`] which is the most powerful object encapsulating all other pipelines.
- Task-specific pipelines are available for [audio](#audio), [computer vision](#computer-vision), [natural language processing](#natural-language-processing), and [multimodal](#multimodal) tasks. | 456_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | The *pipeline* abstraction is a wrapper around all the other available pipelines. It is instantiated as any other
pipeline but provides some additional quality-of-life features.
Simple call on one item:
```python
>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
If you want to use a specific model from the [hub](https://huggingface.co) you can ignore the task if the model on
the hub already defines it:
```python | 456_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | the hub already defines it:
```python
>>> pipe = pipeline(model="FacebookAI/roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
```
To call a pipeline on many items, you can call it with a *list*.
```python
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
``` | 456_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | [{'label': 'POSITIVE', 'score': 0.9998743534088135},
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
```
To iterate over full datasets it is recommended to use a `dataset` directly. This means you don't need to allocate
the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on
GPU. If it doesn't, don't hesitate to create an issue.
```python
import datasets
from transformers import pipeline | 456_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | GPU. If it doesn't, don't hesitate to create an issue.
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm | 456_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test") | 456_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
For ease of use, a generator is also possible:
```python
from transformers import pipeline
pipe = pipeline("text-classification") | 456_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | pipe = pipeline("text-classification")
def data():
while True:
# This could come from a dataset, a database, a queue or HTTP request
# in a server
# Caveat: because this is iterative, you cannot set `num_workers > 1`
# to use multiple threads to preprocess data. You can still have 1 thread that
# does the preprocessing while the main thread runs the big inference
yield "This is a test" | 456_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | for out in pipe(data()):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
Utility factory method to build a [`Pipeline`].
A pipeline consists of:
- One or more components for pre-processing model inputs, such as a [tokenizer](tokenizer),
[image_processor](image_processor), [feature_extractor](feature_extractor), or [processor](processors).
- A [model](model) that generates predictions from the inputs. | 456_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | - A [model](model) that generates predictions from the inputs.
- Optional post-processing steps to refine the model's output, which can also be handled by processors.
<Tip>
While there are such optional arguments as `tokenizer`, `feature_extractor`, `image_processor`, and `processor`,
they shouldn't be specified all at once. If these components are not provided, `pipeline` will try to load
required ones automatically. In case you want to provide these components explicitly, please refer to a | 456_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | required ones automatically. In case you want to provide these components explicitly, please refer to a
specific pipeline in order to get more details regarding what components are required.
</Tip>
Args:
task (`str`):
The task defining which pipeline will be returned. Currently accepted tasks are:
- `"audio-classification"`: will return a [`AudioClassificationPipeline`].
- `"automatic-speech-recognition"`: will return a [`AutomaticSpeechRecognitionPipeline`]. | 456_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | - `"automatic-speech-recognition"`: will return a [`AutomaticSpeechRecognitionPipeline`].
- `"depth-estimation"`: will return a [`DepthEstimationPipeline`].
- `"document-question-answering"`: will return a [`DocumentQuestionAnsweringPipeline`].
- `"feature-extraction"`: will return a [`FeatureExtractionPipeline`].
- `"fill-mask"`: will return a [`FillMaskPipeline`]:.
- `"image-classification"`: will return a [`ImageClassificationPipeline`]. | 456_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | - `"image-classification"`: will return a [`ImageClassificationPipeline`].
- `"image-feature-extraction"`: will return an [`ImageFeatureExtractionPipeline`].
- `"image-segmentation"`: will return a [`ImageSegmentationPipeline`].
- `"image-text-to-text"`: will return a [`ImageTextToTextPipeline`].
- `"image-to-image"`: will return a [`ImageToImagePipeline`].
- `"image-to-text"`: will return a [`ImageToTextPipeline`].
- `"mask-generation"`: will return a [`MaskGenerationPipeline`]. | 456_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | - `"image-to-text"`: will return a [`ImageToTextPipeline`].
- `"mask-generation"`: will return a [`MaskGenerationPipeline`].
- `"object-detection"`: will return a [`ObjectDetectionPipeline`].
- `"question-answering"`: will return a [`QuestionAnsweringPipeline`].
- `"summarization"`: will return a [`SummarizationPipeline`].
- `"table-question-answering"`: will return a [`TableQuestionAnsweringPipeline`].
- `"text2text-generation"`: will return a [`Text2TextGenerationPipeline`]. | 456_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | - `"text2text-generation"`: will return a [`Text2TextGenerationPipeline`].
- `"text-classification"` (alias `"sentiment-analysis"` available): will return a
[`TextClassificationPipeline`].
- `"text-generation"`: will return a [`TextGenerationPipeline`]:.
- `"text-to-audio"` (alias `"text-to-speech"` available): will return a [`TextToAudioPipeline`]:.
- `"token-classification"` (alias `"ner"` available): will return a [`TokenClassificationPipeline`].
- `"translation"`: will return a [`TranslationPipeline`]. | 456_2_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | - `"translation"`: will return a [`TranslationPipeline`].
- `"translation_xx_to_yy"`: will return a [`TranslationPipeline`].
- `"video-classification"`: will return a [`VideoClassificationPipeline`].
- `"visual-question-answering"`: will return a [`VisualQuestionAnsweringPipeline`].
- `"zero-shot-classification"`: will return a [`ZeroShotClassificationPipeline`].
- `"zero-shot-image-classification"`: will return a [`ZeroShotImageClassificationPipeline`]. | 456_2_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | - `"zero-shot-image-classification"`: will return a [`ZeroShotImageClassificationPipeline`].
- `"zero-shot-audio-classification"`: will return a [`ZeroShotAudioClassificationPipeline`].
- `"zero-shot-object-detection"`: will return a [`ZeroShotObjectDetectionPipeline`].
model (`str` or [`PreTrainedModel`] or [`TFPreTrainedModel`], *optional*):
The model that will be used by the pipeline to make predictions. This can be a model identifier or an | 456_2_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | The model that will be used by the pipeline to make predictions. This can be a model identifier or an
actual instance of a pretrained model inheriting from [`PreTrainedModel`] (for PyTorch) or
[`TFPreTrainedModel`] (for TensorFlow).
If not provided, the default for the `task` will be loaded.
config (`str` or [`PretrainedConfig`], *optional*):
The configuration that will be used by the pipeline to instantiate the model. This can be a model | 456_2_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | The configuration that will be used by the pipeline to instantiate the model. This can be a model
identifier or an actual pretrained model configuration inheriting from [`PretrainedConfig`].
If not provided, the default configuration file for the requested model will be used. That means that if
`model` is given, its default configuration will be used. However, if `model` is not supplied, this
`task`'s default model's config is used instead.
tokenizer (`str` or [`PreTrainedTokenizer`], *optional*): | 456_2_17 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | `task`'s default model's config is used instead.
tokenizer (`str` or [`PreTrainedTokenizer`], *optional*):
The tokenizer that will be used by the pipeline to encode data for the model. This can be a model
identifier or an actual pretrained tokenizer inheriting from [`PreTrainedTokenizer`].
If not provided, the default tokenizer for the given `model` will be loaded (if it is a string). If `model`
is not specified or not a string, then the default tokenizer for `config` is loaded (if it is a string). | 456_2_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | is not specified or not a string, then the default tokenizer for `config` is loaded (if it is a string).
However, if `config` is also not given or not a string, then the default tokenizer for the given `task`
will be loaded.
feature_extractor (`str` or [`PreTrainedFeatureExtractor`], *optional*):
The feature extractor that will be used by the pipeline to encode data for the model. This can be a model
identifier or an actual pretrained feature extractor inheriting from [`PreTrainedFeatureExtractor`]. | 456_2_19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | identifier or an actual pretrained feature extractor inheriting from [`PreTrainedFeatureExtractor`].
Feature extractors are used for non-NLP models, such as Speech or Vision models as well as multi-modal
models. Multi-modal models will also require a tokenizer to be passed.
If not provided, the default feature extractor for the given `model` will be loaded (if it is a string). If
`model` is not specified or not a string, then the default feature extractor for `config` is loaded (if it | 456_2_20 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | `model` is not specified or not a string, then the default feature extractor for `config` is loaded (if it
is a string). However, if `config` is also not given or not a string, then the default feature extractor
for the given `task` will be loaded.
image_processor (`str` or [`BaseImageProcessor`], *optional*):
The image processor that will be used by the pipeline to preprocess images for the model. This can be a
model identifier or an actual image processor inheriting from [`BaseImageProcessor`]. | 456_2_21 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | model identifier or an actual image processor inheriting from [`BaseImageProcessor`].
Image processors are used for Vision models and multi-modal models that require image inputs. Multi-modal
models will also require a tokenizer to be passed.
If not provided, the default image processor for the given `model` will be loaded (if it is a string). If
`model` is not specified or not a string, then the default image processor for `config` is loaded (if it is
a string). | 456_2_22 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | `model` is not specified or not a string, then the default image processor for `config` is loaded (if it is
a string).
processor (`str` or [`ProcessorMixin`], *optional*):
The processor that will be used by the pipeline to preprocess data for the model. This can be a model
identifier or an actual processor inheriting from [`ProcessorMixin`].
Processors are used for multi-modal models that require multi-modal inputs, for example, a model that
requires both text and image inputs. | 456_2_23 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | requires both text and image inputs.
If not provided, the default processor for the given `model` will be loaded (if it is a string). If `model`
is not specified or not a string, then the default processor for `config` is loaded (if it is a string).
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and | 456_2_24 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and
both frameworks are installed, will default to the framework of the `model`, or to PyTorch if no model is
provided.
revision (`str`, *optional*, defaults to `"main"`):
When passing a task name or a string model identifier: The specific model version to use. It can be a
branch name, a tag name, or a commit id, since we use a git-based system for storing models and other | 456_2_25 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | branch name, a tag name, or a commit id, since we use a git-based system for storing models and other
artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
use_fast (`bool`, *optional*, defaults to `True`):
Whether or not to use a Fast tokenizer if possible (a [`PreTrainedTokenizerFast`]).
use_auth_token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated | 456_2_26 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
when running `huggingface-cli login` (stored in `~/.huggingface`).
device (`int` or `str` or `torch.device`):
Defines the device (*e.g.*, `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank like `1`) on which this
pipeline will be allocated.
device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): | 456_2_27 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | pipeline will be allocated.
device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut). When `accelerate` library is present, set
`device_map="auto"` to compute the most optimized `device_map` automatically (see
[here](https://huggingface.co/docs/accelerate/main/en/package_reference/big_modeling#accelerate.cpu_offload)
for more information).
<Tip warning={true}> | 456_2_28 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | for more information).
<Tip warning={true}>
Do not use `device_map` AND `device` at the same time as they will conflict
</Tip>
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
trust_remote_code (`bool`, *optional*, defaults to `False`):
Whether or not to allow for custom code defined on the Hub in their own modeling, configuration, | 456_2_29 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | Whether or not to allow for custom code defined on the Hub in their own modeling, configuration,
tokenization or even pipeline files. This option should only be set to `True` for repositories you trust
and in which you have read the code, as it will execute code present on the Hub on your local machine.
model_kwargs (`Dict[str, Any]`, *optional*):
Additional dictionary of keyword arguments passed along to the model's `from_pretrained(...,
**model_kwargs)` function.
kwargs (`Dict[str, Any]`, *optional*): | 456_2_30 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | **model_kwargs)` function.
kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to the specific pipeline init (see the documentation for the
corresponding pipeline class for possible values).
Returns:
[`Pipeline`]: A suitable pipeline for the task.
Examples:
```python
>>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer | 456_2_31 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | >>> # Sentiment analysis pipeline
>>> analyzer = pipeline("sentiment-analysis")
>>> # Question answering pipeline, specifying the checkpoint identifier
>>> oracle = pipeline(
... "question-answering", model="distilbert/distilbert-base-cased-distilled-squad", tokenizer="google-bert/bert-base-cased"
... ) | 456_2_32 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction | .md | >>> # Named entity recognition pipeline, passing in a specific model and tokenizer
>>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> recognizer = pipeline("ner", model=model, tokenizer=tokenizer)
``` | 456_2_33 |
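As a further hedged sketch combining several of the arguments documented above (it assumes a CUDA GPU and that `accelerate` is installed for `device_map="auto"`):
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai-community/gpt2",
    torch_dtype=torch.float16,  # half precision, forwarded through model_kwargs
    device_map="auto",          # let accelerate place the weights
)
print(generator("Hello, my dog is", max_new_tokens=10)[0]["generated_text"])
```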
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | All pipelines can use batching. This will work
whenever the pipeline uses its streaming ability (so when passing lists or `Dataset` or `generator`).
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets | 456_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
```
<Tip warning={true}> | 456_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | # Exactly the same output as before, but the contents are passed
# as batches to the model
```
<Tip warning={true}>
However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending
on hardware, data and the actual model being used.
Example where it's mostly a speedup:
</Tip>
```python
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm | 456_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | pipe = pipeline("text-classification", device=0)
class MyDataset(Dataset):
def __len__(self):
return 5000
def __getitem__(self, i):
return "This is a test"
dataset = MyDataset() | 456_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | for batch_size in [1, 8, 64, 256]:
print("-" * 30)
print(f"Streaming batch_size={batch_size}")
for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
pass
```
```
# On GTX 970
------------------------------
Streaming no batching
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8 | 456_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | ------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s] | 456_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | 100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
```
Example where it's mostly a slowdown:
```python
class MyDataset(Dataset):
def __len__(self):
return 5000 | 456_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | def __getitem__(self, i):
if i % 64 == 0:
n = 100
else:
n = 1
return "This is a test" * n
```
This dataset produces an occasional very long sentence compared to the others. In that case, the **whole** batch will need to be 400
tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the big slowdown. Even worse, on
bigger batches, the program simply crashes.
```
------------------------------
Streaming no batching | 456_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | bigger batches, the program simply crashes.
```
------------------------------
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64 | 456_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | ------------------------------
Streaming batch_size=64
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
.... | 456_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
```
There are no good (general) solutions for this problem, and your mileage may vary depending on your use case.
For users, a rule of thumb is: | 456_3_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | For users, a rule of thumb is:
- **Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the
only way to go.**
- If you are latency constrained (live product doing inference), don't batch.
- If you are using CPU, don't batch.
- If you are using throughput (you want to run your model on a bunch of static data), on GPU, then:
- If you have no clue about the size of the sequence_length ("natural" data), by default don't batch, measure and | 456_3_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | - If you have no clue about the size of the sequence_length ("natural" data), by default don't batch, measure and
tentatively try to add it, and add OOM checks to recover when it fails (and it will fail at some point if you don't
control the sequence_length).
- If your sequence_length is super regular, then batching is more likely to be VERY interesting, measure and push
it until you get OOMs.
- The larger the GPU, the more likely batching is to be interesting | 456_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-batching | .md | it until you get OOMs.
- The larger the GPU, the more likely batching is to be interesting
- As soon as you enable batching, make sure you can handle OOMs nicely. | 456_3_13 |
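The last point ("handle OOMs nicely") could look like the following sketch; it is not from the original docs and assumes a recent PyTorch version where `torch.cuda.OutOfMemoryError` exists:
```python
import torch
from transformers import pipeline

pipe = pipeline("text-classification", device=0)
texts = ["This is a test"] * 5000

# Try large batch sizes first and back off when the GPU runs out of memory
for batch_size in (256, 64, 8, 1):
    try:
        results = list(pipe(texts, batch_size=batch_size))
        break
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        print(f"OOM at batch_size={batch_size}, retrying with a smaller batch")
```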
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-chunk-batching | .md | `zero-shot-classification` and `question-answering` are slightly specific in the sense that a single input might yield
multiple forward passes of a model. Under normal circumstances, this would cause issues with the `batch_size` argument.
To circumvent this issue, both of these pipelines are a bit specific: they are a `ChunkPipeline` instead of a
regular `Pipeline`. In short:
```python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed) | 456_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-chunk-batching | .md | regular `Pipeline`. In short:
```python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
```
Now becomes:
```python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
model_outputs = pipe.forward(preprocessed)
all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
```
This should be very transparent to your code because the pipelines are used in
the same way. | 456_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-chunk-batching | .md | ```
This should be very transparent to your code because the pipelines are used in
the same way.
This is a simplified view, since the pipeline can handle the batching automatically. This means you don't have to care
about how many forward passes your inputs are actually going to trigger; you can optimize the `batch_size`
independently of the inputs. The caveats from the previous section still apply. | 456_4_2 |
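A hedged illustration of this point: a question-answering pipeline is a `ChunkPipeline`, so a single long (question, context) pair can spawn several forward passes while `batch_size` is still tuned independently. The repeated context below is only a toy way to force chunking.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert/distilbert-base-cased-distilled-squad")
long_context = "Hugging Face is based in New York City. " * 200  # long enough to be chunked
output = qa(question="Where is Hugging Face based?", context=long_context, batch_size=8)
print(output)
```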
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-fp16-inference | .md | Models can be run in FP16 which can be significantly faster on GPU while saving memory. Most models will not suffer noticeable performance loss from this. The larger the model, the less likely that it will.
To enable FP16 inference, you can simply pass `torch_dtype=torch.float16` or `torch_dtype='float16'` to the pipeline constructor. Note that this only works for models with a PyTorch backend. Your inputs will be converted to FP16 internally. | 456_5_0 |
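A minimal sketch of FP16 inference as described above (PyTorch backend and a GPU are assumed):
```python
import torch
from transformers import pipeline

pipe = pipeline("text-classification", device=0, torch_dtype=torch.float16)
# torch_dtype="float16" is an equivalent string form
print(pipe("This restaurant is awesome"))
```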
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-custom-code | .md | If you want to override a specific pipeline, don't hesitate to create an issue for your task at hand; the goal of the
pipelines is to be easy to use and support most cases, so `transformers` may be able to support your use case.
If you want to simply try it, you can:
- Subclass your pipeline of choice
```python
class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your code goes here
        outputs = super().postprocess(model_outputs, **kwargs)
        # And here (e.g. rescale the scores)
        return outputs | 456_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipeline-custom-code | .md | my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
# or if you use *pipeline* function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
```
That should enable you to do all the custom code you want. | 456_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#implementing-a-pipeline | .md | [Implementing a new pipeline](../add_new_pipeline) | 456_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audio | .md | Pipelines available for audio tasks include the following. | 456_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | Audio classification pipeline using any `AutoModelForAudioClassification`. This pipeline predicts the class of a
raw waveform or an audio file. In case of an audio file, ffmpeg should be installed to support multiple audio
formats.
Example:
```python
>>> from transformers import pipeline | 456_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | >>> classifier = pipeline(model="superb/wav2vec2-base-superb-ks")
>>> classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
[{'score': 0.997, 'label': '_unknown_'}, {'score': 0.002, 'label': 'left'}, {'score': 0.0, 'label': 'yes'}, {'score': 0.0, 'label': 'down'}, {'score': 0.0, 'label': 'stop'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial) | 456_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | ```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"audio-classification"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=audio-classification).
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]): | 456_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
feature_extractor ([`SequenceFeatureExtractor`]):
The feature extractor that will be used by the pipeline to encode data for the model. This object inherits from
[`SequenceFeatureExtractor`].
modelcard (`str` or [`ModelCard`], *optional*): | 456_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | [`SequenceFeatureExtractor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and
both frameworks are installed, will default to the framework of the `model`, or to PyTorch if no model is
provided. | 456_9_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | both frameworks are installed, will default to the framework of the `model`, or to PyTorch if no model is
provided.
task (`str`, defaults to `""`):
A task-identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline will use *DataLoader* (when passing a dataset, on GPU for a Pytorch model), the number of
workers to be used.
batch_size (`int`, *optional*, defaults to 1): | 456_9_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | workers to be used.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline will use *DataLoader* (when passing a dataset, on GPU for a Pytorch model), the size of
the batch to use, for inference this is not always beneficial, please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters. | 456_9_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU, a positive integer will run the model on
the associated CUDA device id. You can pass a native `torch.device` or a `str` too.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model | 456_9_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#audioclassificationpipeline | .md | Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`)
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
the raw output data, e.g. text.
- __call__
- all | 456_9_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | Pipeline that aims at extracting spoken text contained within some audio.
The input can be either a raw waveform or an audio file. In the case of an audio file, ffmpeg should be installed
to support multiple audio formats.
Example:
```python
>>> from transformers import pipeline | 456_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | >>> transcriber = pipeline(model="openai/whisper-base")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
{'text': ' He hoped there would be stew for dinner, turnips and carrots and bruised potatoes and fat mutton pieces to be ladled out in thick, peppered flour-fatten sauce.'}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]): | 456_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
feature_extractor ([`SequenceFeatureExtractor`]):
The feature extractor that will be used by the pipeline to encode waveform for the model.
tokenizer ([`PreTrainedTokenizer`]): | 456_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | The feature extractor that will be used by the pipeline to encode waveform for the model.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
decoder (`pyctcdecode.BeamSearchDecoderCTC`, *optional*):
[PyCTCDecode's
BeamSearchDecoderCTC](https://github.com/kensho-technologies/pyctcdecode/blob/2fd33dc37c4111417e08d89ccd23d28e9b308d19/pyctcdecode/decoder.py#L180) | 456_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | can be passed for language model boosted decoding. See [`Wav2Vec2ProcessorWithLM`] for more information.
chunk_length_s (`float`, *optional*, defaults to 0):
The input length of each chunk. If `chunk_length_s = 0` then chunking is disabled (default).
<Tip>
For more information on how to effectively use `chunk_length_s`, please have a look at the [ASR chunking
blog post](https://huggingface.co/blog/asr-chunking).
</Tip>
stride_length_s (`float`, *optional*, defaults to `chunk_length_s / 6`): | 456_10_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | </Tip>
stride_length_s (`float`, *optional*, defaults to `chunk_length_s / 6`):
The length of stride on the left and right of each chunk. Used only with `chunk_length_s > 0`. This enables
the model to *see* more context and infer letters better than without this context but the pipeline
discards the stride bits at the end to make the final reconstitution as perfect as possible.
<Tip>
For more information on how to effectively use `stride_length_s`, please have a look at the [ASR chunking | 456_10_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | <Tip>
For more information on how to effectively use `stride_length_s`, please have a look at the [ASR chunking
blog post](https://huggingface.co/blog/asr-chunking).
</Tip>
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, will default to the one currently installed. If no framework is | 456_10_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | installed. If no framework is specified, will default to the one currently installed. If no framework is
specified and both frameworks are installed, will default to the framework of the `model`, or to PyTorch if
no model is provided.
device (Union[`int`, `torch.device`], *optional*):
Device ordinal for CPU/GPU support. Setting this to `None` will leverage CPU, a positive integer will run the
model on the associated CUDA device id.
torch_dtype (Union[`int`, `torch.dtype`], *optional*): | 456_10_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#automaticspeechrecognitionpipeline | .md | model on the associated CUDA device id.
torch_dtype (Union[`int`, `torch.dtype`], *optional*):
The data-type (dtype) of the computation. Setting this to `None` will use float32 precision. Set to
`torch.float16` or `torch.bfloat16` to use half-precision in the respective dtypes.
- __call__
- all | 456_10_8 |
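A hedged sketch of long-form transcription using the chunking parameters described above (the audio URL is the same sample used earlier in this document; the exact chunk and stride values are illustrative):
```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-base",
    chunk_length_s=30,   # split long audio into 30s chunks
    stride_length_s=5,   # overlap between chunks, discarded at reconstitution
)
print(transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")["text"])
```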
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#texttoaudiopipeline | .md | Text-to-audio generation pipeline using any `AutoModelForTextToWaveform` or `AutoModelForTextToSpectrogram`. This
pipeline generates an audio file from an input text and optional other conditional inputs.
Example:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(model="suno/bark-small")
>>> output = pipe("Hey it's HuggingFace on the phone!") | 456_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#texttoaudiopipeline | .md | >>> pipe = pipeline(model="suno/bark-small")
>>> output = pipe("Hey it's HuggingFace on the phone!")
>>> audio = output["audio"]
>>> sampling_rate = output["sampling_rate"]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
<Tip>
You can specify parameters passed to the model by using [`TextToAudioPipeline.__call__.forward_params`] or
[`TextToAudioPipeline.__call__.generate_kwargs`].
Example:
```python
>>> from transformers import pipeline | 456_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#texttoaudiopipeline | .md | >>> music_generator = pipeline(task="text-to-audio", model="facebook/musicgen-small", framework="pt")
>>> # diversify the music generation by adding randomness with a high temperature and set a maximum music length
>>> generate_kwargs = {
... "do_sample": True,
... "temperature": 0.7,
... "max_new_tokens": 35,
... } | 456_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#texttoaudiopipeline | .md | >>> outputs = music_generator("Techno music with high melodic riffs", generate_kwargs=generate_kwargs)
```
</Tip>
This pipeline can currently be loaded from [`pipeline`] using the following task identifiers: `"text-to-speech"` or
`"text-to-audio"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=text-to-speech).
- __call__
- all | 456_11_3 |
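As a follow-up to the example above, the generated waveform can be written to a WAV file; this sketch assumes `scipy` is installed:
```python
import scipy.io.wavfile
from transformers import pipeline

pipe = pipeline(model="suno/bark-small")
output = pipe("Hey it's HuggingFace on the phone!")
# output["audio"] is a numpy waveform (squeezed here in case it has a leading channel dimension)
scipy.io.wavfile.write("bark_out.wav", rate=output["sampling_rate"], data=output["audio"].squeeze())
```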
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotaudioclassificationpipeline | .md | Zero shot audio classification pipeline using `ClapModel`. This pipeline predicts the class of an audio when you
provide an audio and a set of `candidate_labels`.
<Tip warning={true}>
The default `hypothesis_template` is `"This is a sound of {}."`. Make sure you update it for your usage.
</Tip>
Example:
```python
>>> from transformers import pipeline
>>> from datasets import load_dataset | 456_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotaudioclassificationpipeline | .md | >>> dataset = load_dataset("ashraq/esc50")
>>> audio = next(iter(dataset["train"]["audio"]))["array"]
>>> classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
>>> classifier(audio, candidate_labels=["Sound of a dog", "Sound of vaccum cleaner"])
[{'score': 0.9996, 'label': 'Sound of a dog'}, {'score': 0.0004, 'label': 'Sound of vaccum cleaner'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial) This audio | 456_12_1 |