### ZeroShotAudioClassificationPipeline

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). This audio
classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-audio-classification"`. See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-audio-classification).
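For orientation, here is a minimal usage sketch; the `laion/clap-htsat-unfused` checkpoint and the ESC-50 dataset
are illustrative assumptions, not prescribed by this section:

```python
>>> from datasets import load_dataset
>>> from transformers import pipeline

>>> # Grab one raw waveform from an audio dataset (any array of audio samples works)
>>> dataset = load_dataset("ashraq/esc50")
>>> audio = next(iter(dataset["train"]["audio"]))["array"]

>>> classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
>>> classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
```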
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
feature_extractor ([`SequenceFeatureExtractor`]):
The feature extractor that will be used by the pipeline to encode data for the model. This object inherits
from [`SequenceFeatureExtractor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
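The `num_workers` and `batch_size` arguments above only take effect when the pipeline is fed a dataset and wraps
it in a *DataLoader*. A hedged sketch of that pattern (the `text-classification` task, the IMDB dataset, and the
parameter values are illustrative assumptions):

```python
>>> import datasets
>>> from transformers import pipeline
>>> from transformers.pipelines.pt_utils import KeyDataset

>>> # Streaming a dataset through the pipeline triggers the DataLoader path,
>>> # so batch_size and num_workers apply
>>> pipe = pipeline("text-classification", device=0, batch_size=8, num_workers=4)
>>> dataset = datasets.load_dataset("imdb", split="test")
>>> for out in pipe(KeyDataset(dataset, "text")):
...     print(out)
```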
## Computer vision

Pipelines available for computer vision tasks include the following.
### DepthEstimationPipeline

Depth estimation pipeline using any `AutoModelForDepthEstimation`. This pipeline predicts the depth of an image.

Example:

```python
>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf")
>>> output = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> # This is a tensor with the values being the depth expressed in meters for each pixel
>>> output["predicted_depth"].shape
torch.Size([1, 384, 384])
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This depth estimation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"depth-estimation"`.

See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=depth-estimation).

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
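Alongside `predicted_depth`, the output of this pipeline typically also exposes a `depth` key holding a
`PIL.Image` rendering of the depth map; a hedged continuation of the example above (the output filename is
arbitrary):

```python
>>> depth_image = output["depth"]  # PIL.Image visualization of the depth map
>>> depth_image.save("depth.png")
```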
### ImageClassificationPipeline

Image classification pipeline using any `AutoModelForImageClassification`. This pipeline predicts the class of an
image.

Example:

```python
>>> from transformers import pipeline

>>> classifier = pipeline(model="microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> classifier("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.442, 'label': 'macaw'}, {'score': 0.088, 'label': 'popinjay'}, {'score': 0.075, 'label': 'parrot'}, {'score': 0.073, 'label': 'parodist, lampooner'}, {'score': 0.046, 'label': 'poll, poll_parrot'}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This image classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-classification"`.

See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=image-classification).

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).
function_to_apply (`str`, *optional*, defaults to `"default"`):
The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:
- `"default"`: if the model has a single label, will apply the sigmoid function on the output. If the model
has several labels, will apply the softmax function on the output.
- `"sigmoid"`: Applies the sigmoid function on the output.
- `"softmax"`: Applies the softmax function on the output.
- `"none"`: Does not apply any function on the output.

- __call__
- all
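A hedged sketch of `function_to_apply` at call time, reusing the classifier from the example above (the choice of
`"sigmoid"` is purely illustrative):

```python
>>> # Force sigmoid scores instead of the label-count-dependent default
>>> classifier(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     function_to_apply="sigmoid",
... )
```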
### ImageSegmentationPipeline

Image segmentation pipeline using any `AutoModelForXXXSegmentation`. This pipeline predicts masks of objects and
their classes.

Example:

```python
>>> from transformers import pipeline

>>> segmenter = pipeline(model="facebook/detr-resnet-50-panoptic")
>>> segments = segmenter("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
>>> len(segments)
2

>>> segments[0]["label"]
'bird'

>>> segments[1]["label"]
'bird'

>>> type(segments[0]["mask"])  # This is a black and white mask showing where the bird is on the original image.
<class 'PIL.Image.Image'>

>>> segments[0]["mask"].size
(768, 512)
```

This image segmentation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-segmentation"`.

See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=image-segmentation).

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
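Call-time knobs such as `subtask` and `threshold` can steer the segmentation; a hedged sketch reusing the
segmenter above (whether a given `subtask` is available depends on the checkpoint):

```python
>>> segments = segmenter(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     subtask="panoptic",  # or "semantic"/"instance", if the model supports it
...     threshold=0.9,  # drop low-confidence masks
... )
```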
### ImageToImagePipeline

Image-to-image pipeline using any `AutoModelForImageToImage`. This pipeline generates an image based on a previous
image input.

Example:

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import pipeline

>>> upscaler = pipeline("image-to-image", model="caidas/swin2SR-classical-sr-x2-64")
>>> img = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> img = img.resize((64, 64))
>>> upscaled_img = upscaler(img)
>>> img.size
(64, 64)

>>> upscaled_img.size
(144, 144)
```

This image-to-image pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-to-image"`.

See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=image-to-image).

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
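Since the pipeline returns a `PIL.Image` (or a list of them), standard PIL methods apply directly; a short
continuation of the example above (the output filename is arbitrary):

```python
>>> upscaled_img.save("upscaled_parrots.png")
```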
### ObjectDetectionPipeline

Object detection pipeline using any `AutoModelForObjectDetection`. This pipeline predicts bounding boxes of objects
and their classes.

Example:

```python
>>> from transformers import pipeline

>>> detector = pipeline(model="facebook/detr-resnet-50")
>>> detector("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.997, 'label': 'bird', 'box': {'xmin': 69, 'ymin': 171, 'xmax': 396, 'ymax': 507}}, {'score': 0.999, 'label': 'bird', 'box': {'xmin': 398, 'ymin': 105, 'xmax': 767, 'ymax': 507}}]

>>> # x, y are expressed relative to the top-left corner.
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This object detection pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"object-detection"`.

See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=object-detection).

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
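A hedged sketch of the call-time `threshold` argument, reusing the detector above (the cutoff value is
illustrative):

```python
>>> # Keep only detections the model is highly confident about
>>> detector(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     threshold=0.9,
... )
```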
### VideoClassificationPipeline

Video classification pipeline using any `AutoModelForVideoClassification`. This pipeline predicts the class of a
video.

This video classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"video-classification"`.

See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=video-classification).
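A hedged usage sketch (the `MCG-NJU/videomae-base-finetuned-kinetics` checkpoint and the local file name are
assumptions, not taken from this section):

```python
>>> from transformers import pipeline

>>> classifier = pipeline(task="video-classification", model="MCG-NJU/videomae-base-finetuned-kinetics")
>>> # "video.mp4" is a placeholder path; a URL to a video also works
>>> classifier("video.mp4", top_k=2)
```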
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
### ZeroShotImageClassificationPipeline

Zero-shot image classification pipeline using `CLIPModel`. This pipeline predicts the class of an image when you
provide an image and a set of `candidate_labels`.

Example:

```python
>>> from transformers import pipeline

>>> classifier = pipeline(model="google/siglip-so400m-patch14-384")
>>> classifier(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     candidate_labels=["animals", "humans", "landscape"],
... )
[{'score': 0.965, 'label': 'animals'}, {'score': 0.03, 'label': 'humans'}, {'score': 0.005, 'label': 'landscape'}]

>>> classifier(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     candidate_labels=["black and white", "photorealist", "painting"],
... )
[{'score': 0.996, 'label': 'black and white'}, {'score': 0.003, 'label': 'photorealist'}, {'score': 0.0, 'label': 'painting'}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This image classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-image-classification"`.

See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-image-classification).

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
### ZeroShotObjectDetectionPipeline

Zero-shot object detection pipeline using `OwlViTForObjectDetection`. This pipeline predicts bounding boxes of
objects when you provide an image and a set of `candidate_labels`.

Example:

```python
>>> from transformers import pipeline

>>> detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
>>> detector(
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     candidate_labels=["cat", "couch"],
... )
[{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.254, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}]

>>> detector(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     candidate_labels=["head", "bird"],
... )
[{'score': 0.119, 'label': 'bird', 'box': {'xmin': 71, 'ymin': 170, 'xmax': 410, 'ymax': 508}}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This object detection pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-object-detection"`.

See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-object-detection).

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
image_processor ([`BaseImageProcessor`]):
The image processor that will be used by the pipeline to encode data for the model. This object inherits
from [`BaseImageProcessor`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).

- __call__
- all
## Natural language processing

Pipelines available for natural language processing tasks include the following.
### FillMaskPipeline

Masked language modeling prediction pipeline using any `ModelWithLMHead`. See the [masked language modeling
examples](../task_summary#masked-language-modeling) for more information.

Example:

```python
>>> from transformers import pipeline

>>> fill_masker = pipeline(model="google-bert/bert-base-uncased")
>>> fill_masker("This is a simple [MASK].")
[{'score': 0.042, 'token': 3291, 'token_str': 'problem', 'sequence': 'this is a simple problem.'}, {'score': 0.031, 'token': 3160, 'token_str': 'question', 'sequence': 'this is a simple question.'}, {'score': 0.03, 'token': 8522, 'token_str': 'equation', 'sequence': 'this is a simple equation.'}, {'score': 0.027, 'token': 2028, 'token_str': 'one', 'sequence': 'this is a simple one.'}, {'score': 0.024, 'token': 3627, 'token_str': 'rule', 'sequence': 'this is a simple rule.'}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This mask filling pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"fill-mask"`.

The models that this pipeline can use are models that have been trained with a masked language modeling objective,
which includes the bi-directional models in the library. See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=fill-mask).

<Tip>

This pipeline only works for inputs with exactly one token masked. Experimental: we added support for multiple
masks. The returned values are raw model outputs and correspond to disjoint probabilities where one might expect
joint probabilities (see the [discussion](https://github.com/huggingface/transformers/pull/10222)).

</Tip>

<Tip>

This pipeline now supports `tokenizer_kwargs`. For example, try:

```python
>>> from transformers import pipeline

>>> fill_masker = pipeline(model="google-bert/bert-base-uncased")
>>> tokenizer_kwargs = {"truncation": True}
>>> fill_masker(
...     "This is a simple [MASK]. " + "...with a large amount of repeated text appended. " * 100,
...     tokenizer_kwargs=tokenizer_kwargs,
... )
```

</Tip>

Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
if no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. Batching is not always beneficial for inference; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
on the associated CUDA device ID. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the pipeline's output should be in a serialized format (i.e., pickle) or as raw
output data (e.g., text).
top_k (`int`, *optional*, defaults to 5):
The number of predictions to return.
targets (`str` or `List[str]`, *optional*):
When passed, the model will limit the scores to the passed targets instead of looking up in the whole
vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting
token will be used (with a warning, and that might be slower).
tokenizer_kwargs (`dict`, *optional*):
Additional dictionary of keyword arguments passed along to the tokenizer.

- __call__
- all
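A hedged sketch of the `targets` parameter described above, reusing the fill-masker from the first example (the
candidate words are arbitrary):

```python
>>> # Score only a hand-picked set of candidate fills
>>> fill_masker("This is a simple [MASK].", targets=["problem", "question"])
```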
### QuestionAnsweringPipeline

Question answering pipeline using any `ModelForQuestionAnswering`. See the [question answering
examples](../task_summary#question-answering) for more information.

Example:

```python
>>> from transformers import pipeline

>>> oracle = pipeline(model="deepset/roberta-base-squad2")
>>> oracle(question="Where do I live?", context="My name is Wolfgang and I live in Berlin")
{'score': 0.9191, 'start': 34, 'end': 40, 'answer': 'Berlin'}
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This question answering pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"question-answering"`.