## Multimodal

Pipelines available for multimodal tasks include the following.
### DocumentQuestionAnsweringPipeline

Document Question Answering pipeline using any `AutoModelForDocumentQuestionAnswering`. The inputs/outputs are
similar to the (extractive) question answering pipeline; however, the pipeline takes an image (and optional OCR'd
words/boxes) as input instead of a text context.

Example:

```python
>>> from transformers import pipeline

>>> document_qa = pipeline(model="impira/layoutlm-document-qa")
>>> document_qa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
[{'score': 0.425, 'answer': 'us-001', 'start': 16, 'end': 16}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This document question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"document-question-answering"`.

The models that this pipeline can use are models that have been fine-tuned on a document question answering task.
See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=document-question-answering).
Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    tokenizer ([`PreTrainedTokenizer`]):
        The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
        [`PreTrainedTokenizer`].
    image_processor ([`BaseImageProcessor`]):
        The image processor that will be used by the pipeline to encode data for the model. This object inherits
        from [`BaseImageProcessor`].
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    binary_output (`bool`, *optional*, defaults to `False`):
        Flag indicating whether the pipeline's output should be serialized (i.e., pickled) or returned as raw
        output data (e.g., text).

- __call__
- all
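If OCR has already been run on the document, the recognized words and their bounding boxes can be passed along so
the pipeline can skip its own OCR step. A minimal sketch, assuming `word_boxes` accepts `(word, box)` pairs with
boxes normalized to the 0-1000 range (the words and coordinates below are illustrative, not taken from the invoice
above):

```python
>>> from transformers import pipeline

>>> document_qa = pipeline(model="impira/layoutlm-document-qa")
>>> # Hypothetical pre-computed OCR output: (word, [x0, y0, x1, y1]) pairs.
>>> word_boxes = [("INVOICE", [425, 50, 577, 81]), ("us-001", [423, 114, 497, 131])]
>>> document_qa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
...     word_boxes=word_boxes,
... )
```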
### FeatureExtractionPipeline

Feature extraction pipeline that uses no model head. This pipeline extracts the hidden states from the base
transformer, which can be used as features in downstream tasks.

Example:

```python
>>> from transformers import pipeline

>>> extractor = pipeline(model="google-bert/bert-base-uncased", task="feature-extraction")
>>> result = extractor("This is a simple test.", return_tensors=True)
>>> result.shape  # This is a tensor of shape [1, sequence_length, hidden_dimension] representing the input string.
torch.Size([1, 8, 768])
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This feature extraction pipeline can currently be loaded from [`pipeline`] using the task identifier:
`"feature-extraction"`.

All models may be used for this pipeline. See a list of all models, including community-contributed models, on
[huggingface.co/models](https://huggingface.co/models).
Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    tokenizer ([`PreTrainedTokenizer`]):
        The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
        [`PreTrainedTokenizer`].
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    tokenize_kwargs (`dict`, *optional*):
        Additional dictionary of keyword arguments passed along to the tokenizer.
    return_tensors (`bool`, *optional*):
        If `True`, returns a tensor according to the specified framework; otherwise returns a list.

- __call__
- all
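Tokenizer settings such as truncation can be forwarded through the `tokenize_kwargs` argument when the pipeline is
created. A short sketch (the specific settings here are illustrative):

```python
>>> from transformers import pipeline

>>> extractor = pipeline(
...     model="google-bert/bert-base-uncased",
...     task="feature-extraction",
...     tokenize_kwargs={"truncation": True, "max_length": 128},
... )
>>> result = extractor("A very long document that should be truncated to 128 tokens.", return_tensors=True)
```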
### ImageFeatureExtractionPipeline

Image feature extraction pipeline that uses no model head. This pipeline extracts the hidden states from the base
transformer, which can be used as features in downstream tasks.

Example:

```python
>>> from transformers import pipeline

>>> extractor = pipeline(model="google/vit-base-patch16-224", task="image-feature-extraction")
>>> result = extractor("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", return_tensors=True)
>>> result.shape  # This is a tensor of shape [1, sequence_length, hidden_dimension] representing the input image.
torch.Size([1, 197, 768])
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This image feature extraction pipeline can currently be loaded from [`pipeline`] using the task identifier:
`"image-feature-extraction"`.

All vision models may be used for this pipeline. See a list of all models, including community-contributed models,
on [huggingface.co/models](https://huggingface.co/models).
Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    image_processor ([`BaseImageProcessor`]):
        The image processor that will be used by the pipeline to encode data for the model. This object inherits
        from [`BaseImageProcessor`].
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    binary_output (`bool`, *optional*, defaults to `False`):
        Flag indicating whether the pipeline's output should be serialized (i.e., pickled) or returned as raw
        output data (e.g., text).
    image_processor_kwargs (`dict`, *optional*):
        Additional dictionary of keyword arguments passed along to the image processor, e.g.
        `{"size": {"height": 100, "width": 100}}`.
    pool (`bool`, *optional*, defaults to `False`):
        Whether or not to return the pooled output. If `False`, the model will return the raw hidden states.

- __call__
- all
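For models that define a pooling layer, the `pool` flag returns a single pooled vector per image instead of one
vector per patch. A sketch, assuming `pool` can be passed at call time and that the underlying model has a pooler
(the resulting shape shown is for the ViT base model above):

```python
>>> from transformers import pipeline

>>> extractor = pipeline(model="google/vit-base-patch16-224", task="image-feature-extraction")
>>> pooled = extractor(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     return_tensors=True,
...     pool=True,
... )
>>> pooled.shape  # One pooled vector per image rather than one vector per patch.
torch.Size([1, 768])
```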
### ImageToTextPipeline

Image To Text pipeline using an `AutoModelForVision2Seq`. This pipeline predicts a caption for a given image.

Example:

```python
>>> from transformers import pipeline

>>> captioner = pipeline(model="ydshieh/vit-gpt2-coco-en")
>>> captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'generated_text': 'two birds are standing next to each other '}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This image to text pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-to-text"`.

See the list of available models on
[huggingface.co/models](https://huggingface.co/models?pipeline_tag=image-to-text).
Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    tokenizer ([`PreTrainedTokenizer`]):
        The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
        [`PreTrainedTokenizer`].
    image_processor ([`BaseImageProcessor`]):
        The image processor that will be used by the pipeline to encode data for the model. This object inherits
        from [`BaseImageProcessor`].
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    binary_output (`bool`, *optional*, defaults to `False`):
        Flag indicating whether the pipeline's output should be serialized (i.e., pickled) or returned as raw
        output data (e.g., text).

- __call__
- all
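Caption length can be capped at call time through a generation parameter. A short sketch, assuming `max_new_tokens`
is forwarded to the underlying generation call (output omitted; same model as above):

```python
>>> from transformers import pipeline

>>> captioner = pipeline(model="ydshieh/vit-gpt2-coco-en")
>>> captioner(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     max_new_tokens=10,
... )
```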
### ImageTextToTextPipeline

Image-text-to-text pipeline using an `AutoModelForImageTextToText`. This pipeline generates text given an image and
text. When the underlying model is a conversational model, it can also accept one or more chats, in which case the
pipeline will operate in chat mode and will continue the chat(s) by adding its response(s). Each chat takes the form
of a list of dicts, where each dict contains "role" and "content" keys.

Example:

```python
>>> from transformers import pipeline

>>> pipe = pipeline(task="image-text-to-text", model="Salesforce/blip-image-captioning-base")
>>> pipe("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", text="A photo of")
[{'generated_text': 'a photo of two birds'}]
```

```python
>>> from transformers import pipeline

>>> pipe = pipeline("image-text-to-text", model="llava-hf/llava-interleave-qwen-0.5b-hf")
>>> messages = [
...     {
...         "role": "user",
...         "content": [
...             {
...                 "type": "image",
...                 "url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
...             },
...             {"type": "text", "text": "Describe this image."},
...         ],
...     },
...     {
...         "role": "assistant",
...         "content": [
...             {"type": "text", "text": "There is a dog and"},
...         ],
...     },
... ]
>>> pipe(text=messages, max_new_tokens=20, return_full_text=False)
[{'input_text': [{'role': 'user',
   'content': [{'type': 'image',
     'url': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
    {'type': 'text', 'text': 'Describe this image.'}]},
  {'role': 'assistant',
   'content': [{'type': 'text', 'text': 'There is a dog and'}]}],
 'generated_text': ' a person in the image. The dog is sitting on the sand, and the person is sitting on'}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This image-text to text pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-text-to-text"`.

See the list of available models on
[huggingface.co/models](https://huggingface.co/models?pipeline_tag=image-text-to-text).
Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    processor ([`ProcessorMixin`]):
        The processor that will be used by the pipeline to encode data for the model. This object inherits from
        [`ProcessorMixin`]. A processor is a composite object that might contain a `tokenizer`, a
        `feature_extractor`, and an `image_processor`.
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    binary_output (`bool`, *optional*, defaults to `False`):
        Flag indicating whether the pipeline's output should be serialized (i.e., pickled) or returned as raw
        output data (e.g., text).

- __call__
- all
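Since the pipeline accepts one or more chats, several conversations can be batched into a single call by passing a
list of chats. A sketch (outputs omitted; `chat_a` and `chat_b` are hypothetical message lists shaped like
`messages` in the example above):

```python
>>> from transformers import pipeline

>>> pipe = pipeline("image-text-to-text", model="llava-hf/llava-interleave-qwen-0.5b-hf")
>>> # chat_a and chat_b are each a list of {"role": ..., "content": ...} dicts.
>>> pipe(text=[chat_a, chat_b], max_new_tokens=20)
```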
### MaskGenerationPipeline

Automatic mask generation for images using `SamForMaskGeneration`. This pipeline predicts binary masks for a given
image. It is a `ChunkPipeline` because you can separate the points into mini-batches in order to avoid OOM issues.
Use the `points_per_batch` argument to control the number of points that will be processed at the same time. The
default is `64`.

The pipeline works in 3 steps:

1. `preprocess`: A grid of 1024 evenly separated points is generated along with bounding boxes and point labels.
   For more details on how the points and bounding boxes are created, check the `_generate_crop_boxes` function. The
   image is also preprocessed using the `image_processor`. This function `yields` a minibatch of `points_per_batch`.
2. `forward`: feeds the outputs of `preprocess` to the model. The image embedding is computed only once. It calls
   `self.model.get_image_embeddings` and makes sure that the gradients are not computed, and that the tensors and
   models are on the same device.
3. `postprocess`: The most important part of the automatic mask generation happens here. Three steps are involved:
    - `image_processor.postprocess_masks` (run on each minibatch loop): takes in the raw output masks, resizes them
      according to the image size, and transforms them into binary masks.
    - `image_processor.filter_masks` (run on each minibatch loop): uses both `pred_iou_thresh` and
      `stability_scores`, and also applies a variety of filters based on non-maximum suppression to remove bad
      masks.
    - `image_processor.postprocess_masks_for_amg`: applies non-maximum suppression (NMS) on the masks to keep only
      the relevant ones.

Example:

```python
>>> from transformers import pipeline

>>> generator = pipeline(model="facebook/sam-vit-base", task="mask-generation")
>>> outputs = generator(
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
... )

>>> outputs = generator(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", points_per_batch=128
... )
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This segmentation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"mask-generation"`.

See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=mask-generation).

Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    image_processor ([`BaseImageProcessor`]):
        The image processor that will be used by the pipeline to encode data for the model. This object inherits
        from [`BaseImageProcessor`].
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    binary_output (`bool`, *optional*, defaults to `False`):
        Flag indicating whether the pipeline's output should be serialized (i.e., pickled) or returned as raw
        output data (e.g., text).
    points_per_batch (`int`, *optional*, defaults to 64):
        Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU
        memory.
    output_bboxes_mask (`bool`, *optional*, defaults to `False`):
        Whether or not to output the bounding box predictions.
    output_rle_masks (`bool`, *optional*, defaults to `False`):
        Whether or not to output the masks in `RLE` format.

- __call__
- all
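The pipeline returns the predicted masks together with their quality scores. A minimal sketch of inspecting the
result, assuming the output is a dict with `"masks"` and `"scores"` keys as in the SAM examples (outputs omitted):

```python
>>> from transformers import pipeline

>>> generator = pipeline(model="facebook/sam-vit-base", task="mask-generation")
>>> outputs = generator("http://images.cocodataset.org/val2017/000000039769.jpg", points_per_batch=64)
>>> len(outputs["masks"])  # One binary mask per detected region.
>>> outputs["scores"][0]  # IoU-based quality score for the first mask.
```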
### VisualQuestionAnsweringPipeline

Visual Question Answering pipeline using an `AutoModelForVisualQuestionAnswering`. This pipeline is currently only
available in PyTorch.

Example:

```python
>>> from transformers import pipeline

>>> oracle = pipeline(model="dandelin/vilt-b32-finetuned-vqa")
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/lena.png"
>>> oracle(question="What is she wearing ?", image=image_url)
[{'score': 0.948, 'answer': 'hat'}, {'score': 0.009, 'answer': 'fedora'}, {'score': 0.003, 'answer': 'clothes'}, {'score': 0.003, 'answer': 'sun hat'}, {'score': 0.002, 'answer': 'nothing'}]

>>> oracle(question="What is she wearing ?", image=image_url, top_k=1)
[{'score': 0.948, 'answer': 'hat'}]

>>> oracle(question="Is this a person ?", image=image_url, top_k=1)
[{'score': 0.993, 'answer': 'yes'}]

>>> oracle(question="Is this a man ?", image=image_url, top_k=1)
[{'score': 0.996, 'answer': 'no'}]
```

Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).

This visual question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifiers: `"visual-question-answering"`, `"vqa"`.

The models that this pipeline can use are models that have been fine-tuned on a visual question answering task. See
the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=visual-question-answering).
Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    tokenizer ([`PreTrainedTokenizer`]):
        The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
        [`PreTrainedTokenizer`].
    image_processor ([`BaseImageProcessor`]):
        The image processor that will be used by the pipeline to encode data for the model. This object inherits
        from [`BaseImageProcessor`].
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    binary_output (`bool`, *optional*, defaults to `False`):
        Flag indicating whether the pipeline's output should be serialized (i.e., pickled) or returned as raw
        output data (e.g., text).

- __call__
- all
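Multiple image/question pairs can be processed in one call by passing a list of inputs. A sketch, assuming the
pipeline's `__call__` accepts `{"image": ..., "question": ...}` entries (output omitted):

```python
>>> from transformers import pipeline

>>> oracle = pipeline(model="dandelin/vilt-b32-finetuned-vqa")
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/lena.png"
>>> oracle(
...     [
...         {"image": image_url, "question": "What is she wearing ?"},
...         {"image": image_url, "question": "Is this a person ?"},
...     ],
...     top_k=1,
... )
```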
## Parent class: `Pipeline`

The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across
different pipelines.

Base class implementing pipelined operations. The pipeline workflow is defined as a sequence of the following
operations:

Input -> Tokenization -> Model Inference -> Post-Processing (task dependent) -> Output

The pipeline supports running on CPU or GPU through the `device` argument (see below).

Some pipelines, such as [`FeatureExtractionPipeline`] (`'feature-extraction'`), output large tensor objects as
nested lists. In order to avoid dumping such large structures as textual data, we provide the `binary_output`
constructor argument. If set to `True`, the output will be stored in the pickle format.

Arguments:
    model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
        The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
        [`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
    tokenizer ([`PreTrainedTokenizer`]):
        The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
        [`PreTrainedTokenizer`].
    feature_extractor ([`SequenceFeatureExtractor`]):
        The feature extractor that will be used by the pipeline to encode data for the model. This object inherits
        from [`SequenceFeatureExtractor`].
    image_processor ([`BaseImageProcessor`]):
        The image processor that will be used by the pipeline to encode data for the model. This object inherits
        from [`BaseImageProcessor`].
    processor ([`ProcessorMixin`]):
        The processor that will be used by the pipeline to encode data for the model. This object inherits from
        [`ProcessorMixin`]. A processor is a composite object that might contain a `tokenizer`, a
        `feature_extractor`, and an `image_processor`.
    modelcard (`str` or [`ModelCard`], *optional*):
        Model card attributed to the model for this pipeline.
    framework (`str`, *optional*):
        The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
        installed. If no framework is specified, it will default to the one currently installed. If no framework is
        specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch
        if no model is provided.
    task (`str`, defaults to `""`):
        A task identifier for the pipeline.
    num_workers (`int`, *optional*, defaults to 8):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
        workers to use.
    batch_size (`int`, *optional*, defaults to 1):
        When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
        batch to use. For inference, batching is not always beneficial; please read [Batching with
        pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
    args_parser ([`~pipelines.ArgumentHandler`], *optional*):
        Reference to the object in charge of parsing supplied pipeline parameters.
    device (`int`, *optional*, defaults to -1):
        Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model
        on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
    torch_dtype (`str` or `torch.dtype`, *optional*):
        Sent directly as `model_kwargs` (just a simpler shortcut) to set the precision to use for this model
        (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
    binary_output (`bool`, *optional*, defaults to `False`):
        Flag indicating whether the pipeline's output should be serialized (i.e., pickled) or returned as raw
        output data (e.g., text).
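The `device` and `torch_dtype` arguments apply to every pipeline. A minimal sketch of placing a pipeline on the
first CUDA device in half precision (the task and input here are illustrative; output omitted):

```python
>>> import torch
>>> from transformers import pipeline

>>> pipe = pipeline("text-classification", device=0, torch_dtype=torch.float16)
>>> pipe("This movie was great!")
```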
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not
be rendered properly in your Markdown viewer.
-->
# Keras callbacks

When training a Transformers model with Keras, there are some library-specific callbacks available to automate
common tasks:

## KerasMetricCallback

## PushToHubCallback
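A minimal sketch of how these callbacks are typically attached to `model.fit`, assuming a compiled Keras `model`, a
`tokenizer`, and `tf.data` datasets named `tf_train_dataset`/`tf_validation_dataset` already exist (those names are
illustrative):

```python
import numpy as np
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

# Metric function for KerasMetricCallback: receives (predictions, labels)
# computed on eval_dataset at the end of each epoch and returns named metrics.
def compute_accuracy(eval_predictions):
    predictions, labels = eval_predictions
    return {"accuracy": float((np.argmax(predictions, axis=-1) == labels).mean())}

metric_callback = KerasMetricCallback(metric_fn=compute_accuracy, eval_dataset=tf_validation_dataset)
# PushToHubCallback uploads checkpoints (and the tokenizer) to the Hub during training.
push_callback = PushToHubCallback(output_dir="./model_output", tokenizer=tokenizer)

model.fit(
    tf_train_dataset,
    validation_data=tf_validation_dataset,
    epochs=3,
    callbacks=[metric_callback, push_callback],
)
```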
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not
be rendered properly in your Markdown viewer.
-->
# Model outputs

All models have outputs that are instances of subclasses of [`~utils.ModelOutput`]. Those are data structures
containing all the information returned by the model, but they can also be used as tuples or dictionaries.

Let's see how this looks in an example:

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(**inputs, labels=labels)
```

The `outputs` object is a [`~modeling_outputs.SequenceClassifierOutput`]. As we can see in the documentation of that
class below, it has an optional `loss`, a `logits`, an optional `hidden_states`, and an optional `attentions`
attribute. Here we have the `loss` since we passed along `labels`, but we don't have `hidden_states` and
`attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.

<Tip>

When passing `output_hidden_states=True`, you may expect `outputs.hidden_states[-1]` to match
`outputs.last_hidden_state` exactly. However, this is not always the case: some models apply normalization or
subsequent processing to the last hidden state when it is returned.

</Tip>

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
will get `None`. Here, for instance, `outputs.loss` is the loss computed by the model, and `outputs.attentions` is
`None`.

When considering our `outputs` object as a tuple, it only considers the attributes that don't have `None` values.
Here, for instance, it has two elements, `loss` then `logits`, so

```python
outputs[:2]
```

will return the tuple `(outputs.loss, outputs.logits)`.

When considering our `outputs` object as a dictionary, it only considers the attributes that don't have `None`
values. Here, for instance, it has two keys that are `loss` and `logits`.

We document here the generic model outputs that are used by more than one model type. Specific output types are
documented on their corresponding model page.
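The tuple, dictionary, and attribute views all expose the same non-`None` values. A short sketch continuing the
example above:

```python
# Attribute, key, and index access return the same underlying tensors.
assert outputs["loss"] is outputs.loss
assert outputs[0] is outputs.loss
print(list(outputs.keys()))  # ['loss', 'logits']: only non-None attributes appear
```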