## QuestionAnsweringPipeline

Question Answering pipeline using any `ModelForQuestionAnswering`. This question answering pipeline can currently
be loaded from [`pipeline`] using the following task identifier: `"question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a question answering task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=question-answering).
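A minimal usage sketch (the question and context below are illustrative; with no model specified, [`pipeline`]
selects a default question answering checkpoint):
```python
>>> from transformers import pipeline

>>> qa = pipeline("question-answering")
>>> # The pipeline extracts the answer span from the given context.
>>> qa(
...     question="What does the pipeline extract?",
...     context="The question answering pipeline extracts an answer span from the provided context.",
... )  # returns a dict with 'score', 'start', 'end' and 'answer' keys
```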
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch if
no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. For inference, batching is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a non-negative value will run the
model on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to specify the precision to use for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
raw output data, e.g. text.
- __call__
- all
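The `device` and `torch_dtype` arguments above are common to all pipelines. A short sketch of how they are
typically combined (assuming a CUDA GPU is available; `device=-1`, the default, would keep the model on CPU):
```python
import torch
from transformers import pipeline

# Place the default question answering model on the first CUDA device in half precision.
qa = pipeline("question-answering", device=0, torch_dtype=torch.float16)
```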
## SummarizationPipeline

Summarize news articles and other documents.
This summarizing pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"summarization"`.
The models that this pipeline can use are models that have been fine-tuned on a summarization task, which
currently includes '*bart-large-cnn*', '*google-t5/t5-small*', '*google-t5/t5-base*', '*google-t5/t5-large*',
'*google-t5/t5-3b*' and '*google-t5/t5-11b*'. See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=summarization). For a list of available parameters,
see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate).
Usage:
```python
from transformers import pipeline

# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)

# use t5 in tf
summarizer = pipeline("summarization", model="google-t5/t5-base", tokenizer="google-t5/t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```
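The `num_workers` and `batch_size` arguments below only take effect when the pipeline iterates over a dataset. A
hedged sketch of that pattern (assuming the `datasets` library is installed; the toy dataset is illustrative):
```python
from datasets import Dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

summarizer = pipeline("summarization")

# A toy dataset with a "text" column standing in for a real corpus.
dataset = Dataset.from_dict({"text": ["An apple a day, keeps the doctor away."] * 8})

# Streaming a dataset through the pipeline lets it batch inputs internally.
for summary in summarizer(KeyDataset(dataset, "text"), batch_size=4, min_length=5, max_length=20):
    print(summary)
```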
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch if
no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. For inference, batching is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a non-negative value will run the
model on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to specify the precision to use for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
raw output data, e.g. text.
- __call__
- all
## TableQuestionAnsweringPipeline

Table Question Answering pipeline using a `ModelForTableQuestionAnswering`. This pipeline is only available in
PyTorch.
Example:
```python
>>> from transformers import pipeline

>>> oracle = pipeline(model="google/tapas-base-finetuned-wtq")
>>> table = {
...     "Repository": ["Transformers", "Datasets", "Tokenizers"],
...     "Stars": ["36542", "4512", "3934"],
...     "Contributors": ["651", "77", "34"],
...     "Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
... }
>>> oracle(query="How many stars does the transformers repository have?", table=table)
{'answer': 'AVERAGE > 36542', 'coordinates': [(0, 1)], 'cells': ['36542'], 'aggregator': 'AVERAGE'}
```
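The `table` argument also accepts a `pandas.DataFrame` directly (a small sketch, assuming `pandas` is installed and
reusing the `oracle` pipeline above):
```python
>>> import pandas as pd

>>> df = pd.DataFrame({"Repository": ["Transformers", "Datasets"], "Stars": ["36542", "4512"]})
>>> oracle(query="How many stars does the datasets repository have?", table=df)
```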
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).
This tabular question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"table-question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a tabular question answering task.
See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=table-question-answering).
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch if
no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. For inference, batching is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a non-negative value will run the
model on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to specify the precision to use for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
raw output data, e.g. text.
- __call__
## TextClassificationPipeline

Text classification pipeline using any `ModelForSequenceClassification`. See the [sequence classification
examples](../task_summary#sequence-classification) for more information.
Example:
```python
>>> from transformers import pipeline

>>> classifier = pipeline(model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier("This movie is disgustingly good !")
[{'label': 'POSITIVE', 'score': 1.0}]

>>> classifier("Director tried too much.")
[{'label': 'NEGATIVE', 'score': 0.996}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).
This text classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"sentiment-analysis"` (for classifying sequences according to positive or negative sentiments).
If multiple classification labels are available (`model.config.num_labels >= 2`), the pipeline will run a softmax
over the results. If there is a single label, the pipeline will run a sigmoid over the result. In case of
regression tasks (`model.config.problem_type == "regression"`), no function is applied to the output.
The models that this pipeline can use are models that have been fine-tuned on a sequence classification task. See
the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=text-classification).
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch if
no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. For inference, batching is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a non-negative value will run the
model on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to specify the precision to use for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
raw output data, e.g. text.
return_all_scores (`bool`, *optional*, defaults to `False`):
Whether to return all prediction scores or just the one of the predicted class.
function_to_apply (`str`, *optional*, defaults to `"default"`):
The function to apply to the model outputs in order to retrieve the scores (see the sketch after this list).
Accepts four different values:
- `"default"`: if the model has a single label, applies the sigmoid function to the output. If the model
has several labels, applies the softmax function to the output. In case of regression tasks, applies no
function to the output.
- `"sigmoid"`: Applies the sigmoid function to the output.
- `"softmax"`: Applies the softmax function to the output.
- `"none"`: Does not apply any function to the output.
- __call__
- all
## TextGenerationPipeline

Language generation pipeline using any `ModelWithLMHead`. This pipeline predicts the words that will follow a
specified text prompt. When the underlying model is a conversational model, it can also accept one or more chats,
in which case the pipeline will operate in chat mode and will continue the chat(s) by adding its response(s).
Each chat takes the form of a list of dicts, where each dict contains "role" and "content" keys.
Examples:
```python
>>> from transformers import pipeline

>>> generator = pipeline(model="openai-community/gpt2")
>>> generator("I can't believe you did such a ", do_sample=False)
[{'generated_text': "I can't believe you did such a icky thing to me. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I"}]

>>> # With these parameters, the call returns several suggestions and only the newly generated text,
>>> # which is handy for prompt-completion suggestions.
>>> outputs = generator("My tart needs some", num_return_sequences=4, return_full_text=False)
```
```python
>>> from transformers import pipeline

>>> generator = pipeline(model="HuggingFaceH4/zephyr-7b-beta")
>>> # Zephyr-beta is a conversational model, so let's pass it a chat instead of a single string
>>> generator([{"role": "user", "content": "What is the capital of France? Answer in one word."}], do_sample=False, max_new_tokens=2)
[{'generated_text': [{'role': 'user', 'content': 'What is the capital of France? Answer in one word.'}, {'role': 'assistant', 'content': 'Paris'}]}]
```
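Because chat mode returns the full message list, you can continue a conversation by appending to it and calling the
pipeline again (a hedged sketch reusing the generator above):
```python
>>> chat = [{"role": "user", "content": "What is the capital of France? Answer in one word."}]
>>> chat = generator(chat, do_sample=False, max_new_tokens=2)[0]["generated_text"]
>>> chat.append({"role": "user", "content": "And of Germany?"})
>>> generator(chat, do_sample=False, max_new_tokens=3)
```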
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass text
generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about
text generation parameters in [Text generation strategies](../generation_strategies) and [Text
generation](text_generation).
This language generation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"text-generation"`.
The models that this pipeline can use are models that have been trained with an autoregressive language modeling
objective. See the list of available [text completion models](https://huggingface.co/models?filter=text-generation)
and the list of [conversational models](https://huggingface.co/models?other=conversational) on
[huggingface.co/models](https://huggingface.co/models).
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch if
no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. For inference, batching is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a non-negative value will run the
model on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to specify the precision to use for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
raw output data, e.g. text.
- __call__
- all
## Text2TextGenerationPipeline

Pipeline for text to text generation using seq2seq models.
Example:
```python
>>> from transformers import pipeline

>>> generator = pipeline(model="mrm8488/t5-base-finetuned-question-generation-ap")
>>> generator(
...     "answer: Manuel context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
... )
[{'generated_text': 'question: Who created the RuPERTa-base?'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass text
generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about
text generation parameters in [Text generation strategies](../generation_strategies) and [Text
generation](text_generation).
This Text2TextGenerationPipeline pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"text2text-generation"`.
The models that this pipeline can use are models that have been fine-tuned on a text-to-text generation task. See
the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=text2text-generation). For a list of available
parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate).
Usage:
```python
from transformers import pipeline

text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
```
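Since this pipeline forwards generation parameters to the underlying model's `generate` method, decoding can be
controlled directly (the parameter values below are illustrative, reusing `text2text_generator` from the usage
example above):
```python
text2text_generator(
    "question: What is 42 ? context: 42 is the answer to life, the universe and everything",
    max_new_tokens=20,  # cap the length of the generated text
    num_beams=4,  # use beam search instead of greedy decoding
    do_sample=False,
)
```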
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch if
no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. For inference, batching is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a non-negative value will run the
model on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to specify the precision to use for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
raw output data, e.g. text.
- __call__
- all
## TokenClassificationPipeline

Named Entity Recognition pipeline using any `ModelForTokenClassification`. See the [named entity recognition
examples](../task_summary#named-entity-recognition) for more information.
Example:
```python
>>> from transformers import pipeline

>>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
>>> sentence = "Je m'appelle jean-baptiste et je vis à montréal"
>>> tokens = token_classifier(sentence)
>>> tokens
[{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}]

>>> token = tokens[0]
>>> # Start and end provide an easy way to highlight words in the original text.
>>> sentence[token["start"] : token["end"]]
' jean-baptiste'

>>> # Some models use the same idea to do part of speech.
>>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple")
>>> syntaxer("My name is Sarah and I live in London")
[{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).
This token recognition pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"ner"` (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).
The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=token-classification).
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed. If no framework is specified, it will default to the one currently installed. If no framework is
specified and both frameworks are installed, it will default to the framework of the `model`, or to PyTorch if
no model is provided.
task (`str`, defaults to `""`):
A task identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the number of
workers to use.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (when passing a dataset, on GPU for a PyTorch model), the size of the
batch to use. For inference, batching is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a non-negative value will run the
model on the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to specify the precision to use for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
raw output data, e.g. text.
ignore_labels (`List[str]`, defaults to `["O"]`):
A list of labels to ignore.
grouped_entities (`bool`, *optional*, defaults to `False`):
DEPRECATED, use `aggregation_strategy` instead. Whether or not to group the tokens corresponding to the
same entity together in the predictions.
stride (`int`, *optional*):
If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size
`model_max_length`. Works only with fast tokenizers and `aggregation_strategy` different from `NONE`. The
value of this argument defines the number of overlapping tokens between chunks. In other words, the model
will shift forward by `tokenizer.model_max_length - stride` tokens each step.
aggregation_strategy (`str`, *optional*, defaults to `"none"`):
The strategy to fuse (or not) tokens based on the model prediction; a short usage sketch follows this
section.
- "none": Will not do any aggregation and simply return the raw results from the model.
- "simple": Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C,
I-TAG), (D, B-TAG2), (E, B-TAG2) will end up as [{"word": "ABC", "entity": "TAG"}, {"word": "D",
"entity": "TAG2"}, {"word": "E", "entity": "TAG2"}]. Notice that two consecutive B tags will end up as
different entities. On word-based languages, we might end up splitting words undesirably: imagine
Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity":
"NAME"}]. Look at the FIRST, MAX and AVERAGE options for ways to mitigate this and disambiguate words (on
languages that support that meaning, which is basically tokens separated by a space). These mitigations
will only work on real words; "New york" might still be tagged with two different entities.
- "first": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot
end up with different tags. Words will simply use the tag of the first token of the word when there
is ambiguity.
- "average": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot
end up with different tags. Scores are first averaged across tokens, and then the label with the maximum
score is applied.
- "max": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot
end up with different tags. The word entity will simply be the token with the maximum score.
- __call__
- all
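To see what `aggregation_strategy` changes in practice, here is a small sketch (reusing the NER checkpoint from the
example above; outputs elided):
```python
>>> from transformers import pipeline

>>> raw = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="none")
>>> grouped = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="max")

>>> raw("Je m'appelle jean-baptiste et je vis à montréal")  # one dict per token, with an 'entity' key
>>> grouped("Je m'appelle jean-baptiste et je vis à montréal")  # one dict per word/entity, with an 'entity_group' key
```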
## TranslationPipeline

Translates from one language to another.
This translation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"translation_xx_to_yy"`.
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the
up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?filter=translation).
For a list of available parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate).
Usage:
```python
from transformers import pipeline

en_fr_translator = pipeline("translation_en_to_fr")
en_fr_translator("How old are you?")
```
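You can also select an explicit checkpoint instead of relying on the task default (the model below is just one
example of a translation checkpoint):
```python
from transformers import pipeline

en_de_translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
en_de_translator("How old are you?")
```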
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from | 456_31_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#translationpipeline | .md | The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*): | 456_31_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#translationpipeline | .md | Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and
both frameworks are installed, will default to the framework of the `model`, or to PyTorch if no model is
provided.
task (`str`, defaults to `""`):
A task-identifier for the pipeline. | 456_31_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#translationpipeline | .md | provided.
task (`str`, defaults to `""`):
A task-identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline will use *DataLoader* (when passing a dataset, on GPU for a Pytorch model), the number of
workers to be used.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline will use *DataLoader* (when passing a dataset, on GPU for a Pytorch model), the size of
the batch to use, for inference this is not always beneficial, please read [Batching with | 456_31_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#translationpipeline | .md | the batch to use, for inference this is not always beneficial, please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching) .
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on | 456_31_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#translationpipeline | .md | Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on
the associated CUDA device id. You can pass native `torch.device` or a `str` too
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`)
binary_output (`bool`, *optional*, defaults to `False`): | 456_31_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#translationpipeline | .md | (`torch.float16`, `torch.bfloat16`, ... or `"auto"`)
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
the raw output data, e.g., text.
- __call__
- all | 456_31_7 |
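To make the `device` and `torch_dtype` arguments above concrete, here is a minimal sketch that places a translation pipeline on the first CUDA device in half precision (assuming PyTorch and a CUDA-capable GPU are available):
```python
import torch

from transformers import pipeline

# device=0 selects the first CUDA device; the default of -1 stays on CPU.
# torch_dtype is forwarded to the model as a model_kwargs shortcut.
en_fr_translator = pipeline(
    "translation_en_to_fr",
    device=0,
    torch_dtype=torch.float16,
)
print(en_fr_translator("How old are you?"))
```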
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | NLI-based zero-shot classification pipeline using a `ModelForSequenceClassification` trained on NLI (natural
language inference) tasks. Equivalent of `text-classification` pipelines, but these models don't require a
hardcoded number of potential classes; they can be chosen at runtime. This usually means it is slower, but it is
**much** more flexible.
Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis | 456_32_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
pair and passed to the pretrained model. Then, the logit for *entailment* is taken as the logit for the candidate
label being valid. Any NLI model can be used, but the id of the *entailment* label must be included in the model
config's [`~transformers.PretrainedConfig.label2id`].
Example:
```python
>>> from transformers import pipeline | 456_32_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | >>> oracle = pipeline(model="facebook/bart-large-mnli")
>>> oracle(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]} | 456_32_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | >>> oracle(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["english", "german"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['english', 'german'], 'scores': [0.814, 0.186]}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
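When candidate labels are not mutually exclusive, the `multi_label=True` call argument scores each label independently (one entailment-vs-contradiction decision per label), so the scores no longer sum to 1. A short sketch reusing the pipeline above; the exact scores returned are illustrative:
```python
from transformers import pipeline

oracle = pipeline(model="facebook/bart-large-mnli")
# Each label is scored on its own, so several labels can score near 1.0 at once.
oracle(
    "I have a problem with my iphone that needs to be resolved asap!!",
    candidate_labels=["urgent", "phone"],
    multi_label=True,
)
```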
This NLI pipeline can currently be loaded from [`pipeline`] using the following task identifier: | 456_32_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | This NLI pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-classification"`.
The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list
of available models on [huggingface.co/models](https://huggingface.co/models?search=nli).
Arguments:
model ([`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from | 456_32_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`] for PyTorch and [`TFPreTrainedModel`] for TensorFlow.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
modelcard (`str` or [`ModelCard`], *optional*):
Model card attributed to the model for this pipeline.
framework (`str`, *optional*): | 456_32_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | Model card attributed to the model for this pipeline.
framework (`str`, *optional*):
The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and
both frameworks are installed, will default to the framework of the `model`, or to PyTorch if no model is
provided.
task (`str`, defaults to `""`):
A task-identifier for the pipeline. | 456_32_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | provided.
task (`str`, defaults to `""`):
A task-identifier for the pipeline.
num_workers (`int`, *optional*, defaults to 8):
When the pipeline uses a *DataLoader* (i.e., when passing a dataset, on GPU for a PyTorch model), the number of
workers to be used.
batch_size (`int`, *optional*, defaults to 1):
When the pipeline uses a *DataLoader* (i.e., when passing a dataset, on GPU for a PyTorch model), the size of
the batch to use. For inference this is not always beneficial; please read [Batching with | 456_32_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | the batch to use. For inference this is not always beneficial; please read [Batching with
pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#pipeline-batching).
args_parser ([`~pipelines.ArgumentHandler`], *optional*):
Reference to the object in charge of parsing supplied pipeline parameters.
device (`int`, *optional*, defaults to -1):
Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on | 456_32_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on
the associated CUDA device id. You can also pass a native `torch.device` or a `str`.
torch_dtype (`str` or `torch.dtype`, *optional*):
Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
(`torch.float16`, `torch.bfloat16`, ... or `"auto"`)
binary_output (`bool`, *optional*, defaults to `False`): | 456_32_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md | https://huggingface.co/docs/transformers/en/main_classes/pipelines/#zeroshotclassificationpipeline | .md | (`torch.float16`, `torch.bfloat16`, ... or `"auto"`)
binary_output (`bool`, *optional*, defaults to `False`):
Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as
the raw output data, e.g., text.
- __call__
- all | 456_32_10 |
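As a rough illustration of the `num_workers` and `batch_size` arguments above, the sketch below streams a Hugging Face Datasets split through the pipeline on GPU. The dataset and column names are placeholders, and whether batching actually helps should be measured, as the linked batching notes explain:
```python
from datasets import load_dataset

from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

# Placeholder dataset/column; batch_size trades memory for throughput and
# is not always a win for inference -- benchmark before settling on a value.
dataset = load_dataset("imdb", split="test")
classifier = pipeline(model="facebook/bart-large-mnli", device=0, batch_size=8)
for output in classifier(
    KeyDataset(dataset, "text"),
    candidate_labels=["positive", "negative"],
):
    print(output)
```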