source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#code-execution | .md | A Python interpreter executes the code on a set of inputs passed along with your tools.
This should be safe because the only functions that can be called are the tools you provided (especially if it's only tools by Hugging Face) and the print function, so you're already limited in what can be executed.
The Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue. | 56_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#code-execution | .md | You can still authorize additional imports by passing the authorized modules as a list of strings in argument `additional_authorized_imports` upon initialization of your [`ReactCodeAgent`] or [`CodeAgent`]:
```py
>>> from transformers import ReactCodeAgent | 56_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#code-execution | .md | >>> agent = ReactCodeAgent(tools=[], additional_authorized_imports=['requests', 'bs4'])
>>> agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
(...)
'Hugging Face – Blog'
```
The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent.
> [!WARNING]
> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports! | 56_6_2 |
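To illustrate, here is a minimal sketch of that behavior, assuming the same default LLM engine as in the example above. The exact error message is an assumption, but any generated code that imports a module outside the authorized list stops the run at that step:
```py
>>> from transformers import ReactCodeAgent

>>> # Only `requests` is authorized here; `os`, `subprocess`, etc. remain forbidden
>>> agent = ReactCodeAgent(tools=[], additional_authorized_imports=["requests"])

>>> # If the generated code contains e.g. `import os`, the interpreter rejects it
>>> # and the step fails with an error along the lines of "Import of os is not allowed".
>>> agent.run("List the files in the current working directory.")
```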
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | An agent, or rather the LLM that drives the agent, generates an output based on the system prompt. The system prompt can be customized and tailored to the intended task. For example, check the system prompt for the [`ReactCodeAgent`] (below version is slightly simplified).
```text
You will be given a task to solve as best you can.
You have access to the following tools:
<<tool_descriptions>> | 56_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences. | 56_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.
Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '/End code' sequence.
During each intermediate step, you can use 'print()' to save whatever important information you will then need. | 56_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | During each intermediate step, you can use 'print()' to save whatever important information you will then need.
These print outputs will then be available in the 'Observation:' field, for using this information as input for the next step. | 56_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | In the end you have to return a final answer using the `final_answer` tool.
Here are a few examples using notional tools:
---
{examples}
The examples above used notional tools that might not exist for you. You only have access to these tools:
<<tool_names>>
You can also perform computations in the Python code you generate.
Always provide a 'Thought:' and a 'Code:\n```py' sequence ending with '```<end_code>' sequence. You MUST provide at least the 'Code:' sequence to move forward. | 56_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | Remember to not perform too many operations in a single code block! You should split the task into intermediate code blocks.
Print results at the end of each step to save the intermediate results. Then use final_answer() to return the final result.
Remember to make sure that variables you use are all defined. | 56_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | Now Begin!
```
The system prompt includes:
- An *introduction* that explains how the agent should behave and what tools are.
- A description of all the tools that is defined by a `<<tool_descriptions>>` token that is dynamically replaced at runtime with the tools defined/chosen by the user.
- The tool description comes from the tool attributes, `name`, `description`, `inputs` and `output_type`, and a simple `jinja2` template that you can refine.
- The expected output format. | 56_7_6 |
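As a rough sketch of how such a template could render a tool's attributes into the prompt (this is an illustrative template and stand-in tool, not the exact ones shipped with the library):
```py
>>> from jinja2 import Template

>>> # Illustrative description template built from the tool attributes listed above
>>> tool_template = Template(
...     "- {{ tool.name }}: {{ tool.description }}\n"
...     "    Takes inputs: {{ tool.inputs }}\n"
...     "    Returns an output of type: {{ tool.output_type }}"
... )

>>> class ExampleTool:  # stand-in exposing the same attributes as a real tool
...     name = "image_generator"
...     description = "Generates an image from a text prompt."
...     inputs = {"prompt": {"type": "string", "description": "The text prompt"}}
...     output_type = "image"

>>> print(tool_template.render(tool=ExampleTool()))
- image_generator: Generates an image from a text prompt.
    Takes inputs: {'prompt': {'type': 'string', 'description': 'The text prompt'}}
    Returns an output of type: image
```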
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | - The expected output format.
You could improve the system prompt, for example, by adding an explanation of the output format.
For maximum flexibility, you can overwrite the whole system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter.
```python
from transformers import ReactJsonAgent
from transformers.agents import PythonInterpreterTool | 56_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#the-system-prompt | .md | agent = ReactJsonAgent(tools=[PythonInterpreterTool()], system_prompt="{your_custom_prompt}")
```
> [!WARNING]
> Please make sure to define the `<<tool_descriptions>>` string somewhere in the `template` so the agent is aware
of the available tools. | 56_7_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#inspecting-an-agent-run | .md | Here are a few useful attributes to inspect what happened after a run:
- `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that then is appended to `agent.logs`. | 56_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#inspecting-an-agent-run | .md | - Running `agent.write_inner_memory_from_logs()` creates an inner memory of the agent's logs for the LLM to view, as a list of chat messages. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be | 56_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#inspecting-an-agent-run | .md | tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method. | 56_8_2 |
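As a sketch, assuming `agent` has already completed a run, and assuming the log dictionaries' exact keys vary by step and the messages follow the usual `role`/`content` chat format:
```py
>>> # Fine-grained view: one dictionary per step
>>> for step_log in agent.logs:
...     print(step_log.keys())  # e.g. the LLM output, the tool call, the observation or error

>>> # Higher-level view: the run replayed as a list of chat messages
>>> messages = agent.write_inner_memory_from_logs()
>>> for message in messages:
...     print(message["role"], message["content"][:80])
```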
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#tools | .md | A tool is an atomic function to be used by an agent.
You can for instance check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `__call__` method to perform the action.
When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why. | 56_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#default-toolbox | .md | Transformers comes with a default toolbox for empowering agents, which you can add to your agent upon initialization with the argument `add_base_tools=True`:
- **Document question answering**: given a document (such as a PDF) in image format, answer a question on this document ([Donut](./model_doc/donut))
- **Image question answering**: given an image, answer a question on this image ([VILT](./model_doc/vilt)) | 56_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#default-toolbox | .md | - **Image question answering**: given an image, answer a question on this image ([VILT](./model_doc/vilt))
- **Speech to text**: given an audio recording of a person talking, transcribe the speech into text ([Whisper](./model_doc/whisper))
- **Text to speech**: convert text to speech ([SpeechT5](./model_doc/speecht5))
- **Translation**: translates a given sentence from source language to target language.
- **DuckDuckGo search**: performs a web search using DuckDuckGo. | 56_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#default-toolbox | .md | - **DuckDuckGo search**: performs a web search using DuckDuckGo.
- **Python code interpreter**: runs the LLM-generated Python code in a secure environment. This tool will only be added to [`ReactJsonAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code
You can manually use a tool by calling the [`load_tool`] function with a task to perform.
```python
from transformers import load_tool | 56_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#default-toolbox | .md | tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")
``` | 56_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#create-a-new-tool | .md | You can create your own tool for use cases not covered by the default tools from Hugging Face.
For example, let's create a tool that returns the most downloaded model for a given task from the Hub.
You'll start with the code below.
```python
from huggingface_hub import list_models
task = "text-classification" | 56_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#create-a-new-tool | .md | task = "text-classification"
model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(model.id)
```
This code can quickly be converted into a tool, just by wrapping it in a function and adding the `tool` decorator:
```py
from transformers import tool
@tool
def model_download_tool(task: str) -> str:
"""
This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
It returns the name of the checkpoint. | 56_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#create-a-new-tool | .md | Args:
task: The task to get the most downloaded model for.
"""
model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
return model.id
```
The function needs:
- A clear name. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's put `model_download_tool`.
- Type hints on both inputs and output | 56_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#create-a-new-tool | .md | - Type hints on both inputs and output
- A description that includes an 'Args:' part where each argument is described (without a type indication this time; it will be pulled from the type hint).
All these will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
> [!TIP] | 56_11_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#create-a-new-tool | .md | > [!TIP]
> This definition format is the same as tool schemas used in `apply_chat_template`, the only difference is the added `tool` decorator: read more on our tool use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).
Then you can directly initialize your agent:
```py
from transformers import CodeAgent
agent = CodeAgent(tools=[model_download_tool], llm_engine=llm_engine)
agent.run( | 56_11_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#create-a-new-tool | .md | ```py
from transformers import CodeAgent
agent = CodeAgent(tools=[model_download_tool], llm_engine=llm_engine)
agent.run(
"Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```
You get the following:
```text
======== New task ========
Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?
==== Agent is executing the code below: | 56_11_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#create-a-new-tool | .md | ==== Agent is executing the code below:
most_downloaded_model = model_download_tool(task="text-to-video")
print(f"The most downloaded model for the 'text-to-video' task is {most_downloaded_model}.")
====
```
And the output:
`"The most downloaded model for the 'text-to-video' task is ByteDance/AnimateDiff-Lightning."` | 56_11_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#manage-your-agents-toolbox | .md | If you have already initialized an agent, it is inconvenient to reinitialize it from scratch with a tool you want to use. With Transformers, you can manage an agent's toolbox by adding or replacing a tool.
Let's add the `model_download_tool` to an existing agent initialized with only the default toolbox.
```python
from transformers import CodeAgent | 56_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#manage-your-agents-toolbox | .md | agent = CodeAgent(tools=[], llm_engine=llm_engine, add_base_tools=True)
agent.toolbox.add_tool(model_download_tool)
```
Now we can leverage both the new tool and the previous text-to-speech tool:
```python
agent.run(
"Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub and return the audio?"
)
``` | 56_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#manage-your-agents-toolbox | .md | )
```
| **Audio** |
|------------------------------------------------------------------------------------------------------------------------------------------------------|
| <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/damo.wav" type="audio/wav"/></audio> |
> [!WARNING] | 56_12_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#manage-your-agents-toolbox | .md | > [!WARNING]
> Beware when adding tools to an agent that already works well, because it can bias selection towards your tool or cause a tool other than the one already defined to be selected.
Use the `agent.toolbox.update_tool()` method to replace an existing tool in the agent's toolbox.
This is useful if your new tool is a one-to-one replacement of the existing tool because the agent already knows how to perform that specific task. | 56_12_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#manage-your-agents-toolbox | .md | Just make sure the new tool follows the same API as the replaced tool or adapt the system prompt template to ensure all examples using the replaced tool are updated. | 56_12_4 |
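As a sketch, assuming `agent` was created as above and already has a `model_download_tool` registered, a drop-in replacement that keeps the same name, argument and return type could be swapped in like this:
```python
from huggingface_hub import list_models
from transformers import tool

@tool
def model_download_tool(task: str) -> str:
    """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint.

    Args:
        task: The task to get the most downloaded model for.
    """
    model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return model.id

# Replace the tool of the same name already registered in the agent's toolbox
agent.toolbox.update_tool(model_download_tool)
```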
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#use-a-collection-of-tools | .md | You can leverage tool collections by using the ToolCollection object, with the slug of the collection you want to use.
Then pass them as a list to initialize your agent, and start using them!
```py
from transformers import ToolCollection, ReactCodeAgent
image_tool_collection = ToolCollection(collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f")
agent = ReactCodeAgent(tools=[*image_tool_collection.tools], add_base_tools=True) | 56_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents.md | https://huggingface.co/docs/transformers/en/agents/#use-a-collection-of-tools | .md | agent.run("Please draw me a picture of rivers and lakes.")
```
To speed up the start, tools are loaded only if called by the agent.
This gets you this image:
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png"> | 56_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 57_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 57_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | 🤗 Transformers is an opinionated library built for:
- machine learning researchers and educators seeking to use, study or extend large-scale Transformers models.
- hands-on practitioners who want to fine-tune those models or serve them in production, or both.
- engineers who just want to download a pretrained model and use it to solve a given machine learning task.
The library was designed with two strong goals in mind:
1. Be as easy and fast to use as possible: | 57_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | The library was designed with two strong goals in mind:
1. Be as easy and fast to use as possible:
- We strongly limited the number of user-facing abstractions to learn; in fact, there are almost no abstractions,
just three standard classes required to use each model: [configuration](main_classes/configuration), | 57_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | just three standard classes required to use each model: [configuration](main_classes/configuration),
[models](main_classes/model), and a preprocessing class ([tokenizer](main_classes/tokenizer) for NLP, [image processor](main_classes/image_processor) for vision, [feature extractor](main_classes/feature_extractor) for audio, and [processor](main_classes/processors) for multimodal inputs).
- All of these classes can be initialized in a simple and unified way from pretrained instances by using a common | 57_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | - All of these classes can be initialized in a simple and unified way from pretrained instances by using a common
`from_pretrained()` method which downloads (if needed), caches and
loads the related class instance and associated data (configurations' hyperparameters, tokenizers' vocabulary,
and models' weights) from a pretrained checkpoint provided on [Hugging Face Hub](https://huggingface.co/models) or your own saved checkpoint. | 57_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | - On top of those three base classes, the library provides two APIs: [`pipeline`] for quickly
using a model for inference on a given task and [`Trainer`] to quickly train or fine-tune a PyTorch model (all TensorFlow models are compatible with `Keras.fit`).
- As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to
extend or build upon the library, just use regular Python, PyTorch, TensorFlow, Keras modules and inherit from the base | 57_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | extend or build upon the library, just use regular Python, PyTorch, TensorFlow, Keras modules and inherit from the base
classes of the library to reuse functionalities like model loading and saving. If you'd like to learn more about our coding philosophy for models, check out our [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) blog post.
2. Provide state-of-the-art models with performances as close as possible to the original models: | 57_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | 2. Provide state-of-the-art models with performances as close as possible to the original models:
- We provide at least one example for each architecture which reproduces a result provided by the official authors
of said architecture.
- The code is usually as close to the original code base as possible, which means some PyTorch code may not be as
*pytorchic* as it could be as a result of being converted from TensorFlow code, and vice versa.
A few other goals: | 57_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | *pytorchic* as it could be as a result of being converted from TensorFlow code, and vice versa.
A few other goals:
- Expose the models' internals as consistently as possible:
- We give access, using a single API, to the full hidden-states and attention weights.
- The preprocessing classes and base model APIs are standardized to easily switch between models.
- Incorporate a subjective selection of promising tools for fine-tuning and investigating these models: | 57_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#philosophy | .md | - Incorporate a subjective selection of promising tools for fine-tuning and investigating these models:
- A simple and consistent way to add new tokens to the vocabulary and embeddings for fine-tuning.
- Simple ways to mask and prune Transformer heads.
- Easily switch between PyTorch, TensorFlow 2.0 and Flax, allowing training with one framework and inference with another. | 57_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#main-concepts | .md | The library is built around three types of classes for each model:
- **Model classes** can be PyTorch models ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)), Keras models ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)) or JAX/Flax models ([flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)) that work with the pretrained weights provided in the library. | 57_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#main-concepts | .md | - **Configuration classes** store the hyperparameters required to build a model (such as the number of layers and hidden size). You don't always need to instantiate these yourself. In particular, if you are using a pretrained model without any modification, creating the model will automatically take care of instantiating the configuration (which is part of the model). | 57_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#main-concepts | .md | - **Preprocessing classes** convert the raw data into a format accepted by the model. A [tokenizer](main_classes/tokenizer) stores the vocabulary for each model and provides methods for encoding and decoding strings into a list of token embedding indices to be fed to a model. [Image processors](main_classes/image_processor) preprocess vision inputs, [feature extractors](main_classes/feature_extractor) preprocess audio inputs, and a [processor](main_classes/processors) handles multimodal inputs. | 57_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#main-concepts | .md | All these classes can be instantiated from pretrained instances, saved locally, and shared on the Hub with three methods:
- `from_pretrained()` lets you instantiate a model, configuration, and preprocessing class from a pretrained version either
provided by the library itself (the supported models can be found on the [Model Hub](https://huggingface.co/models)) or
stored locally (or on a server) by the user. | 57_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/philosophy.md | https://huggingface.co/docs/transformers/en/philosophy/#main-concepts | .md | stored locally (or on a server) by the user.
- `save_pretrained()` lets you save a model, configuration, and preprocessing class locally so that it can be reloaded using
`from_pretrained()`.
- `push_to_hub()` lets you share a model, configuration, and a preprocessing class to the Hub, so it is easily accessible to everyone. | 57_2_4 |
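For illustration, here is a minimal sketch of these three methods in action; the checkpoint is just an example from the Hub, and the local directory and repository names are placeholders:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# from_pretrained(): download (if needed), cache and load the configuration, weights and vocabulary
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

# save_pretrained(): write everything to a local folder so from_pretrained() can reload it
model.save_pretrained("./my-checkpoint")
tokenizer.save_pretrained("./my-checkpoint")

# push_to_hub(): share the same files on the Hub (requires authentication; repository name is a placeholder)
# model.push_to_hub("my-username/my-checkpoint")
# tokenizer.push_to_hub("my-username/my-checkpoint")
```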
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 58_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 58_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | In this guide, we will see how to create a custom pipeline and share it on the [Hub](https://hf.co/models) or add it to the
🤗 Transformers library.
First and foremost, you need to decide the raw entries the pipeline will be able to take. It can be strings, raw bytes,
dictionaries or whatever seems to be the most likely desired input. Try to keep these inputs as pure Python as possible
as it makes compatibility easier (even through other languages via JSON). Those will be the `inputs` of the | 58_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | as it makes compatibility easier (even through other languages via JSON). Those will be the `inputs` of the
pipeline (`preprocess`).
Then define the `outputs`. Same policy as the `inputs`. The simpler, the better. Those will be the outputs of
`postprocess` method.
Start by inheriting the base class `Pipeline` and implementing the 4 required methods: `preprocess`,
`_forward`, `postprocess`, and `_sanitize_parameters`.
```python
from transformers import Pipeline | 58_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | class MyPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "maybe_arg" in kwargs:
preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
return preprocess_kwargs, {}, {}
def preprocess(self, inputs, maybe_arg=2):
model_input = Tensor(inputs["input_ids"])
return {"model_input": model_input}
def _forward(self, model_inputs):
# model_inputs == {"model_input": model_input}
outputs = self.model(**model_inputs)
# Maybe {"logits": Tensor(...)}
return outputs | 58_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | def postprocess(self, model_outputs):
best_class = model_outputs["logits"].softmax(-1)
return best_class
```
This breakdown is structured to provide relatively seamless CPU/GPU support, while allowing
pre/postprocessing to be done on the CPU in different threads.
`preprocess` will take the originally defined inputs, and turn them into something feedable to the model. It might
contain more information and is usually a `Dict`. | 58_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | contain more information and is usually a `Dict`.
`_forward` is the implementation detail and is not meant to be called directly. `forward` is the preferred
method to call as it contains safeguards to make sure everything works on the expected device. Anything
linked to a real model belongs in the `_forward` method; anything else goes in `preprocess`/`postprocess`.
The `postprocess` method takes the output of `_forward` and turns it into the final output that was decided
earlier.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | The `postprocess` method takes the output of `_forward` and turns it into the final output that was decided
earlier.
`_sanitize_parameters` exists to allow users to pass any parameters whenever they wish, be it at initialization
time `pipeline(...., maybe_arg=4)` or at call time `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`.
The returns of `_sanitize_parameters` are the 3 dicts of kwargs that will be passed directly to `preprocess`, | 58_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | The returns of `_sanitize_parameters` are the 3 dicts of kwargs that will be passed directly to `preprocess`,
`_forward`, and `postprocess`. Don't fill anything if the caller didn't call with any extra parameter. That
allows keeping the default arguments in the function definition, which is always more "natural".
A classic example would be a `top_k` argument in the post-processing of classification tasks.
```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test") | 58_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | ```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05}
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}] | 58_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | >>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
```
In order to achieve that, we'll update our `postprocess` method with a default parameter set to `5`, and edit
`_sanitize_parameters` to allow this new parameter.
```python
def postprocess(self, model_outputs, top_k=5):
best_class = model_outputs["logits"].softmax(-1)
# Add logic to handle top_k
return best_class | 58_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "maybe_arg" in kwargs:
preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] | 58_1_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#how-to-create-a-custom-pipeline | .md | postprocess_kwargs = {}
if "top_k" in kwargs:
postprocess_kwargs["top_k"] = kwargs["top_k"]
return preprocess_kwargs, {}, postprocess_kwargs
```
Try to keep the inputs/outputs very simple and ideally JSON-serializable as it makes the pipeline usage very easy
without requiring users to understand new kinds of objects. It's also relatively common to support many different types
of arguments for ease of use (audio files, which can be filenames, URLs or pure bytes). | 58_1_10 |
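For example, an audio pipeline's `preprocess` might first normalize those input types into raw bytes before doing any real work. This is a generic sketch, not an API provided by the library:
```python
import requests

def to_audio_bytes(inputs):
    """Accept a URL, a local filename or raw bytes and always return raw bytes."""
    if isinstance(inputs, bytes):
        return inputs
    if isinstance(inputs, str) and inputs.startswith(("http://", "https://")):
        return requests.get(inputs).content  # download remote audio
    with open(inputs, "rb") as f:            # treat any other string as a local path
        return f.read()
```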
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#adding-it-to-the-list-of-supported-tasks | .md | To register your `new-task` to the list of supported tasks, you have to add it to the `PIPELINE_REGISTRY`:
```python
from transformers.pipelines import PIPELINE_REGISTRY | 58_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#adding-it-to-the-list-of-supported-tasks | .md | PIPELINE_REGISTRY.register_pipeline(
"new-task",
pipeline_class=MyPipeline,
pt_model=AutoModelForSequenceClassification,
)
```
You can specify a default model if you want, in which case it should come with a specific revision (which can be the name of a branch or a commit hash, here we took `"abcdef"`) as well as the type:
```python
PIPELINE_REGISTRY.register_pipeline(
"new-task",
pipeline_class=MyPipeline,
pt_model=AutoModelForSequenceClassification,
default={"pt": ("user/awesome_model", "abcdef")}, | 58_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#adding-it-to-the-list-of-supported-tasks | .md | pipeline_class=MyPipeline,
pt_model=AutoModelForSequenceClassification,
default={"pt": ("user/awesome_model", "abcdef")},
type="text", # current support type: text, audio, image, multimodal
)
``` | 58_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | To share your custom pipeline on the Hub, you just have to save the custom code of your `Pipeline` subclass in a
python file. For instance, let's say we want to use a custom pipeline for sentence pair classification like this:
```py
import numpy as np
from transformers import Pipeline
def softmax(outputs):
maxes = np.max(outputs, axis=-1, keepdims=True)
shifted_exp = np.exp(outputs - maxes)
return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) | 58_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | class PairClassificationPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "second_text" in kwargs:
preprocess_kwargs["second_text"] = kwargs["second_text"]
return preprocess_kwargs, {}, {}
def preprocess(self, text, second_text=None):
return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)
def _forward(self, model_inputs):
return self.model(**model_inputs) | 58_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | def _forward(self, model_inputs):
return self.model(**model_inputs)
def postprocess(self, model_outputs):
logits = model_outputs.logits[0].numpy()
probabilities = softmax(logits) | 58_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | best_class = np.argmax(probabilities)
label = self.model.config.id2label[best_class]
score = probabilities[best_class].item()
logits = logits.tolist()
return {"label": label, "score": score, "logits": logits}
```
The implementation is framework agnostic, and will work for PyTorch and TensorFlow models. If we have saved this in
a file named `pair_classification.py`, we can then import it and register it like this.
```py
from pair_classification import PairClassificationPipeline | 58_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | ```py
from pair_classification import PairClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification | 58_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | PIPELINE_REGISTRY.register_pipeline(
"pair-classification",
pipeline_class=PairClassificationPipeline,
pt_model=AutoModelForSequenceClassification,
tf_model=TFAutoModelForSequenceClassification,
)
```
The [register_pipeline](https://github.com/huggingface/transformers/blob/9feae5fb0164e89d4998e5776897c16f7330d3df/src/transformers/pipelines/base.py#L1387) function registers the pipeline details (task type, pipeline class, supported backends) to a model's `config.json` file.
```json
"custom_pipelines": { | 58_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | ```json
"custom_pipelines": {
"pair-classification": {
"impl": "pair_classification.PairClassificationPipeline",
"pt": [
"AutoModelForSequenceClassification"
],
"tf": [
"TFAutoModelForSequenceClassification"
]
}
},
```
Once this is done, we can use it with a pretrained model. For instance `sgugger/finetuned-bert-mrpc` has been
fine-tuned on the MRPC dataset, which classifies pairs of sentences as paraphrases or not.
```py
from transformers import pipeline | 58_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
```
Then we can share it on the Hub by using the `push_to_hub` method:
```py
classifier.push_to_hub("test-dynamic-pipeline")
```
This will copy the file where you defined `PairClassificationPipeline` inside the folder `"test-dynamic-pipeline"`,
along with saving the model and tokenizer of the pipeline, before pushing everything into the repository | 58_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | along with saving the model and tokenizer of the pipeline, before pushing everything into the repository
`{your_username}/test-dynamic-pipeline`. After that, anyone can use it as long as they provide the option
`trust_remote_code=True`:
```py
from transformers import pipeline | 58_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#share-your-pipeline-on-the-hub | .md | classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True)
``` | 58_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#add-the-pipeline-to--transformers | .md | If you want to contribute your pipeline to 🤗 Transformers, you will need to add a new module in the `pipelines` submodule
with the code of your pipeline, then add it to the list of tasks defined in `pipelines/__init__.py`.
Then you will need to add tests. Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples of the other tests.
The `run_pipeline_test` function will be very generic and run on small random models on every possible | 58_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#add-the-pipeline-to--transformers | .md | The `run_pipeline_test` function will be very generic and run on small random models on every possible
architecture as defined by `model_mapping` and `tf_model_mapping`.
This is very important to test future compatibility, meaning if someone adds a new model for
`XXXForQuestionAnswering` then the pipeline test will attempt to run on it. Because the models are random it's
impossible to check for actual values, that's why there is a helper `ANY` that will simply attempt to match the | 58_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#add-the-pipeline-to--transformers | .md | impossible to check for actual values, that's why there is a helper `ANY` that will simply attempt to match the
TYPE of the pipeline output.
You also *need* to implement 2 (ideally 4) tests.
- `test_small_model_pt` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense)
and test the pipeline outputs. The results should be the same as `test_small_model_tf`.
- `test_small_model_tf` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense) | 58_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#add-the-pipeline-to--transformers | .md | - `test_small_model_tf` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense)
and test the pipeline outputs. The results should be the same as `test_small_model_pt`.
- `test_large_model_pt` (`optional`): Tests the pipeline on a real checkpoint where the results are supposed to
make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
sure there is no drift in future releases. | 58_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/add_new_pipeline.md | https://huggingface.co/docs/transformers/en/add_new_pipeline/#add-the-pipeline-to--transformers | .md | sure there is no drift in future releases.
- `test_large_model_tf` (`optional`): Tests the pipeline on a real checkpoint where the results are supposed to
make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
sure there is no drift in future releases. | 58_4_4 |
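A rough skeleton of such a test file, assuming the `pair-classification` pipeline from the previous section has been registered; the checkpoint names and decorators are illustrative, so follow the existing `tests/test_pipelines_*.py` files for the exact conventions:
```python
import unittest

from transformers import pipeline
from transformers.testing_utils import require_torch, slow


class PairClassificationPipelineTests(unittest.TestCase):
    @require_torch
    def test_small_model_pt(self):
        # Tiny random checkpoint: the values don't need to make sense, only the output structure is checked
        classifier = pipeline("pair-classification", model="hf-internal-testing/tiny-random-bert")
        outputs = classifier("This is a test", second_text="Another sentence")
        self.assertEqual(set(outputs.keys()), {"label", "score", "logits"})

    @slow
    @require_torch
    def test_large_model_pt(self):
        # Real checkpoint: results should make sense and must not drift across releases;
        # a real test would also pin the expected label and score here
        classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
        outputs = classifier("I like pizza", second_text="Pizza is something I enjoy")
        self.assertEqual(set(outputs.keys()), {"label", "score", "logits"})
```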
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 59_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 59_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#multilingual-models-for-inference | .md | [[open-in-colab]]
There are several multilingual models in 🤗 Transformers, and their inference usage differs from monolingual models. Not *all* multilingual model usage is different though. Some models, like [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased), can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs for inference. | 59_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm | .md | XLM has ten different checkpoints, only one of which is monolingual. The nine remaining model checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don't. | 59_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-with-language-embeddings | .md | The following XLM models use language embeddings to specify the language used at inference:
- `FacebookAI/xlm-mlm-ende-1024` (Masked language modeling, English-German)
- `FacebookAI/xlm-mlm-enfr-1024` (Masked language modeling, English-French)
- `FacebookAI/xlm-mlm-enro-1024` (Masked language modeling, English-Romanian)
- `FacebookAI/xlm-mlm-xnli15-1024` (Masked language modeling, XNLI languages)
- `FacebookAI/xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, XNLI languages) | 59_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-with-language-embeddings | .md | - `FacebookAI/xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, XNLI languages)
- `FacebookAI/xlm-clm-enfr-1024` (Causal language modeling, English-French)
- `FacebookAI/xlm-clm-ende-1024` (Causal language modeling, English-German)
Language embeddings are represented as a tensor of the same shape as the `input_ids` passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer's `lang2id` and `id2lang` attributes. | 59_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-with-language-embeddings | .md | In this example, load the `FacebookAI/xlm-clm-enfr-1024` checkpoint (Causal language modeling, English-French):
```py
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel | 59_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-with-language-embeddings | .md | >>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
```
The `lang2id` attribute of the tokenizer displays this model's languages and their ids:
```py
>>> print(tokenizer.lang2id)
{'en': 0, 'fr': 1}
```
Next, create an example input:
```py
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1
``` | 59_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-with-language-embeddings | .md | ```py
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1
```
Set the language id as `"en"` and use it to define the language embedding. The language embedding is a tensor filled with `0` since that is the language id for English. This tensor should be the same size as `input_ids`.
```py
>>> language_id = tokenizer.lang2id["en"] # 0
>>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) | 59_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-with-language-embeddings | .md | >>> # We reshape it to be of size (batch_size, sequence_length)
>>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1)
```
Now you can pass the `input_ids` and language embedding to the model:
```py
>>> outputs = model(input_ids, langs=langs)
```
The [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) script can generate text with language embeddings using the `xlm-clm` checkpoints. | 59_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-without-language-embeddings | .md | The following XLM models do not require language embeddings during inference:
- `FacebookAI/xlm-mlm-17-1280` (Masked language modeling, 17 languages)
- `FacebookAI/xlm-mlm-100-1280` (Masked language modeling, 100 languages)
These models are used for generic sentence representations, unlike the previous XLM checkpoints. | 59_4_0 |
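As a sketch, such a checkpoint is used like any other masked language model, with no `langs` tensor; the example sentence is arbitrary:
```py
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel

>>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-17-1280")
>>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-mlm-17-1280")

>>> text = f"Wikipedia is a free online {tokenizer.mask_token}."
>>> inputs = tokenizer(text, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # no language embedding (`langs`) is passed
```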
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#bert | .md | The following BERT models can be used for multilingual tasks:
- `google-bert/bert-base-multilingual-uncased` (Masked language modeling + Next sentence prediction, 102 languages)
- `google-bert/bert-base-multilingual-cased` (Masked language modeling + Next sentence prediction, 104 languages)
These models do not require language embeddings during inference. They should identify the language from the
context and infer accordingly. | 59_5_0 |
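For example, a quick sketch with the `fill-mask` pipeline, where the language is picked up purely from the sentence (the example sentences are arbitrary):
```py
>>> from transformers import pipeline

>>> fill_mask = pipeline("fill-mask", model="google-bert/bert-base-multilingual-cased")
>>> # No language id is passed; the model infers it from context
>>> fill_mask("Paris is the capital of [MASK].")
>>> fill_mask("Paris est la capitale de la [MASK].")
```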
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#xlm-roberta | .md | The following XLM-RoBERTa models can be used for multilingual tasks:
- `FacebookAI/xlm-roberta-base` (Masked language modeling, 100 languages)
- `FacebookAI/xlm-roberta-large` (Masked language modeling, 100 languages)
XLM-RoBERTa was trained on 2.5TB of newly created and cleaned CommonCrawl data in 100 languages. It provides strong gains over previously released multilingual models like mBERT or XLM on downstream tasks like classification, sequence labeling, and question answering. | 59_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#m2m100 | .md | The following M2M100 models can be used for multilingual translation:
- `facebook/m2m100_418M` (Translation)
- `facebook/m2m100_1.2B` (Translation)
In this example, load the `facebook/m2m100_418M` checkpoint to translate from Chinese to English. You can set the source language in the tokenizer:
```py
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer | 59_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#m2m100 | .md | >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> chinese_text = "不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒." | 59_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#m2m100 | .md | >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
```
Tokenize the text:
```py
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
```
M2M100 forces the target language id as the first generated token to translate to the target language. Set the `forced_bos_token_id` to `en` in the `generate` method to translate to English:
```py | 59_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#m2m100 | .md | ```py
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
['Do not interfere with the matters of the witches, because they are delicate and will soon be angry.']
``` | 59_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#mbart | .md | The following MBart models can be used for multilingual translation:
- `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages)
- `facebook/mbart-large-50` (Multilingual translation, 50 languages)
- `facebook/mbart-large-cc25` | 59_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/multilingual.md | https://huggingface.co/docs/transformers/en/multilingual/#mbart | .md | - `facebook/mbart-large-50` (Multilingual translation, 50 languages)
- `facebook/mbart-large-cc25`
In this example, load the `facebook/mbart-large-50-many-to-many-mmt` checkpoint to translate Finnish to English. You can set the source language in the tokenizer:
```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM | 59_8_1 |
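>>> # A hedged sketch of how the translation could continue, mirroring the M2M100 pattern
>>> # above; the Finnish sentence and the `fi_FI`/`en_XX` language codes are assumptions.
>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```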