source
stringclasses
470 values
url
stringlengths
49
167
file_type
stringclasses
1 value
chunk
stringlengths
1
512
chunk_id
stringlengths
5
9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#mixture-of-experts
.md
(source: [GLAM](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html)) You can find exhaustive details and comparison tables in the papers listed at the end of this section. The main drawback of this approach is that it requires staggering amounts of GPU memory - almost an order of magnitude larger than its dense equivalent. Various distillation and other approaches have been proposed to overcome the much higher memory requirements.
62_20_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#mixture-of-experts
.md
There is a direct trade-off though: you can use just a few experts with a 2-3x smaller base model instead of dozens or hundreds of experts, leading to a 5x smaller model, and thus increase the training speed moderately while increasing the memory requirements moderately as well. Most related papers and implementations are built around TensorFlow/TPUs: - [GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding](https://arxiv.org/abs/2006.16668)
62_20_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#mixture-of-experts
.md
- [GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding](https://arxiv.org/abs/2006.16668) - [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) - [GLaM: Generalist Language Model](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html)
62_20_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#mixture-of-experts
.md
And for PyTorch, DeepSpeed has built one as well: [DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale](https://arxiv.org/abs/2201.05596), [Mixture of Experts](https://www.deepspeed.ai/tutorials/mixture-of-experts/) - blog posts: [1](https://www.microsoft.com/en-us/research/blog/deepspeed-powers-8x-larger-moe-model-training-with-high-performance/),
62_20_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#mixture-of-experts
.md
[1](https://www.microsoft.com/en-us/research/blog/deepspeed-powers-8x-larger-moe-model-training-with-high-performance/), [2](https://www.microsoft.com/en-us/research/publication/scalable-and-efficient-moe-training-for-multitask-multilingual-models/) and specific deployment with large transformer-based natural language generation models: [blog post](https://www.deepspeed.ai/2021/12/09/deepspeed-moe-nlg.html), [Megatron-Deepspeed branch](https://github.com/microsoft/Megatron-DeepSpeed/tree/moe-training).
62_20_6
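The linked papers describe the expert-routing idea in detail; as a rough, self-contained illustration of top-1 routing (all class names, sizes, and the routing choice here are made up for the sketch and are not taken from any of the implementations above):

```python
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    """Toy top-1 routed MoE feed-forward block (illustrative only)."""
    def __init__(self, hidden_size=16, num_experts=4):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts)  # gating network
        self.experts = nn.ModuleList([nn.Linear(hidden_size, hidden_size) for _ in range(num_experts)])

    def forward(self, x):  # x: (num_tokens, hidden_size)
        routing_weights = self.router(x).softmax(dim=-1)
        top_expert = routing_weights.argmax(dim=-1)  # pick one expert per token
        out = torch.zeros_like(x)
        for idx, expert in enumerate(self.experts):
            mask = top_expert == idx
            if mask.any():
                out[mask] = expert(x[mask])  # only the selected expert's weights are used
        return out

print(TinyMoELayer()(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```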
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#using-pytorch-native-attention-and-flash-attention
.md
PyTorch's [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA) can also call FlashAttention and memory-efficient attention kernels under the hood. SDPA support is currently being added natively in Transformers and is used by default for `torch>=2.1.1` when an implementation is available. Please refer to [PyTorch scaled dot product
62_21_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#using-pytorch-native-attention-and-flash-attention
.md
and is used by default for `torch>=2.1.1` when an implementation is available. Please refer to [PyTorch scaled dot product attention](https://huggingface.co/docs/transformers/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) for a list of supported models and more details.
62_21_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_train_gpu_one/#using-pytorch-native-attention-and-flash-attention
.md
Check out this [blog post](https://pytorch.org/blog/out-of-the-box-acceleration/) to learn more about acceleration and memory savings with SDPA.
62_21_2
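If you want to request SDPA explicitly, one hedged sketch (assuming a recent `transformers` release, `torch>=2.1.1`, and a checkpoint that supports SDPA - the checkpoint name here is just an example) is to pass `attn_implementation="sdpa"` to `from_pretrained`:

```python
import torch
from transformers import AutoModelForCausalLM

# Sketch only: explicitly request the SDPA attention implementation.
model = AutoModelForCausalLM.from_pretrained(
    "openai-community/gpt2",      # example checkpoint; support varies per model
    torch_dtype=torch.float16,
    attn_implementation="sdpa",
)
```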
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/
.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
63_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/
.md
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
63_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#installation
.md
Install 🤗 Transformers for whichever deep learning library you're working with, set up your cache, and optionally configure 🤗 Transformers to run offline. 🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: * [PyTorch](https://pytorch.org/get-started/locally/) installation instructions. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions.
63_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#installation
.md
* [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions. * [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.
63_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-with-pip
.md
You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects and avoids compatibility issues between dependencies. Now you're ready to install 🤗 Transformers with the following command: ```bash pip install transformers ```
63_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-with-pip
.md
Now you're ready to install 🤗 Transformers with the following command: ```bash pip install transformers ``` For GPU acceleration, install the appropriate CUDA drivers for [PyTorch](https://pytorch.org/get-started/locally) and [TensorFlow](https://www.tensorflow.org/install/pip). Run the command below to check if your system detects an NVIDIA GPU. ```bash nvidia-smi ```
63_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-with-pip
.md
Run the command below to check if your system detects an NVIDIA GPU. ```bash nvidia-smi ``` For CPU support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with: ```bash pip install 'transformers[torch]' ``` 🤗 Transformers and TensorFlow 2.0: ```bash pip install 'transformers[tf-cpu]' ``` <Tip warning={true}> M1 / ARM Users You will need to install the following before installing TensorFlow 2.0
63_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-with-pip
.md
``` <Tip warning={true}> M1 / ARM Users You will need to install the following before installing TensorFlow 2.0 ```bash brew install cmake brew install pkg-config ``` </Tip> 🤗 Transformers and Flax: ```bash pip install 'transformers[flax]' ``` Finally, check if 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ```
63_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-with-pip
.md
```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` The command then prints out the label and score: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ```
63_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-from-source
.md
Install 🤗 Transformers from source with the following command: ```bash pip install git+https://github.com/huggingface/transformers ```
63_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-from-source
.md
This command installs the bleeding edge `main` version rather than the latest `stable` version. The `main` version is useful for staying up-to-date with the latest developments. For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please
63_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-from-source
.md
`main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner!
63_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-from-source
.md
Check if 🤗 Transformers has been properly installed by running the following command: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ```
63_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#editable-install
.md
You will need an editable install if you'd like to: * Use the `main` version of the source code. * Contribute to 🤗 Transformers and need to test changes in the code. Clone the repository and install 🤗 Transformers with the following commands: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ```
63_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#editable-install
.md
```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` These commands link your Python library paths to the folder you cloned the repository into. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`. <Tip warning={true}>
63_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#editable-install
.md
<Tip warning={true}> You must keep the `transformers` folder if you want to keep using the library. </Tip> Now you can easily update your clone to the latest version of 🤗 Transformers with the following command: ```bash cd ~/transformers/ git pull ``` Your Python environment will find the `main` version of 🤗 Transformers on the next run.
63_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#install-with-conda
.md
Install from the conda channel `conda-forge`: ```bash conda install conda-forge::transformers ```
63_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#cache-setup
.md
Pretrained models are downloaded and locally cached at: `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is given by `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory: 1. Shell environment variable (default): `HF_HUB_CACHE` or `TRANSFORMERS_CACHE`.
63_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#cache-setup
.md
1. Shell environment variable (default): `HF_HUB_CACHE` or `TRANSFORMERS_CACHE`. 2. Shell environment variable: `HF_HOME`. 3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`. <Tip> 🤗 Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`. </Tip>
63_6_1
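As a quick illustration of the priority order above, you can point the cache somewhere else before importing the library (the path below is an example, not a recommendation):

```python
import os

# Highest priority first: HF_HUB_CACHE / TRANSFORMERS_CACHE, then HF_HOME, then XDG_CACHE_HOME.
os.environ["HF_HUB_CACHE"] = "/path/to/my/hf-cache"  # example path

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # files land in the custom cache
```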
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#offline-mode
.md
Run 🤗 Transformers in a firewalled or offline environment with locally cached files by setting the environment variable `HF_HUB_OFFLINE=1`. <Tip> Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow with the environment variable `HF_DATASETS_OFFLINE=1`. </Tip> ```bash HF_DATASETS_OFFLINE=1 HF_HUB_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ... ```
63_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#offline-mode
.md
``` This script should run without hanging or waiting to timeout because it won't attempt to download the model from the Hub. You can also bypass loading a model from the Hub in each [`~PreTrainedModel.from_pretrained`] call with the `local_files_only` parameter. When set to `True`, only local files are loaded: ```py from transformers import T5Model
63_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#offline-mode
.md
model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True) ```
63_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#fetch-models-and-tokenizers-to-use-offline
.md
Another option for using 🤗 Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this: * Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking on the ↓ icon. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)
63_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#fetch-models-and-tokenizers-to-use-offline
.md
![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow: 1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
63_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#fetch-models-and-tokenizers-to-use-offline
.md
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./your/path/bigscience_t0") >>> model.save_pretrained("./your/path/bigscience_t0") ``` 3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory: ```py
63_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#fetch-models-and-tokenizers-to-use-offline
.md
3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0") >>> model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0") ``` * Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library: 1. Install the `huggingface_hub` library in your virtual environment: ```bash
63_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#fetch-models-and-tokenizers-to-use-offline
.md
1. Install the `huggingface_hub` library in your virtual environment: ```bash python -m pip install huggingface_hub ``` 2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path. For example, the following command downloads the `config.json` file from the [T0](https://huggingface.co/bigscience/T0_3B) model to your desired path: ```py >>> from huggingface_hub import hf_hub_download
63_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#fetch-models-and-tokenizers-to-use-offline
.md
>>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0") ``` Once your file is downloaded and locally cached, specify its local path to load and use it: ```py >>> from transformers import AutoConfig
63_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#fetch-models-and-tokenizers-to-use-offline
.md
>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> See the [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) section for more details on downloading files stored on the Hub. </Tip>
63_8_6
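If you prefer to fetch an entire repository rather than individual files, `huggingface_hub` also provides `snapshot_download`; a hedged sketch (reusing the T0 example from above) looks like this:

```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForSeq2SeqLM

# While online: download every file of the repository ahead of time.
local_dir = snapshot_download(repo_id="bigscience/T0_3B")

# Later, offline: load purely from the locally cached files.
model = AutoModelForSeq2SeqLM.from_pretrained(local_dir, local_files_only=True)
```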
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#troubleshooting
.md
See below for some of the more common installation issues and how to resolve them.
63_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#unsupported-python-version
.md
Ensure you are using Python 3.9 or later. Run the command below to check your Python version. ``` python --version ```
63_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#missing-dependencies
.md
Install all required dependencies by running the following command. Ensure you’re in the project directory before executing the command. ``` pip install -r requirements.txt ```
63_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/installation.md
https://huggingface.co/docs/transformers/en/installation/#windows-specific
.md
If you encounter issues on Windows, you may need to activate Developer Mode. Navigate to Windows Settings > For Developers > Developer Mode. Alternatively, create and activate a virtual environment as shown below. ``` python -m venv env .\env\Scripts\activate ```
63_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
64_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
64_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#agents-supercharged---multi-agents-external-tools-and-more
.md
[[open-in-colab]]
64_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#what-is-an-agent
.md
> [!TIP] > If you're new to `transformers.agents`, make sure to first read the main [agents documentation](./agents). In this page we're going to highlight several advanced uses of `transformers.agents`.
64_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#multi-agents
.md
Multi-agent systems were introduced in Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155). The term simply means having several agents working together to solve your task instead of only one.
64_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#multi-agents
.md
It simply means having several agents working together to solve your task instead of only one. It empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows them to achieve efficient specialization. You can easily build hierarchical multi-agent systems with `transformers.agents`.
64_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#multi-agents
.md
You can easily build hierarchical multi-agent systems with `transformers.agents`. To do so, encapsulate the agent in a [`ManagedAgent`] object. This object needs the arguments `agent`, `name`, and `description`, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools. Here's an example of making an agent that manages a specific web search agent using our [`DuckDuckGoSearchTool`]: ```py
64_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#multi-agents
.md
Here's an example of making an agent that manages a specific web search agent using our [`DuckDuckGoSearchTool`]: ```py from transformers.agents import ReactCodeAgent, HfApiEngine, DuckDuckGoSearchTool, ManagedAgent
64_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#multi-agents
.md
llm_engine = HfApiEngine() web_agent = ReactCodeAgent(tools=[DuckDuckGoSearchTool()], llm_engine=llm_engine) managed_web_agent = ManagedAgent( agent=web_agent, name="web_search", description="Runs web searches for you. Give it your query as an argument." ) manager_agent = ReactCodeAgent( tools=[], llm_engine=llm_engine, managed_agents=[managed_web_agent] )
64_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#multi-agents
.md
manager_agent = ReactCodeAgent( tools=[], llm_engine=llm_engine, managed_agents=[managed_web_agent] ) manager_agent.run("Who is the CEO of Hugging Face?") ``` > [!TIP] > For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).
64_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#directly-define-a-tool-by-subclassing-tool-and-share-it-to-the-hub
.md
Let's take again the tool example from the main documentation, for which we had implemented a `tool` decorator. If you need to add variation, like custom attributes for your tool, you can build your tool following the fine-grained method: build a class that inherits from the [`Tool`] superclass. The custom tool needs:
64_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#directly-define-a-tool-by-subclassing-tool-and-share-it-to-the-hub
.md
The custom tool needs: - An attribute `name`, which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_counter`. - An attribute `description`, which is used to populate the agent's system prompt. - An `inputs` attribute, which is a dictionary with keys `"type"` and `"description"`. It contains information that helps the Python interpreter make educated choices about the input.
64_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#directly-define-a-tool-by-subclassing-tool-and-share-it-to-the-hub
.md
- An `output_type` attribute, which specifies the output type. - A `forward` method which contains the inference code to be executed. The types for both `inputs` and `output_type` should be amongst [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema). ```python from transformers import Tool from huggingface_hub import list_models
64_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#directly-define-a-tool-by-subclassing-tool-and-share-it-to-the-hub
.md
class HFModelDownloadsTool(Tool): name = "model_download_counter" description = """ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint.""" inputs = { "task": { "type": "string", "description": "the task category (such as text-classification, depth-estimation, etc)", } } output_type = "string"
64_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#directly-define-a-tool-by-subclassing-tool-and-share-it-to-the-hub
.md
def forward(self, task: str): model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return model.id ``` Now that the custom `HfModelDownloadsTool` class is ready, you can save it to a file named `model_downloads.py` and import it for use. ```python from model_downloads import HFModelDownloadsTool
64_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#directly-define-a-tool-by-subclassing-tool-and-share-it-to-the-hub
.md
tool = HFModelDownloadsTool() ``` You can also share your custom tool to the Hub by calling [`~Tool.push_to_hub`] on the tool. Make sure you've created a repository for it on the Hub and are using a token with write access. ```python tool.push_to_hub("{your_username}/hf-model-downloads") ``` Load the tool with the [`~Tool.load_tool`] function and pass it to the `tools` parameter in your agent. ```python from transformers import load_tool, CodeAgent
64_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#directly-define-a-tool-by-subclassing-tool-and-share-it-to-the-hub
.md
model_download_tool = load_tool("m-ric/hf-model-downloads") ```
64_4_6
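To actually use the loaded tool, a minimal sketch (the engine choice and the query are assumptions, not prescribed by the docs above) is to pass it to an agent's `tools` list:

```python
from transformers import CodeAgent, HfApiEngine, load_tool

model_download_tool = load_tool("m-ric/hf-model-downloads")

# Hand the tool to an agent; it decides when to call it while answering the query.
agent = CodeAgent(tools=[model_download_tool], llm_engine=HfApiEngine())
agent.run("Which model has the most downloads for the 'text-to-video' task on the Hugging Face Hub?")
```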
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#import-a-space-as-a-tool-
.md
You can directly import a Space from the Hub as a tool using the [`Tool.from_space`] method! You only need to provide the id of the Space on the Hub, its name, and a description that will help your agent understand what the tool does. Under the hood, this will use the [`gradio-client`](https://pypi.org/project/gradio-client/) library to call the Space. For instance, let's import the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) Space from the Hub and use it to generate an image. ```
64_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#import-a-space-as-a-tool-
.md
``` from transformers import Tool
64_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#import-a-space-as-a-tool-
.md
image_generation_tool = Tool.from_space( "black-forest-labs/FLUX.1-dev", name="image_generator", description="Generate an image from a prompt")
64_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#import-a-space-as-a-tool-
.md
image_generation_tool("A sunny beach") ``` And voilà, here's your image! 🏖️ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sunny_beach.webp"> Then you can use this tool just like any other tool. For example, let's improve the prompt `a rabbit wearing a space suit` and generate an image of it. ```python from transformers import ReactCodeAgent agent = ReactCodeAgent(tools=[image_generation_tool])
64_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#import-a-space-as-a-tool-
.md
agent = ReactCodeAgent(tools=[image_generation_tool]) agent.run( "Improve this prompt, then generate an image of it.", prompt='A rabbit wearing a space suit' ) ``` ```text === Agent thoughts: improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background"
64_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#import-a-space-as-a-tool-
.md
Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt. >>> Agent is executing the code below: image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background") final_answer(image) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit_spacesuit_flux.webp"> How cool is this? 🤩
64_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#use-gradio-tools
.md
[gradio-tools](https://github.com/freddyaboulton/gradio-tools) is a powerful library that allows using Hugging Face Spaces as tools. It supports many existing Spaces as well as custom Spaces. Transformers supports `gradio_tools` with the [`Tool.from_gradio`] method. For example, let's use the [`StableDiffusionPromptGeneratorTool`](https://github.com/freddyaboulton/gradio-tools/blob/main/gradio_tools/tools/prompt_generator.py) from the `gradio-tools` toolkit to improve prompts and generate better images.
64_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#use-gradio-tools
.md
Import and instantiate the tool, then pass it to the `Tool.from_gradio` method: ```python from gradio_tools import StableDiffusionPromptGeneratorTool from transformers import Tool, load_tool, CodeAgent
64_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#use-gradio-tools
.md
gradio_prompt_generator_tool = StableDiffusionPromptGeneratorTool() prompt_generator_tool = Tool.from_gradio(gradio_prompt_generator_tool) ``` > [!WARNING] > gradio-tools requires *textual* inputs and outputs even when working with different modalities like image and audio objects. Image and audio inputs and outputs are currently incompatible.
64_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#use-langchain-tools
.md
We love LangChain and think it has a very compelling suite of tools. To import a tool from LangChain, use the `from_langchain()` method. Here is how you can use it to recreate the intro's search result using a LangChain web search tool. This tool will need `pip install google-search-results` to work properly. ```python from langchain.agents import load_tools from transformers import Tool, ReactCodeAgent search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
64_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#use-langchain-tools
.md
search_tool = Tool.from_langchain(load_tools(["serpapi"])[0]) agent = ReactCodeAgent(tools=[search_tool]) agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?") ```
64_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#display-your-agent-run-in-a-cool-gradio-interface
.md
You can leverage `gradio.Chatbot` to display your agent's thoughts using `stream_to_gradio`. Here is an example: ```py import gradio as gr from transformers import ( load_tool, ReactCodeAgent, HfApiEngine, stream_to_gradio, ) # Import tool from Hub image_generation_tool = load_tool("m-ric/text-to-image") llm_engine = HfApiEngine("meta-llama/Meta-Llama-3-70B-Instruct") # Initialize the agent with the image generation tool agent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)
64_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#display-your-agent-run-in-a-cool-gradio-interface
.md
def interact_with_agent(task): messages = [] messages.append(gr.ChatMessage(role="user", content=task)) yield messages for msg in stream_to_gradio(agent, task): messages.append(msg) yield messages + [ gr.ChatMessage(role="assistant", content="⏳ Task not finished yet!") ] yield messages
64_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/agents_advanced.md
https://huggingface.co/docs/transformers/en/agents_advanced/#display-your-agent-run-in-a-cool-gradio-interface
.md
with gr.Blocks() as demo: text_input = gr.Textbox(lines=1, label="Chat Message", value="Make me a picture of the Statue of Liberty.") submit = gr.Button("Run illustrator agent!") chatbot = gr.Chatbot( label="Agent", type="messages", avatar_images=( None, "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png", ), ) submit.click(interact_with_agent, [text_input], [chatbot]) if __name__ == "__main__": demo.launch() ```
64_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/
.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
65_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/
.md
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
65_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#checks-on-a-pull-request
.md
When you open a pull request on 🤗 Transformers, a fair number of checks will be run to make sure the patch you are adding is not breaking anything existing. Those checks are of four types: - regular tests - documentation build - code and documentation style - general repository consistency In this document, we will take a stab at explaining what those various checks are and the reason behind them, as well as how to debug them locally if one of them fails on your PR.
65_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#checks-on-a-pull-request
.md
Note that, ideally, they require you to have a dev install: ```bash pip install transformers[dev] ``` or for an editable install: ```bash pip install -e .[dev] ``` inside the Transformers repo. Since the number of optional dependencies of Transformers has grown a lot, it's possible you don't manage to get all of them. If the dev install fails, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do ```bash pip install transformers[quality]
65_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#checks-on-a-pull-request
.md
```bash pip install transformers[quality] ``` or for an editable install: ```bash pip install -e .[quality] ```
65_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#tests
.md
All the jobs that begin with `ci/circleci: run_tests_` run parts of the Transformers testing suite. Each of those jobs focuses on a part of the library in a certain environment: for instance `ci/circleci: run_tests_pipelines_tf` runs the pipeline tests in an environment where only TensorFlow is installed.
65_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#tests
.md
Note that to avoid running tests when there is no real change in the modules they are testing, only part of the test suite is run each time: a utility is run to determine the differences in the library between before and after the PR (what GitHub shows you in the "Files changed" tab) and picks the tests impacted by that diff. That utility can be run locally with: ```bash python utils/tests_fetcher.py ``` from the root of the Transformers repo. It will:
65_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#tests
.md
```bash python utils/tests_fetcher.py ``` from the root of the Transformers repo. It will: 1. Check for each file in the diff if the changes are in the code or only in comments or docstrings. Only the files with real code changes are kept.
65_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#tests
.md
2. Build an internal map that gives, for each file of the source code of the library, all the files it recursively impacts. Module A is said to impact module B if module B imports module A. For the recursive impact, we need a chain of modules going from module A to module B in which each module imports the previous one. 3. Apply this map on the files gathered in step 1, which gives us the list of model files impacted by the PR.
65_2_3
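As a rough illustration of the recursive-impact idea in step 2 (a toy sketch with made-up module names, not the actual `utils/tests_fetcher.py` logic):

```python
from collections import deque

# Toy import graph: module -> modules it imports (names are hypothetical).
imports = {
    "modeling_bert": ["configuration_bert"],
    "modeling_roberta": ["modeling_bert"],
    "pipelines": ["modeling_roberta"],
}

def impacted_by(changed_module):
    """Return every module that (directly or through a chain) imports the changed module."""
    impacted, queue = set(), deque([changed_module])
    while queue:
        current = queue.popleft()
        for module, deps in imports.items():
            if current in deps and module not in impacted:
                impacted.add(module)
                queue.append(module)
    return impacted

print(impacted_by("configuration_bert"))  # {'modeling_bert', 'modeling_roberta', 'pipelines'}
```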
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#tests
.md
3. Apply this map on the files gathered in step 1, which gives us the list of model files impacted by the PR. 4. Map each of those files to their corresponding test file(s) and get the list of tests to run. When executing the script locally, you should get the results of steps 1, 3, and 4 printed and thus know which tests are run. The script will also create a file named `test_list.txt` which contains the list of tests to run, and you can run them locally with the following command: ```bash
65_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#tests
.md
```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) ``` Just in case anything slipped through the cracks, the full test suite is also run daily.
65_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#documentation-build
.md
The `build_pr_documentation` job builds and generates a preview of the documentation to make sure everything looks okay once your PR is merged. A bot will add a link to preview the documentation in your PR. Any changes you make to the PR are automatically updated in the preview. If the documentation fails to build, click on **Details** next to the failed job to see where things went wrong. Often, the error is as simple as a missing file in the `toctree`.
65_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#documentation-build
.md
If you're interested in building or previewing the documentation locally, take a look at the [`README.md`](https://github.com/huggingface/transformers/tree/main/docs) in the docs folder.
65_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#code-and-documentation-style
.md
Code formatting is applied to all the source files, the examples and the tests using `black` and `ruff`. We also have a custom tool taking care of the formatting of docstrings and `rst` files (`utils/style_doc.py`), as well as the order of the lazy imports performed in the Transformers `__init__.py` files (`utils/custom_init_isort.py`). All of this can be launched by executing ```bash make style ```
65_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#code-and-documentation-style
.md
```bash make style ``` The CI checks that those have been applied inside the `ci/circleci: check_code_quality` check. It also runs `ruff`, which will have a basic look at your code and will complain if it finds an undefined variable, or one that is not used. To run that check locally, use ```bash make quality ``` This can take a lot of time, so to run the same thing on only the files you modified in the current branch, run ```bash make fixup ```
65_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#code-and-documentation-style
.md
```bash make fixup ``` This last command will also run all the additional checks for the repository consistency. Let's have a look at them.
65_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#repository-consistency
.md
This groups together all the tests that make sure your PR leaves the repository in a good state, and is performed by the `ci/circleci: check_repository_consistency` check. You can locally run that check by executing the following: ```bash make repo-consistency ``` This checks that: - All objects added to the init are documented (performed by `utils/check_repo.py`) - All `__init__.py` files have the same content in their two sections (performed by `utils/check_inits.py`)
65_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#repository-consistency
.md
- All `__init__.py` files have the same content in their two sections (performed by `utils/check_inits.py`) - All code identified as a copy from another module is consistent with the original (performed by `utils/check_copies.py`) - All configuration classes have at least one valid checkpoint mentioned in their docstrings (performed by `utils/check_config_docstrings.py`)
65_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#repository-consistency
.md
- All configuration classes only contain attributes that are used in corresponding modeling files (performed by `utils/check_config_attributes.py`) - The translations of the READMEs and the index of the doc have the same model list as the main README (performed by `utils/check_copies.py`) - The auto-generated tables in the documentation are up to date (performed by `utils/check_table.py`)
65_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#repository-consistency
.md
- The auto-generated tables in the documentation are up to date (performed by `utils/check_table.py`) - The library has all objects available even if not all optional dependencies are installed (performed by `utils/check_dummies.py`) - All docstrings properly document the arguments in the signature of the object (performed by `utils/check_docstrings.py`) Should this check fail, the first two items require manual fixing; the last four can be fixed automatically for you by running the command ```bash
65_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#repository-consistency
.md
```bash make fix-copies ``` Additional checks concern PRs that add new models, mainly that: - All models added are in an Auto-mapping (performed by `utils/check_repo.py`) <!-- TODO Sylvain, add a check that makes sure the common tests are implemented.--> - All models are properly tested (performed by `utils/check_repo.py`) <!-- TODO Sylvain, add the following - All models are added to the main README, inside the main doc - All checkpoints used actually exist on the Hub -->
65_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#check-copies
.md
Since the Transformers library is very opinionated with respect to model code, and each model should be fully implemented in a single file without relying on other models, we have added a mechanism that checks whether a copy of the code of a layer of a given model stays consistent with the original. This way, when there is a bug fix, we can see all other impacted models and choose to trickle down the modification or break the copy. <Tip>
65_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#check-copies
.md
<Tip> If a file is a full copy of another file, you should register it in the constant `FULL_COPIES` of `utils/check_copies.py`. </Tip>
65_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#check-copies
.md
</Tip> This mechanism relies on comments of the form `# Copied from xxx`. The `xxx` should contain the whole path to the class or function which is being copied below. For instance, `RobertaSelfOutput` is a direct copy of the `BertSelfOutput` class, so you can see [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289) it has a comment: ```py
65_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#check-copies
.md
```py # Copied from transformers.models.bert.modeling_bert.BertSelfOutput ``` Note that instead of applying this to a whole class, you can apply it to the relevant methods that are copied from. For instance [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598) you can see how `RobertaPreTrainedModel._init_weights` is copied from the same method in `BertPreTrainedModel` with the comment: ```py
65_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#check-copies
.md
```py # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights ```
65_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#check-copies
.md
Sometimes the copy is exactly the same except for names: for instance in `RobertaAttention`, we use `RobertaSelfAttention` instead of `BertSelfAttention` but other than that, the code is exactly the same. This is why `# Copied from` supports simple string replacements with the following syntax: `Copied from xxx with foo->bar`. This means the code is copied with all instances of `foo` being replaced by `bar`. You can see how it is used
65_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pr_checks.md
https://huggingface.co/docs/transformers/en/pr_checks/#check-copies
.md
xxx with foo->bar`. This means the code is copied with all instances of `foo` being replaced by `bar`. You can see how it is used [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L304C1-L304C86) in `RobertaAttention` with the comment:
65_6_6
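As a rough illustration of what a `with foo->bar` replacement amounts to (a hypothetical helper, not the actual `utils/check_copies.py` code):

```python
import re

def apply_replacements(code: str, patterns: str) -> str:
    """Apply 'foo->bar' style replacements, e.g. 'Bert->Roberta,bert->roberta'."""
    for pattern in patterns.split(","):
        old, new = (part.strip() for part in pattern.split("->"))
        code = re.sub(re.escape(old), new, code)
    return code

copied = "class BertSelfAttention(nn.Module):"
print(apply_replacements(copied, "Bert->Roberta"))  # class RobertaSelfAttention(nn.Module):
```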