## General overview of 🤗 Transformers
The library is not only a tool for inference, but also the very product that we want to improve. Hence, when adding a model, the user is not only the person who will use your model, but also everybody who will read, try to understand, and possibly tweak your code. With this in mind, let's go a bit deeper into the general library design.
## Overview of models
To successfully add a model, it is important to understand the interaction between your model and its config, [`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we will call the model to be added to 🤗 Transformers `BrandNewBert`. Let's take a look:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/>
As you can see, we do make use of inheritance in 🤗 Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel`, which in turn inherits from [`PreTrainedModel`], and that's it. As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`].
The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All of the other important functionalities, such as `BrandNewBertModel.forward`, should be completely defined in the new `modeling_brand_new_bert.py` script.
Next, we want to make sure that a model with a specific head layer, such as `BrandNewBertForMaskedLM`, does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel` as a component that can be called in its forward pass, to keep the level of abstraction low.
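For illustration, such a head model might look like the following minimal sketch (the attribute names and the simplified `forward` are assumptions for illustration, not generated code):

```python
from torch import nn


class BrandNewBertForMaskedLM(BrandNewBertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        # The base model is a component (an attribute) rather than a parent class.
        self.brand_new_bert = BrandNewBertModel(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)

    def forward(self, input_ids):
        # The base model is called inside the forward pass instead of being inherited from.
        hidden_states = self.brand_new_bert(input_ids)
        return self.lm_head(hidden_states)
```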
Every new model requires a configuration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in [`PreTrainedModel`], and thus can be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`:

```python
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.config  # model has access to its config
```
Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling the model's [`~PreTrainedModel.save_pretrained`] will automatically call the config's [`~PretrainedConfig.save_pretrained`], so that both model and configuration are saved.
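As a quick illustration of this coupling, saving and reloading the model above round-trips both files (a minimal sketch; the directory name is arbitrary):

```python
model.save_pretrained("./brand_new_bert_checkpoint")
# The directory now contains both pytorch_model.bin and config.json.
reloaded = BrandNewBertModel.from_pretrained("./brand_new_bert_checkpoint")
print(reloaded.config)  # the configuration was restored alongside the weights
```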
## Code style
When coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our own regarding how code should be written :-)

1. The forward pass of your model should be fully written in the modeling file while being fully independent of other models in the library. If you want to reuse a block from another model, copy the code and paste it with a
`# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) for a good example and [there](pr_checks#check-copies) for more documentation on Copied from).
2. The code should be fully understandable, even by a non-native English speaker. This means you should pick descriptive variable names and avoid abbreviations. As an example, `activation` is preferred to `act`. One-letter variable names are strongly discouraged unless they are an index in a for loop.
3. More generally, we prefer longer, explicit code to short, magical code.
4. Avoid subclassing `nn.Sequential` in PyTorch; subclass `nn.Module` and write the forward pass instead, so that anyone using your code can quickly debug it by adding print statements or breakpoints (see the sketch after this list).
5. Your function signatures should be type-annotated. For the rest, good variable names are way more readable and understandable than type annotations.
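To make points 4 and 5 concrete, here is a minimal sketch of a block written in this style (the layer itself is a made-up example, not taken from any model):

```python
import torch
from torch import nn


class FeedForwardBlock(nn.Module):
    """A small feed-forward block written as an explicit nn.Module."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.dense_in = nn.Linear(hidden_size, intermediate_size)
        self.activation = nn.GELU()
        self.dense_out = nn.Linear(intermediate_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Each step is a separate statement, so it is easy to add a print
        # statement or breakpoint between any two of them while debugging.
        hidden_states = self.dense_in(hidden_states)
        hidden_states = self.activation(hidden_states)
        hidden_states = self.dense_out(hidden_states)
        return hidden_states
```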
## Overview of tokenizers
Not quite ready yet :-( This section will be added soon!
## Step-by-step recipe to add a model to 🤗 Transformers
Everyone has different preferences for how to port a model, so it can be very helpful to take a look at summaries of how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model:

1. [Porting GPT2 Model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
2. [Porting WMT19 MT Model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)

From experience, we can tell you that the most important things to keep in mind when adding a model are:

- Don't reinvent the wheel! Most parts of the code you will add for the new 🤗 Transformers model already exist
somewhere in 🤗 Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Note that it might very well happen that your model's tokenizer is based on one model implementation, and your model's modeling code on another one. *E.g.* FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM.
- It's more of an engineering challenge than a scientific challenge. You should spend more time creating an efficient debugging environment rather than trying to understand all theoretical aspects of the model in the paper.
- Ask for help when you're stuck! Models are the core component of 🤗 Transformers, so we at Hugging Face are more than happy to help you at every step of adding your model. Don't hesitate to ask if you notice you are not making progress.
In the following, we try to give you a general recipe that we found most useful when porting a model to 🤗 Transformers. The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do list:

☐ (Optional) Understood the model's theoretical aspects<br>
☐ Prepared 🤗 Transformers dev environment<br>
☐ Set up debugging environment of the original repository<br>
☐ Created script that successfully runs the `forward()` pass using the original repository and checkpoint<br>
☐ Successfully added the model skeleton to 🤗 Transformers<br>
☐ Successfully converted original checkpoint to 🤗 Transformers checkpoint<br>
☐ Successfully ran `forward()` pass in 🤗 Transformers that gives identical output to the original checkpoint<br>
☐ Finished model tests in 🤗 Transformers<br>
☐ Successfully added tokenizer in 🤗 Transformers<br>
☐ Ran end-to-end integration tests<br>
☐ Finished docs<br>
☐ Uploaded model weights to the Hub<br>
☐ Submitted the pull request<br>
☐ (Optional) Added a demo notebook

To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`. However, if you prefer to understand the theoretical aspects of the model *on-the-job*, then it is totally fine to directly dive into `BrandNewBert`'s code base.
This option might suit you better if your engineering skills are better than your theoretical skills, if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming much more than reading scientific papers.
### 1. (Optional) Theoretical aspects of BrandNewBert
You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, that is fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to
effectively re-implement the model in 🤗 Transformers. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely:

- What type of model is *brand_new_bert*? BERT-like encoder-only model? GPT2-like decoder-only model? BART-like encoder-decoder model? Look at the [model_summary](model_summary) if you're not familiar with the differences between those.
- What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.,* summarization?
- What is the novel feature of the model that makes it different from BERT/GPT-2/BART?
- Which of the already existing [🤗 Transformers models](https://huggingface.co/transformers/#contents) is most similar to *brand_new_bert*?
- What type of tokenizer is used? A SentencePiece tokenizer? A WordPiece tokenizer? Is it the same tokenizer as used for BERT or BART?

After you feel like you have gotten a good overview of the architecture of the model, you might want to write to the Hugging Face team with any questions you might have. This might include questions regarding the model's architecture, its attention layer, etc. We will be more than happy to help you.
### 2. Next prepare your environment
1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:

   ```bash
   git clone https://github.com/[your Github handle]/transformers.git
   cd transformers
   git remote add upstream https://github.com/huggingface/transformers.git
   ```
3. Set up a development environment, for instance by running the following commands:

   ```bash
   python -m venv .env
   source .env/bin/activate
   pip install -e ".[dev]"
   ```

   Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with
   (PyTorch, TensorFlow and/or Flax) and then do:

   ```bash
   pip install -e ".[quality]"
   ```

   which should be enough for most use cases. You can then return to the parent directory:

   ```bash
   cd ..
   ```

4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the instructions on https://pytorch.org/get-started/locally/.
   **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.

5. To port *brand_new_bert*, you will also need access to its original repository:

   ```bash
   git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
   cd brand_new_bert
   pip install -e .
   ```

Now you have set up a development environment to port *brand_new_bert* to 🤗 Transformers.
### 3.-4. Run a pretrained checkpoint using the original repository
At first, you will work on the original *brand_new_bert* repository. Often, the original implementation is very “researchy”, meaning that documentation might be lacking and the code can be difficult to understand. But this should be exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to *make people stand on the shoulders of giants*, which translates here very well into taking a working model and rewriting it to make
it as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement models into 🤗 Transformers - trying to make complex new NLP technology accessible to **everybody**. You should thereby start by diving into the original repository.
Successfully running the official pretrained model in the original repository is often **the most difficult** step. From our experience, it is very important to spend some time getting familiar with the original code base. You need to figure out the following:

- Where to find the pretrained weights?
- How to load the pretrained weights into the corresponding model?
- How to run the tokenizer independently from the model?
- Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions.
- Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes, *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, *e.g.* *self-attention*, *cross-attention*...?
- How can you debug the model in the original environment of the repo? Do you have to add *print* statements, can you work with an interactive debugger like *ipdb*, or should you use an efficient IDE to debug the model, like PyCharm?

It is very important that before you start the porting process, you can **efficiently** debug code in the original repository!
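For example, if the original code is plain Python/PyTorch, a quick way to stop execution at an interesting point is to temporarily insert a breakpoint into the original source - a minimal, purely illustrative sketch:

```python
import torch

def attention_block(hidden_states: torch.Tensor) -> torch.Tensor:
    # Temporarily inserted for debugging: `breakpoint()` drops into pdb here,
    # so you can inspect tensors interactively; with ipdb installed you can
    # use `import ipdb; ipdb.set_trace()` instead.
    breakpoint()
    return torch.nn.functional.softmax(hidden_states, dim=-1)

attention_block(torch.randn(1, 4, 4))
```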
Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or even a pull request, in the original repository. The maintainers of this repository are most likely very happy about someone looking into their code!

At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original model.
We strongly advise against setting up a costly GPU environment; simply work on a CPU, both when starting to dive into the original repository and when starting to write the 🤗 Transformers implementation of the model. Only at the very end, when the model has already been successfully ported to 🤗 Transformers, should you verify that the model also works as expected on GPU.
In general, there are two possible debugging environments for running the original model:

- [Jupyter notebooks](https://jupyter.org/) / [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb)
- Local Python scripts

Jupyter notebooks have the advantage that they allow for cell-by-cell execution, which can be helpful to better split
logical components from one another and to have faster debugging cycles, as intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend you work with them.
The obvious disadvantage of Jupyter notebooks is that if you are not used to working with them, you will have to spend some time adjusting to the new programming environment, and you might not be able to use your known debugging tools anymore, like `ipdb`.

For each code base, a good first step is always to load a **small** pretrained checkpoint and to be able to reproduce a single forward pass using a dummy integer vector of input IDs as input. Such a script could look like this (in pseudocode):
```python
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = [0, 4, 5, 2, 3, 7, 9]  # vector of input ids
original_output = model.predict(input_ids)
```

Next, regarding the debugging strategy, there are generally a few to choose from:
- Decompose the original model into many small testable components and run a forward pass on each of those for verification
- Decompose the original model only into the original *tokenizer* and the original *model*, run a forward pass on those, and use intermediate print statements or breakpoints for verification

Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code base.
If the original code base allows you to decompose the model into smaller sub-components, *e.g.* if the original code base can easily be run in eager mode, it is usually worth the effort to do so. There are some important advantages to taking the more difficult road in the beginning:
- at a later stage, when comparing the original model to the Hugging Face implementation, you can verify automatically for each component individually that the corresponding component of the 🤗 Transformers implementation matches, instead of relying on visual comparison via print statements
- it can give you some rope to decompose the big problem of porting a model into smaller problems of just porting individual components and thus structure your work better
- separating the model into logically meaningful components will help you to get a better overview of the model's design and thus to better understand the model
- at a later stage, those component-by-component tests help you to ensure that no regression occurs as you continue changing your code
[Lysandre's](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) integration checks for ELECTRA give a nice example of how this can be done.

However, if the original code base is very complex or only allows intermediate components to be run in a compiled mode,
it might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good example is [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library, which is very complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one often relies on verifying print statements.
No matter which strategy you choose, the recommended procedure is often the same: you should start by debugging the starting layers first and the ending layers last. It is recommended that you retrieve the output, either by print statements or sub-component functions, of the following layers in the following order:

1. Retrieve the input IDs passed to the model
2. Retrieve the word embeddings
3. Retrieve the input of the first Transformer layer
4. Retrieve the output of the first Transformer layer
5. Retrieve the output of the following n - 1 Transformer layers
6. Retrieve the output of the whole BrandNewBert Model

Input IDs should thereby consist of an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`. The outputs of the following layers often consist of multi-dimensional float arrays and can look like this:

```
[[
 [-0.1465, -0.6501,  0.1993,  ...,  0.1451,  0.3430,  0.6024],
 [-0.4417, -0.5920,  0.3450,  ..., -0.3062,  0.6182,  0.7132],
 [-0.5009, -0.7122,  0.4548,  ..., -0.3662,  0.6091,  0.7648],
 ...,
 [-0.5613, -0.6332,  0.4324,  ..., -0.3792,  0.7372,  0.9288],
 [-0.5416, -0.6345,  0.4180,  ..., -0.3564,  0.6992,  0.9191],
 [-0.5334, -0.6403,  0.4271,  ..., -0.3339,  0.6533,  0.8694]]],
```
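One practical way to retrieve such intermediate outputs in PyTorch, for instance, is to register forward hooks on the sub-modules you care about - a minimal sketch, where the stand-in model and module names are assumptions for illustration:

```python
import torch
from torch import nn

# Stand-in for the original model; in practice you would load the real one.
model = nn.Sequential(nn.Embedding(30, 8), nn.Linear(8, 8), nn.Linear(8, 8))

captured = {}

def save_output(name):
    def hook(module, inputs, output):
        # Store a detached copy of this layer's output for later comparison.
        captured[name] = output.detach()
    return hook

# Register one hook per layer whose output you want to inspect.
for name, module in model.named_children():
    module.register_forward_hook(save_output(name))

input_ids = torch.tensor([0, 4, 4, 3, 2, 4, 1, 7, 19])
model(input_ids)
print({name: tensor.shape for name, tensor in captured.items()})
```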
We expect that every model added to 🤗 Transformers passes a couple of integration tests, meaning that the original model and the reimplemented version in 🤗 Transformers have to give the exact same output up to a precision of 0.001!
Since it is normal that the exact same model written in different libraries can give a slightly different output depending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives nearly the same output; the outputs have to be almost identical.
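In practice, such a comparison can be done, for example, with `torch.allclose` - a minimal sketch, assuming both outputs have already been converted to PyTorch tensors (the values here are made up):

```python
import torch

# Hypothetical outputs from the original model and the 🤗 Transformers port.
original_output = torch.tensor([0.1993, 0.3430, 0.6024])
ported_output = torch.tensor([0.1994, 0.3431, 0.6023])

# The two implementations must agree within an absolute tolerance of 1e-3.
assert torch.allclose(original_output, ported_output, atol=1e-3), "Outputs differ by more than 1e-3"
print("Maximum absolute difference:", (original_output - ported_output).abs().max())
```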
Therefore, you will certainly compare the intermediate outputs of the 🤗 Transformers version multiple times against the intermediate outputs of the original implementation of *brand_new_bert*, in which case an **efficient** debugging environment of the original repository is absolutely important. Here is some advice to make your debugging environment as efficient as possible.
- Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should probably take the time to write a longer script that decomposes the original model into smaller sub-components to retrieve intermediate values. Is the original repository written in TensorFlow 1? Then you might have to rely on
TensorFlow print operations like [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) to output intermediate values. Is the original repository written in Jax? Then make sure that the model is **not jitted** when running the forward pass, *e.g.* check out [this link](https://github.com/google/jax/issues/196).
- Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle becomes. It is not efficient if your pretrained model is so big that your forward pass takes more than 10 seconds. In case only very large checkpoints are available, it might make more sense to create a dummy model in the new environment with randomly initialized weights and save those weights for comparison with the 🤗 Transformers version
of your model.
- Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to find the function in the original repository that **only** calls a single forward pass, *i.e.* the function that is often called `predict`, `evaluate`, `forward` or `__call__`. You don't want to debug a function that calls `forward` multiple times, *e.g.* to generate text, like `autoregressive_sample` or `generate`.
- Try to separate the tokenization from the model's *forward* pass. If the original repository shows examples where you have to input a string, then try to find out where in the forward call the string input is changed to input ids, and start from this point. This might mean that you have to write a small script yourself or change the original code so that you can directly input the ids instead of an input string.
- Make sure that the model in your debugging setup is **not** in training mode, which often causes the model to yield random outputs due to the multiple dropout layers in the model. Make sure that the forward pass in your debugging environment is **deterministic** so that the dropout layers are not used. Or use *transformers.utils.set_seed* if the old and new implementations are in the same framework (see the sketch below).
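For a PyTorch model, making the forward pass deterministic typically looks like the following minimal sketch (the model here is just a stand-in):

```python
import torch
from torch import nn
from transformers import set_seed  # referenced above as transformers.utils.set_seed

model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(0.1), nn.Linear(8, 8))

set_seed(42)  # seed all RNGs, useful when comparing two implementations
model.eval()  # switch off training mode so that dropout layers are no-ops

with torch.no_grad():  # no gradients are needed for debugging forward passes
    output = model(torch.ones(1, 8))
print(output)
```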
The following section gives you more specific details/tips on how you can do this for *brand_new_bert*.
### 5.-14. Port BrandNewBert to 🤗 Transformers
Next, you can finally start adding new code to 🤗 Transformers. Go into the clone of your 🤗 Transformers fork:

```bash
cd transformers
```

In the special case that you are adding a model whose architecture exactly matches the model architecture of an existing model, you only have to add a conversion script as described in [this section](#write-a-conversion-script). In this case, you can just re-use the whole model architecture of the already existing model.
Otherwise, let's start generating a new model. We recommend using the following script to add a model starting from an existing model:

```bash
transformers-cli add-new-model-like
```

You will be prompted with a questionnaire to fill in the basic information of your model.
**Open a Pull Request on the main huggingface/transformers repo**

Before starting to adapt the automatically generated code, now is the time to open a “Work in progress (WIP)” pull request, *e.g.* “[WIP] Add *brand_new_bert*”, in 🤗 Transformers so that you and the Hugging Face team can work side-by-side on integrating the model into 🤗 Transformers. You should do the following:
1. Create a branch with a descriptive name from your main branch:

   ```bash
   git checkout -b add_brand_new_bert
   ```

2. Commit the automatically generated code:

   ```bash
   git add .
   git commit
   ```

3. Fetch and rebase to current main:

   ```bash
   git fetch upstream
   git rebase upstream/main
   ```

4. Push the changes to your account using:

   ```bash
   git push -u origin a-descriptive-name-for-my-changes
   ```
5. Once you are satisfied, go to the webpage of your fork on GitHub. Click on “Pull request”. Make sure to add the GitHub handles of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified of future changes.
6. Change the PR into a draft by clicking on “Convert to draft” on the right of the GitHub pull request web page.

In the following, whenever you have made some progress, don't forget to commit your work and push it to your account so that it shows in the pull request. Additionally, you should make sure to update your work with the current main from
time to time by doing:

```bash
git fetch upstream
git merge upstream/main
```

In general, all questions you might have regarding the model or your implementation should be asked in your PR and discussed/solved in the PR. This way, the Hugging Face team will always be notified when you are committing new code or if you have a question. It is often very helpful to point the Hugging Face team to your added code so that the Hugging
Face team can efficiently understand your problem or question.

To do so, you can go to the “Files changed” tab, where you see all of your changes, go to a line regarding which you want to ask a question, and click on the “+” symbol to add a comment. Whenever a question or problem has been solved, you can click on the “Resolve” button of the created comment.
In the same way, the Hugging Face team will open comments when reviewing your code. We recommend asking most questions on GitHub on your PR. For some very general questions that are not very useful for the public, feel free to ping the Hugging Face team by Slack or email.

**5. Adapt the generated model's code for brand_new_bert**
At first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be found in the generated files `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`.

Now you can finally start coding :). The generated code in
`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will either have the same architecture as BERT if it's an encoder-only model or BART if it's an encoder-decoder model. At this point, you should remind yourself of what you've learned in the beginning about the theoretical aspects of the model: *How is the model different from BERT or
BART?* Implement those changes, which often means changing the *self-attention* layer, the order of the normalization layer, etc. Again, it is often useful to look at the similar architecture of already existing models in Transformers to get a better feeling of how your model should be implemented.

**Note** that at this point, you don't have to be very sure that your code is fully correct or clean. Rather, it is
advised to add a first *unclean*, copy-pasted version of the original code to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel like all the necessary code is added. From our experience, it is much more efficient to quickly add a first version of the required code and
improve/correct the code iteratively with the conversion script as described in the next section. The only thing that has to work at this point is that you can instantiate the 🤗 Transformers implementation of *brand_new_bert*, *i.e.* the following command should work:

```python
from transformers import BrandNewBertModel, BrandNewBertConfig

model = BrandNewBertModel(BrandNewBertConfig())
```

The above command will create a model according to the default parameters as defined in `BrandNewBertConfig()` with random weights, thus making sure that the `init()` methods of all components work.

Note that all random initialization should happen in the `_init_weights` method of your `BrandNewBertPreTrainedModel`
class. It should initialize all leaf modules depending on the variables of the config. Here is an example with the BERT `_init_weights` method:

```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
```
You can have some more custom schemes if you need a special initialization for some modules. For instance, in `Wav2Vec2ForPreTraining`, the last two linear layers need to have the initialization of the regular PyTorch `nn.Linear`, but all the other ones should use an initialization as above. This is coded like this:

```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, Wav2Vec2ForPreTraining):
        module.project_hid.reset_parameters()
"""Initialize the weights""" if isinstance(module, Wav2Vec2ForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() ```
The `_is_hf_initialized` flag is internally used to make sure we only initialize a submodule once. By setting it to `True` for `module.project_q` and `module.project_hid`, we make sure the custom initialization we did is not overridden later on: the `_init_weights` function won't be applied to them.

**6. Write a conversion script**
Next, you should write a conversion script that lets you convert the checkpoint you used to debug *brand_new_bert* in the original repository to a checkpoint compatible with your just-created 🤗 Transformers implementation of *brand_new_bert*.
It is not advised to write the conversion script from scratch, but rather to look through already existing conversion scripts in 🤗 Transformers for one that has been used to convert a similar model that was written in the same framework as *brand_new_bert*. Usually, it is enough to copy an already existing conversion script and slightly adapt it for your use case. Don't hesitate to ask the Hugging Face team to point you to a similar, already existing
conversion script for your model.

- If you are porting a model from TensorFlow to PyTorch, a good starting point might be BERT's conversion script [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)
- If you are porting a model from PyTorch to PyTorch, a good starting point might be BART's conversion script [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)

In the following, we'll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the
name of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy model in PyTorch, called `SimpleModel`, as follows:

```python
from torch import nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.intermediate = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)
```

Now we can create an instance of this model definition, which will fill all weights `dense`, `intermediate`, and `layer_norm` with random weights. We can print the model to see its architecture:

```python
model = SimpleModel()
print(model)
```

This will print out the following:

```
SimpleModel(
  (dense): Linear(in_features=10, out_features=10, bias=True)
  (intermediate): Linear(in_features=10, out_features=10, bias=True)
  (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True)
)
```

We can see that the layer names are defined by the name of the class attribute in PyTorch. You can print out the weight
values of a specific layer:

```python
print(model.dense.weight.data)
```

to see that the weights were randomly initialized:

```
tensor([[-0.0818,  0.2207, -0.0749, -0.0030,  0.0045, -0.1569, -0.1598,  0.0212, -0.2077,  0.2157],
        [ 0.1044,  0.0201,  0.0990,  0.2482,  0.3116,  0.2509,  0.2866, -0.2190,  0.2166, -0.0212],
        [-0.2000,  0.1107, -0.1999, -0.3119,  0.1559,  0.0993,  0.1776, -0.1950, -0.1023, -0.0447],
        [-0.0888, -0.1092,  0.2281,  0.0336,  0.1817, -0.0115,  0.2096,  0.1415, -0.1876, -0.2467],
        [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465,  0.2577,  0.0402],
        [ 0.1502,  0.2465,  0.2566,  0.0693,  0.2352, -0.0530,  0.1859, -0.0604,  0.2132,  0.1680],
        [ 0.1733, -0.2407, -0.1721,  0.1484,  0.0358, -0.0633, -0.0721, -0.0090,  0.2707, -0.2509],
        [-0.1173,  0.1561,  0.2945,  0.0595, -0.1996,  0.2988, -0.0802,  0.0407,  0.1829, -0.1568],
        [-0.1164, -0.2228, -0.0403,  0.0428,  0.1339,  0.0047,  0.1967,  0.2923,  0.0333, -0.0536],
        [-0.1492, -0.1616,  0.1057,  0.1950, -0.2807, -0.2710, -0.1586,  0.0739,  0.2220,  0.2358]])
```

In the conversion script, you should fill those randomly initialized weights with the exact weights of the corresponding layer in the checkpoint. *E.g.*:

```python
# retrieve matching layer weights, e.g. by
```python
# retrieve matching layer weights, e.g. by
# recursive algorithm
layer_name = "dense"
pretrained_weight = array_of_dense_layer
model_pointer = getattr(model, "dense")
model_pointer.weight.data = torch.from_numpy(pretrained_weight)
```

While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding pretrained checkpoint weight match exactly in both **shape and name**. To do so, it is **necessary** to add assert statements for the shape and to print out the names of the checkpoint weights, *e.g.* with statements like:

```python
assert (
    model_pointer.weight.shape == pretrained_weight.shape
), f"Pointer shape of random weight {model_pointer.weight.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"
```

Besides, you should also print out the names of both weights to make sure they match, *e.g.*:

```python
logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
```
If either the shape or the name doesn't match, you probably assigned the wrong checkpoint weight to a randomly initialized layer of the 🤗 Transformers implementation. An incorrect shape is most likely due to config parameters in `BrandNewBertConfig()` that do not exactly match those used for the checkpoint you want to convert.
However, it could also be that PyTorch's implementation of a layer requires the weight to be transposed beforehand (for example, TensorFlow's `Dense` kernel is stored as the transpose of PyTorch's `nn.Linear` weight). Finally, you should also check that **all** required weights are initialized, and print out all checkpoint weights that were not used for initialization to make sure the model is correctly converted.
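One simple way to do this bookkeeping is with two sets of weight names. This is only a sketch and assumes the checkpoint was loaded into a hypothetical dict `original_state` whose keys have already been mapped to 🤗 Transformers names:

```python
# All weight names the 🤗 Transformers model expects
expected_keys = {name for name, _ in model.named_parameters()}
# All weight names found in the original checkpoint (hypothetical dict)
checkpoint_keys = set(original_state.keys())

# Weights that were never assigned stay randomly initialized and must be fixed
print("Missing weights:", expected_keys - checkpoint_keys)
# Checkpoint weights that were never used usually indicate a naming bug
print("Unused checkpoint weights:", checkpoint_keys - expected_keys)
```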
It is completely normal that the conversion trials fail with either a wrong shape statement or a wrong name assignment. This is most likely because you either used incorrect parameters in `BrandNewBertConfig()`, have a wrong architecture in the 🤗 Transformers implementation, have a bug in the `init()` functions of one of the components of the 🤗 Transformers implementation,
or need to transpose one of the checkpoint weights. This step should be iterated with the previous step until all weights of the checkpoint are correctly loaded in the Transformers model.
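The transposition case deserves a concrete example: TensorFlow's `Dense` kernels have shape `(in_features, out_features)`, while PyTorch's `nn.Linear` weights have shape `(out_features, in_features)`. A minimal sketch of the fix, reusing the `model_pointer` and `pretrained_weight` variables from above (note that for square matrices the shape check is inconclusive, and you must rely on the original framework's layout convention instead):

```python
import numpy as np
import torch

# If the checkpoint weight only matches after transposing, the original
# framework most likely stores the kernel in the opposite layout.
if model_pointer.weight.shape == pretrained_weight.T.shape:
    pretrained_weight = np.ascontiguousarray(pretrained_weight.T)
model_pointer.weight.data = torch.from_numpy(pretrained_weight)
```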
Having correctly loaded the checkpoint into the 🤗 Transformers implementation, you can then save the model under a folder of your choice, `/path/to/converted/checkpoint/folder`, which should then contain both a `pytorch_model.bin` file and a `config.json` file:

```python
model.save_pretrained("/path/to/converted/checkpoint/folder")
```
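Before moving on, it is worth checking that the serialized model loads back cleanly, since `from_pretrained` logs a warning for any weight that could not be matched:

```python
from transformers import BrandNewBertModel

# Reload the converted checkpoint as a quick round-trip sanity check;
# watch the logs for warnings about missing or unexpected weights.
model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
```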
**7. Implement the forward pass**

Having managed to correctly load the pretrained weights into the 🤗 Transformers implementation, you should now make sure that the forward pass is correctly implemented. In [Get familiar with the original repository](#3-4-run-a-pretrained-checkpoint-using-the-original-repository), you have already created a script that runs a forward pass of the model using the original repository. Now you should write an analogous script using the 🤗 Transformers