pass of the model using the original repository. Now you should write an analogous script using the 🤗 Transformers
implementation instead of the original one. It should look as follows:
```python
model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
output = model(input_ids).last_hidden_state
```
It is very likely that the 🤗 Transformers implementation and the original model implementation don't give the exact
same output the very first time or that the forward pass throws an error. Don't be disappointed - it's expected! First,
you should make sure that the forward pass doesn't throw any errors. It often happens that the wrong dimensions are
used leading to a *Dimensionality mismatch* error or that the wrong data type object is used, *e.g.* `torch.long`
instead of `torch.float32`. Don't hesitate to ask the Hugging Face team for help, if you don't manage to solve
certain errors.
The final part to make sure the 🤗 Transformers implementation works correctly is to ensure that the outputs are
equivalent to a precision of `1e-3`. First, you should ensure that the output shapes are identical, *i.e.*
`outputs.shape` should yield the same value for the script of the 🤗 Transformers implementation and the original
implementation. Next, you should make sure that the output values are identical as well. This is one of the most difficult
parts of adding a new model. Common reasons why the outputs are not identical are:
- Some layers were not added, *i.e.* an *activation* layer was not added, or the residual connection was forgotten
- The word embedding matrix was not tied
- The wrong positional embeddings are used because the original implementation uses an offset
- Dropout is applied during the forward pass. To fix this, make sure *model.training is False* and that no dropout
layer is falsely activated during the forward pass, *i.e.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)
The best way to fix the problem is usually to look at the forward pass of the original implementation and the 🤗
Transformers implementation side-by-side and check if there are any differences. Ideally, you should debug/print out
intermediate outputs of both implementations of the forward pass to find the exact position in the network where the 🤗
Transformers implementation shows a different output than the original implementation. First, make sure that the
hard-coded `input_ids` in both scripts are identical. Next, verify that the outputs of the first transformation of
the `input_ids` (usually the word embeddings) are identical. And then work your way up to the very last layer of the
network. At some point, you will notice a difference between the two implementations, which should point you to the bug
in the 🤗 Transformers implementation. From our experience, a simple and efficient way is to add many print statements
in both the original implementation and 🤗 Transformers implementation, at the same positions in the network
respectively, and to successively remove print statements showing the same values for intermediate representations.
When you're confident that both implementations yield the same output, verify the outputs with
`torch.allclose(original_output, output, atol=1e-3)`, you're done with the most difficult part! Congratulations - the
work left to be done should be a cakewalk 😊.
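To make the final check concrete, a small helper along these lines can be used (the function below is an illustrative sketch, not part of 🤗 Transformers; it assumes both outputs are PyTorch tensors produced from the same hard-coded `input_ids`):
```python
import torch


def check_equivalence(original_output: torch.Tensor, output: torch.Tensor, atol: float = 1e-3):
    """Compare the original and 🤗 Transformers outputs as described above."""
    # The shapes must match before comparing values.
    assert original_output.shape == output.shape, (
        f"Shape mismatch: {original_output.shape} vs. {output.shape}"
    )
    # Report how far apart the two implementations are.
    max_diff = (original_output - output).abs().max().item()
    print(f"Maximum absolute difference: {max_diff:.2e}")
    assert torch.allclose(original_output, output, atol=atol), "Outputs are not equivalent"
    print("Both implementations give the same output 🎉")
```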
**8. Adding all necessary model tests**
At this point, you have successfully added a new model. However, it is very much possible that the model does not yet
fully comply with the required design. To make sure the implementation is fully compatible with 🤗 Transformers, all
common tests should pass. The Cookiecutter should have automatically added a test file for your model, probably under
the same `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`. Run this test file to verify that all common
tests pass:
```bash
pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py
```
Having fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that
- a) The community can easily understand your work by looking at specific tests of *brand_new_bert*
- b) Future changes to your model will not break any important feature of the model.
At first, integration tests should be added. Those integration tests essentially do the same as the debugging scripts
you used earlier to implement the model in 🤗 Transformers. A template of those model tests has already been added by the
Cookiecutter, called `BrandNewBertModelIntegrationTests` and only has to be filled out by you. To ensure that those
tests are passing, run
```bash
RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
```
<Tip>
In case you are using Windows, you should replace `RUN_SLOW=1` with `SET RUN_SLOW=1`
</Tip>
Second, all features that are special to *brand_new_bert* should be tested additionally in a separate test under
`BrandNewBertModelTester`/`BrandNewBertModelTest`. This part is often forgotten but is extremely useful in two
ways:
- It helps to transfer the knowledge you have acquired during the model addition to the community by showing how the
special features of *brand_new_bert* should work.
- Future contributors can quickly test changes to the model by running those special tests.
**9. Implement the tokenizer**
Next, we should add the tokenizer of *brand_new_bert*. Usually, the tokenizer is equivalent to or very similar to an
already existing tokenizer of 🤗 Transformers.
It is very important to find/extract the original tokenizer file and to manage to load this file into the 🤗
Transformers' implementation of the tokenizer.
To ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository
that inputs a string and returns the `input_ids`. It could look similar to this (in pseudo-code):
```python
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
```
You might have to take a deeper look again into the original repository to find the correct tokenizer function or you
might even have to make changes to your clone of the original repository to only output the `input_ids`. Having written
a functional tokenization script that uses the original repository, an analogous script for 🤗 Transformers should be
created. It should look similar to this:
```python
from transformers import BrandNewBertTokenizer

input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")
input_ids = tokenizer(input_str).input_ids
```
When both `input_ids` yield the same values, as a final step a tokenizer test file should also be added.
Analogous to the modeling test files of *brand_new_bert*, the tokenization test files of *brand_new_bert* should
contain a couple of hard-coded integration tests.
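Such a hard-coded integration test could look roughly like the sketch below (the class name, tokenizer path, and expected ids are placeholders; the expected ids should be the values produced by the original repository's tokenization script):
```python
import unittest

from transformers import BrandNewBertTokenizer


class BrandNewBertTokenizationIntegrationTest(unittest.TestCase):
    def test_tokenizer_integration(self):
        tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")
        input_str = (
            "This is a long example input string containing special characters .$?-, "
            "numbers 2872 234 12 and words."
        )
        # Replace with the ids produced by the original repository's tokenization script.
        expected_input_ids = [2, 7, 543, 12]  # placeholder values
        self.assertListEqual(tokenizer(input_str).input_ids, expected_input_ids)
```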
**10. Run End-to-end integration tests**
Having added the tokenizer, you should also add a couple of end-to-end integration tests using both the model and the
tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in 🤗 Transformers.
Such a test should show, on a meaningful text-to-text sample, that the 🤗 Transformers implementation works as expected.
A meaningful text-to-text sample can
include *e.g.* a source-to-target-translation pair, an article-to-summary pair, a question-to-answer pair, etc… If none
of the ported checkpoints has been fine-tuned on a downstream task it is enough to simply rely on the model tests. In a
final step to ensure that the model is fully functional, it is advised that you also run all tests on GPU. It can
happen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which in such a
test would show in an error. In case you have no access to a GPU, the Hugging Face team can take care of running those
tests for you.
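A sketch of what such an end-to-end test could look like is shown below (checkpoint name, input, and expected values are placeholders; `slow` and `torch_device` come from `transformers.testing_utils`, and moving both model and inputs to `torch_device` is what surfaces missing `.to(self.device)` calls when the test runs on GPU):
```python
import unittest

import torch

from transformers import BrandNewBertModel, BrandNewBertTokenizer
from transformers.testing_utils import slow, torch_device


class BrandNewBertEndToEndTest(unittest.TestCase):
    @slow
    def test_inference_end_to_end(self):
        # Checkpoint name and expected values are placeholders.
        model = BrandNewBertModel.from_pretrained("author/brand_new_bert").to(torch_device)
        tokenizer = BrandNewBertTokenizer.from_pretrained("author/brand_new_bert")

        inputs = tokenizer("Hello, my dog is cute", return_tensors="pt").to(torch_device)
        with torch.no_grad():
            outputs = model(**inputs)

        # Compare a small slice of the output against values recorded from the original implementation.
        expected_slice = torch.tensor([[0.1, -0.2, 0.3]], device=torch_device)  # placeholder values
        torch.testing.assert_close(outputs.last_hidden_state[0, :1, :3], expected_slice, atol=1e-3, rtol=1e-3)
```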
**11. Add Docstring**
Now, all the necessary functionality for *brand_new_bert* is added - you're almost done! The only thing left to add is
a nice docstring and a doc page. The Cookiecutter should have added a template file called
`docs/source/model_doc/brand_new_bert.md` that you should fill out. Users of your model will usually first look at
this page before using your model. Hence, the documentation must be understandable and concise. It is very useful for
the community to add some *Tips* to show how the model should be used. Don't hesitate to ping the Hugging Face team
regarding the docstrings.
Next, make sure that the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is
correct and includes all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format [here](writing-documentation). It is always good to remind oneself that documentation should
be treated at least as carefully as the code in 🤗 Transformers since the documentation is usually the first contact
point of the community with the model.
**Code refactor**
Great, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potential
incorrect code style by running:
```bash
make style
```
and verify that your coding style passes the quality check:
```bash
make quality
```
There are a couple of other very strict design tests in 🤗 Transformers that might still be failing, which shows up in
the tests of your pull request. This is often because of some missing information in the docstring or some incorrect
naming. The Hugging Face team will surely help you if you're stuck here.
Lastly, it is always a good idea to refactor one's code after having ensured that the code works correctly. With all
tests passing, now it's a good time to go over the added code again and do some refactoring.
You have now finished the coding part, congratulations! 🎉 You are Awesome! 😎
**12. Upload the models to the model hub**
In this final part, you should convert and upload all checkpoints to the model hub and add a model card for each
uploaded model checkpoint. You can get familiar with the hub functionalities by reading our [Model sharing and uploading Page](model_sharing). You should work alongside the Hugging Face team here to decide on a fitting name for each
checkpoint and to get the required access rights to be able to upload the model under the author's organization of
*brand_new_bert*. The `push_to_hub` method, present in all models in `transformers`, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below:
```python
brand_new_bert.push_to_hub("brand_new_bert")
# Uncomment the following line to push to an organization.
# brand_new_bert.push_to_hub("<organization>/brand_new_bert")
```
It is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the
specific characteristics of this particular checkpoint, *e.g.* On which dataset was the checkpoint
pretrained or fine-tuned? On which downstream task should the model be used? Also include some code on how to
correctly use the model.
**13. (Optional) Add notebook**
It is very helpful to add a notebook that showcases in-detail how *brand_new_bert* can be used for inference and/or
fine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community.
**14. Submit your finished PR**
You're done programming now and can move to the last step, which is getting your PR merged into main. Usually, the
Hugging Face team should have helped you already at this point, but it is worth taking some time to give your finished
PR a nice description and optionally add comments to your code if you want to point out certain design choices to your
reviewer.
## Share your work
Now, it's time to get some credit from the community for your work! Having completed a model addition is a major
contribution to Transformers and the whole NLP community. Your code and the ported pre-trained models will certainly be
used by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share
your achievements with the community.
**You have made another model that is super easy to access for everyone in the community! 🤯**
## Model additions and their timeline: When is a model added to transformers?
We aim for `transformers` to have support for new model architectures and checkpoints as early as possible:
availability can range from day-0 (and hour-0) releases for some models, to a few days/weeks for others.
This availability usually depends on the model contributors, as well as on how excited the community is about the
architecture.
We can split the model architecture possibilities in four sections:
- Day-0 integration
- Same-week integration
- Post-release integration
- Hub-first release
Let's dive into each of these and see how we (the transformers team) can help you contribute your architecture and get
your architecture to be very easily used by all members of the community.
### Day-0 integration
For a day-0 integration to work, we'll usually want to work hand-in-hand with you directly. In order to keep your
architecture private until your checkpoints and release are ready, we'll work together in a private fork of
transformers.
If you plan on having a transformers-first release, this is a great option: we run CI ahead of time, ensure the
documentation is clear, and we aim to optimize your model as much as possible (providing quantization, optimizing it
with Flash-Attention/SDPA, optimizing the KV cache, etc).
We can also lend you a hand in adding the model, reviewing it early, and help you make sure the `transformers`
API works as expected!
If this is the path you wish to go with, we ask that you reach out in advance, especially if the architecture is
particularly novel (at least a few days, but a few weeks will enable the absolute best integration). In order to reach
out, please contact transformers@huggingface.co 🤗.
### Same-week integration
A same-week integration usually happens when model authors do not reach out, but we see significant community
requests.
In order to specify you'd like for us to integrate a specific model, we'll redirect you to our
[issue tracker](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&projects=&template=new-model-addition.yml)
where you can request a specific model.
The more activity on the issue, the faster/more likely we are to integrate the model!
### Post-release integration
A post-release integration usually happens when there has not been sufficient activity/requests to warrant a same-week
integration, or that we lack the sufficient bandwidth to integrate it.
We very gladly welcome community contributions in those instances; more than half of the library was contributed
by contributors external to Hugging Face. If this is something that is interesting to you, we recommend that you look
at our [open issues tagged with "New model"](https://github.com/huggingface/transformers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+model%22).
We recommend you try your hand at a heavily requested model as this will multiply the impact of your contribution.
We'll be there to help you in case that's your first contribution 🤗.
### Code-on-Hub release
Finally, transformers has a "remote-code" possibility, in which contributions are not made within the toolkit, but on
the Hub. This can be particularly interesting for groups that are using `transformers` as a backbone for their project,
but don't have the bandwidth to contribute the model to transformers directly.
If the model is very successful, we'll very likely end up integrating it in `transformers` in the end - as this
provides better documentation, CI, maintenance, and optimizations - but this remains a great way to make your model
accessible day-0 with minimal friction.
This guide is a great starting point for a Hub-first release: [Custom models](./custom_models)
# Using pipelines for a webserver
<Tip>
Creating an inference engine is a complex topic, and the "best" solution
will most likely depend on your problem space. Are you on CPU or GPU? Do
you want the lowest latency, the highest throughput, support for
many models, or just highly optimize 1 specific model?
There are many ways to tackle this topic, so what we are going to present is a good default
to get started which may not necessarily be the most optimal solution for you.
</Tip>
The key thing to understand is that we can use an iterator, just like you would [on a
dataset](pipeline_tutorial#using-pipelines-on-a-dataset), since a webserver is basically a system that waits for requests and
treats them as they come in.
Usually webservers are multiplexed (multithreaded, async, etc..) to handle various
requests concurrently. Pipelines on the other hand (and mostly the underlying models)
are not really great for parallelism; they take up a lot of RAM, so it's best to give them all the available resources when they are running, as they are compute-intensive jobs.
We are going to solve that by having the webserver handle the light load of receiving
and sending requests, and having a single thread handling the actual work.
This example is going to use `starlette`. The actual framework is not really
important, but you might have to tune or change the code if you are using another
one to achieve the same effect.
Create `server.py`:
```py
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from transformers import pipeline
import asyncio


async def homepage(request):
    payload = await request.body()
    string = payload.decode("utf-8")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)


async def server_loop(q):
    pipe = pipeline(model="google-bert/bert-base-uncased")
    while True:
        (string, response_q) = await q.get()
        out = pipe(string)
        await response_q.put(out)


app = Starlette(
    routes=[
        Route("/", homepage, methods=["POST"]),
    ],
)


@app.on_event("startup")
async def startup_event():
    q = asyncio.Queue()
    app.model_queue = q
    asyncio.create_task(server_loop(q))
```
Now you can start it with:
```bash
uvicorn server:app
```
And you can query it:
```bash
curl -X POST -d "test [MASK]" http://localhost:8000/
#[{"score":0.7742936015129089,"token":1012,"token_str":".","sequence":"test."},...]
```
And there you go, now you have a good idea of how to create a webserver!
What is really important is that we load the model only **once**, so there are no copies
of the model on the webserver. This way, no unnecessary RAM is being used.
Then the queuing mechanism allows you to do fancy stuff like maybe accumulating a few
items before inferring to use dynamic batching:
<Tip warning={true}>
The code sample below is intentionally written like pseudo-code for readability.
Do not run this without checking if it makes sense for your system resources!
</Tip>
```py
(string, rq) = await q.get()
strings = []
queues = []
while True:
    try:
        (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001)  # 1ms
    except asyncio.exceptions.TimeoutError:
        break
    strings.append(string)
    queues.append(rq)
outs = pipe(strings, batch_size=len(strings))
for rq, out in zip(queues, outs):
    await rq.put(out)
```
Again, the proposed code is optimized for readability, not for being the best code.
First of all, there's no batch size limit which is usually not a
great idea. Next, the timeout is reset on every queue fetch, meaning you could
wait much more than 1ms before running the inference (delaying the first request
by that much).
It would be better to have a single 1ms deadline.
This will always wait for 1ms even if the queue is empty, which might not be the
best since you probably want to start doing inference if there's nothing in the queue.
But maybe it does make sense if batching is really crucial for your use case.
Again, there's really no one best solution.
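One possible way to address both caveats - a batch size cap and a single overall deadline that only starts once the first item has arrived - is sketched below (the helper name and defaults are illustrative, not part of the documented example):
```py
import asyncio


async def gather_batch(q: asyncio.Queue, max_batch_size: int = 32, deadline: float = 0.001):
    """Collect up to `max_batch_size` items, waiting at most `deadline` seconds in total."""
    # Wait as long as needed for the first item so we never busy-wait on an empty queue.
    (string, rq) = await q.get()
    strings, queues = [string], [rq]
    loop = asyncio.get_running_loop()
    end = loop.time() + deadline
    while len(strings) < max_batch_size:
        remaining = end - loop.time()
        if remaining <= 0:
            break
        try:
            (string, rq) = await asyncio.wait_for(q.get(), timeout=remaining)
        except asyncio.TimeoutError:
            break
        strings.append(string)
        queues.append(rq)
    return strings, queues
```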
## Error checking
There's a lot that can go wrong in production: out of memory, out of space,
loading the model might fail, the query might be wrong, the query might be
correct but still fail to run because of a model misconfiguration, and so on.
Generally, it's good if the server outputs the errors to the user, so
adding a lot of `try..except` statements to show those errors is a good
idea. But keep in mind it may also be a security risk to reveal all those errors depending
on your security context.
## Circuit breaking
Webservers usually look better when they do circuit breaking. It means they
return proper errors when they're overloaded instead of just waiting for the query indefinitely. Return a 503 error instead of waiting for a super long time or a 504 after a long time.
This is relatively easy to implement in the proposed code since there is a single queue.
Looking at the queue size is a basic way to start returning errors before your
webserver fails under load.
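A sketch of what this could look like for the `homepage` handler above (the queue-size threshold is arbitrary and should be tuned to your hardware):
```py
import asyncio

from starlette.responses import JSONResponse

MAX_QUEUE_SIZE = 32  # arbitrary threshold, tune it to your setup


async def homepage(request):
    if request.app.model_queue.qsize() > MAX_QUEUE_SIZE:
        # Shed load early instead of letting requests pile up behind the model.
        return JSONResponse({"error": "server overloaded"}, status_code=503)
    payload = await request.body()
    string = payload.decode("utf-8")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)
```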
## Blocking the main thread
Currently PyTorch is not async aware, and computation will block the main
thread while running. That means it would be better if PyTorch was forced to run
on its own thread/process. This wasn't done here because the code is a lot more
complex (mostly because threads and async and queues don't play nice together).
But ultimately it does the same thing.
This would be important if the inference of single items were long (> 1s) because
in this case, it means every query during inference would have to wait for 1s before
even receiving an error.
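If you do want to keep the event loop responsive, one possible adaptation of the `server_loop` above is to push the blocking call into a worker thread, for example with `asyncio.to_thread` (Python 3.9+); this is a sketch, not the approach used in the example above:
```py
import asyncio

from transformers import pipeline


async def server_loop(q):
    pipe = pipeline(model="google-bert/bert-base-uncased")
    while True:
        (string, response_q) = await q.get()
        # Run the blocking forward pass in a worker thread so the event loop
        # stays responsive while PyTorch is busy.
        out = await asyncio.to_thread(pipe, string)
        await response_q.put(out)
```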
## Dynamic batching
In general, batching is not necessarily an improvement over passing 1 item at
a time (see [batching details](./main_classes/pipelines#pipeline-batching) for more information). But it can be very effective
when used in the correct setting. In the API, there is no dynamic
batching by default (too much opportunity for a slowdown). But for BLOOM inference -
which is a very large model - dynamic batching is **essential** to provide a decent experience for everyone.
# Summary of the tokenizers
[[open-in-colab]]
On this page, we will have a closer look at tokenization.
<Youtube id="VFp38yj8h3A"/>
As we saw in [the preprocessing tutorial](preprocessing), tokenizing a text is splitting it into words or
subwords, which then are converted to ids through a look-up table. Converting words or subwords to ids is
straightforward, so in this summary, we will focus on splitting a text into words or subwords (i.e. tokenizing a text).
More specifically, we will look at the three main types of tokenizers used in 🤗 Transformers: [Byte-Pair Encoding
(BPE)](#byte-pair-encoding), [WordPiece](#wordpiece), and [SentencePiece](#sentencepiece), and show examples
of which tokenizer type is used by which model.
Note that on each model page, you can look at the documentation of the associated tokenizer to know which tokenizer
type was used by the pretrained model. For instance, if we look at [`BertTokenizer`], we can see
that the model uses [WordPiece](#wordpiece).
## Introduction
Splitting a text into smaller chunks is a task that is harder than it looks, and there are multiple ways of doing so.
For instance, let's look at the sentence `"Don't you love 🤗 Transformers? We sure do."`
<Youtube id="nhJxYji1aho"/>
A simple way of tokenizing this text is to split it by spaces, which would give:
```
["Don't", "you", "love", "🤗", "Transformers?", "We", "sure", "do."]
```
This is a sensible first step, but if we look at the tokens `"Transformers?"` and `"do."`, we notice that the
punctuation is attached to the words `"Transformer"` and `"do"`, which is suboptimal. We should take the
punctuation into account so that a model does not have to learn a different representation of a word and every possible
punctuation symbol that could follow it, which would explode the number of representations the model has to learn.
Taking punctuation into account, tokenizing our exemplary text would give:
```
["Don", "'", "t", "you", "love", "🤗", "Transformers", "?", "We", "sure", "do", "."]
```
Better. However, it is disadvantageous how the tokenization dealt with the word `"Don't"`. `"Don't"` stands for
`"do not"`, so it would be better tokenized as `["Do", "n't"]`. This is where things start getting complicated, and
part of the reason each model has its own tokenizer type. Depending on the rules we apply for tokenizing a text, a
different tokenized output is generated for the same text. A pretrained model only performs properly if you feed it an
input that was tokenized with the same rules that were used to tokenize its training data.
[spaCy](https://spacy.io/) and [Moses](http://www.statmt.org/moses/?n=Development.GetStarted) are two popular
rule-based tokenizers. Applying them on our example, *spaCy* and *Moses* would output something like:
```
["Do", "n't", "you", "love", "🤗", "Transformers", "?", "We", "sure", "do", "."]
```
As can be seen, space and punctuation tokenization, as well as rule-based tokenization, is used here. Space and
punctuation tokenization and rule-based tokenization are both examples of word tokenization, which is loosely defined
as splitting sentences into words. While it's the most intuitive way to split texts into smaller chunks, this
tokenization method can lead to problems for massive text corpora. In this case, space and punctuation tokenization
usually generates a very big vocabulary (the set of all unique words and tokens used). *E.g.*, [Transformer XL](model_doc/transfo-xl) uses space and punctuation tokenization, resulting in a vocabulary size of 267,735!
Such a big vocabulary size forces the model to have an enormous embedding matrix as the input and output layer, which
causes both an increased memory and time complexity. In general, transformers models rarely have a vocabulary size
greater than 50,000, especially if they are pretrained only on a single language.
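If you are curious, you can check the vocabulary size of a pretrained tokenizer directly (checkpoint names as used on the Hugging Face Hub):
```py
>>> from transformers import AutoTokenizer

>>> AutoTokenizer.from_pretrained("google-bert/bert-base-uncased").vocab_size
30522
>>> AutoTokenizer.from_pretrained("openai-community/gpt2").vocab_size
50257
```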
So if simple space and punctuation tokenization is unsatisfactory, why not simply tokenize on characters?
<Youtube id="ssLq_EK2jLE"/>
While character tokenization is very simple and would greatly reduce memory and time complexity it makes it much harder
for the model to learn meaningful input representations. *E.g.* learning a meaningful context-independent
representation for the letter `"t"` is much harder than learning a context-independent representation for the word
`"today"`. Therefore, character tokenization is often accompanied by a loss of performance. So to get the best of
both worlds, transformers models use a hybrid between word-level and character-level tokenization called **subword**
tokenization.
## Subword tokenization
<Youtube id="zHvTiHr506c"/>
Subword tokenization algorithms rely on the principle that frequently used words should not be split into smaller
subwords, but rare words should be decomposed into meaningful subwords. For instance `"annoyingly"` might be
considered a rare word and could be decomposed into `"annoying"` and `"ly"`. Both `"annoying"` and `"ly"` as
stand-alone subwords would appear more frequently while at the same time the meaning of `"annoyingly"` is kept by the
composite meaning of `"annoying"` and `"ly"`. This is especially useful in agglutinative languages such as Turkish,
where you can form (almost) arbitrarily long complex words by stringing together subwords.
Subword tokenization allows the model to have a reasonable vocabulary size while being able to learn meaningful
context-independent representations. In addition, subword tokenization enables the model to process words it has never
seen before, by decomposing them into known subwords. For instance, the [`~transformers.BertTokenizer`] tokenizes
`"I have a new GPU!"` as follows:
```py
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> tokenizer.tokenize("I have a new GPU!")
["i", "have", "a", "new", "gp", "##u", "!"]
```
Because we are considering the uncased model, the sentence was lowercased first. We can see that the words `["i", "have", "a", "new"]` are present in the tokenizer's vocabulary, but the word `"gpu"` is not. Consequently, the
tokenizer splits `"gpu"` into known subwords: `["gp", "##u"]`. `"##"` means that the rest of the token should
be attached to the previous one, without space (for decoding or reversal of the tokenization).
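The same tokenizer can undo this when converting tokens back to a string, which shows how the `"##"` markers are resolved (a quick check, reusing the checkpoint from the example above):
```py
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> tokenizer.convert_tokens_to_string(["i", "have", "a", "new", "gp", "##u", "!"])
'i have a new gpu !'
```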
As another example, [`~transformers.XLNetTokenizer`] tokenizes our previously exemplary text as follows:
```py
>>> from transformers import XLNetTokenizer

>>> tokenizer = XLNetTokenizer.from_pretrained("xlnet/xlnet-base-cased")
>>> tokenizer.tokenize("Don't you love 🤗 Transformers? We sure do.")
["▁Don", "'", "t", "▁you", "▁love", "▁", "🤗", "▁", "Transform", "ers", "?", "▁We", "▁sure", "▁do", "."]
```
We'll get back to the meaning of those `"▁"` when we look at [SentencePiece](#sentencepiece). As one can see,
the rare word `"Transformers"` has been split into the more frequent subwords `"Transform"` and `"ers"`.
Let's now look at how the different subword tokenization algorithms work. Note that all of those tokenization
algorithms rely on some form of training which is usually done on the corpus the corresponding model will be trained
on.
<a id='byte-pair-encoding'></a>
## Byte-Pair Encoding (BPE)
Byte-Pair Encoding (BPE) was introduced in [Neural Machine Translation of Rare Words with Subword Units (Sennrich et
al., 2015)](https://arxiv.org/abs/1508.07909). BPE relies on a pre-tokenizer that splits the training data into
words. Pretokenization can be as simple as space tokenization, e.g. [GPT-2](model_doc/gpt2), [RoBERTa](model_doc/roberta). More advanced pre-tokenization includes rule-based tokenization, e.g. [XLM](model_doc/xlm),
[FlauBERT](model_doc/flaubert) which uses Moses for most languages, or [GPT](model_doc/openai-gpt) which uses
spaCy and ftfy, to count the frequency of each word in the training corpus.
After pre-tokenization, a set of unique words has been created and the frequency with which each word occurred in the
training data has been determined. Next, BPE creates a base vocabulary consisting of all symbols that occur in the set
of unique words and learns merge rules to form a new symbol from two symbols of the base vocabulary. It does so until
the vocabulary has attained the desired vocabulary size. Note that the desired vocabulary size is a hyperparameter to
define before training the tokenizer.
As an example, let's assume that after pre-tokenization, the following set of words including their frequency has been
determined:
```
("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5)
```
Consequently, the base vocabulary is `["b", "g", "h", "n", "p", "s", "u"]`. Splitting all words into symbols of the
base vocabulary, we obtain:
```
("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5)
```
BPE then counts the frequency of each possible symbol pair and picks the symbol pair that occurs most frequently. In
the example above `"h"` followed by `"u"` is present _10 + 5 = 15_ times (10 times in the 10 occurrences of
`"hug"`, 5 times in the 5 occurrences of `"hugs"`). However, the most frequent symbol pair is `"u"` followed by
`"g"`, occurring _10 + 5 + 5 = 20_ times in total. Thus, the first merge rule the tokenizer learns is to group all
`"u"` symbols followed by a `"g"` symbol together. Next, `"ug"` is added to the vocabulary. The set of words then
becomes
```
("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5)
```
BPE then identifies the next most common symbol pair. It's `"u"` followed by `"n"`, which occurs 16 times. `"u"`,
`"n"` is merged to `"un"` and added to the vocabulary. The next most frequent symbol pair is `"h"` followed by
`"ug"`, occurring 15 times. Again the pair is merged and `"hug"` can be added to the vocabulary.
At this stage, the vocabulary is `["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"]` and our set of unique words
is represented as
```
("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5)
```
Assuming that the Byte-Pair Encoding training would stop at this point, the learned merge rules would then be applied
to new words (as long as those new words do not include symbols that were not in the base vocabulary). For instance,
the word `"bug"` would be tokenized to `["b", "ug"]` but `"mug"` would be tokenized as `["<unk>", "ug"]` since
the symbol `"m"` is not in the base vocabulary. In general, single letters such as `"m"` are not replaced by the
`"<unk>"` symbol because the training data usually includes at least one occurrence of each letter, but it is likely
to happen for very special characters like emojis.
As mentioned earlier, the vocabulary size, *i.e.* the base vocabulary size + the number of merges, is a hyperparameter
to choose. For instance [GPT](model_doc/openai-gpt) has a vocabulary size of 40,478 since they have 478 base characters
and chose to stop training after 40,000 merges.
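To make the procedure above concrete, here is a minimal, purely educational sketch of the merge-learning loop on this toy corpus (this is not how the 🤗 Tokenizers library implements BPE, just an illustration of the counting and merging described above):
```py
from collections import Counter

# The toy word frequencies from above, with every word split into base symbols.
corpus = {
    ("h", "u", "g"): 10,
    ("p", "u", "g"): 5,
    ("p", "u", "n"): 12,
    ("b", "u", "n"): 4,
    ("h", "u", "g", "s"): 5,
}


def most_frequent_pair(corpus):
    # Count how often each adjacent symbol pair occurs, weighted by word frequency.
    pair_freqs = Counter()
    for word, freq in corpus.items():
        for pair in zip(word, word[1:]):
            pair_freqs[pair] += freq
    return pair_freqs.most_common(1)[0][0]


def merge_pair(corpus, pair):
    # Rewrite every word, replacing the chosen pair by its concatenation.
    merged = {}
    for word, freq in corpus.items():
        new_word, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                new_word.append(word[i] + word[i + 1])
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        merged[tuple(new_word)] = freq
    return merged


merges = []
for _ in range(3):
    pair = most_frequent_pair(corpus)
    merges.append(pair)
    corpus = merge_pair(corpus, pair)

print(merges)  # [('u', 'g'), ('u', 'n'), ('h', 'ug')]
```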
### Byte-level BPE
A base vocabulary that includes all possible base characters can be quite large if *e.g.* all unicode characters are
considered as base characters. To have a better base vocabulary, [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) uses bytes
as the base vocabulary, which is a clever trick to force the base vocabulary to be of size 256 while ensuring that
every base character is included in the vocabulary. With some additional rules to deal with punctuation, GPT-2's
tokenizer can tokenize every text without the need for the `<unk>` symbol. [GPT-2](model_doc/gpt2) has a vocabulary
size of 50,257, which corresponds to the 256 bytes base tokens, a special end-of-text token and the symbols learned
with 50,000 merges.
<a id='wordpiece'></a>
## WordPiece
WordPiece is the subword tokenization algorithm used for [BERT](model_doc/bert), [DistilBERT](model_doc/distilbert), and [Electra](model_doc/electra). The algorithm was outlined in [Japanese and Korean
Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and is very similar to
BPE. WordPiece first initializes the vocabulary to include every character present in the training data and
progressively learns a given number of merge rules. In contrast to BPE, WordPiece does not choose the most frequent
symbol pair, but the one that maximizes the likelihood of the training data once added to the vocabulary.
So what does this mean exactly? Referring to the previous example, maximizing the likelihood of the training data is
equivalent to finding the symbol pair, whose probability divided by the probabilities of its first symbol followed by
its second symbol is the greatest among all symbol pairs. *E.g.* `"u"`, followed by `"g"` would have only been
merged if the probability of `"ug"` divided by `"u"`, `"g"` would have been greater than for any other symbol pair.
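Schematically, using training-corpus frequencies, this selection criterion can be written as (an equivalent formulation of the sentence above, not a quote from the paper):

$$\text{score}(a, b) = \frac{\text{freq}(ab)}{\text{freq}(a) \times \text{freq}(b)}$$

WordPiece merges the pair with the highest score, i.e. it prioritizes pairs whose parts are rare on their own relative to how often they occur together.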