For *RoPE*, keeping the same \\( \theta \\) that was used during training leads to poor results when passing text inputs much longer than those seen during training, *cf.* [Press et al.](https://arxiv.org/abs/2108.12409). However, the community has found a couple of effective tricks that adapt \\( \theta \\), thereby allowing *RoPE* position embeddings to work well for extrapolated text input sequences (see [here](https://github.com/huggingface/transformers/pull/24653)).
> Both RoPE and ALiBi are relative positional embeddings that are *not* learned during training, but instead are based on the following intuitions:
- Positional cues about the text inputs should be given directly to the \\( QK^T \\) matrix of the self-attention layer
- The LLM should be incentivized to learn a constant *relative* distance between positional encodings
- The further text input tokens are from each other, the lower their query-key attention probability should be. Both RoPE and ALiBi lower the query-key probability of tokens far away from each other: RoPE by decreasing the vector product through a larger angle between the query and key vectors, and ALiBi by adding large negative numbers to the vector product (a minimal numeric sketch of this intuition follows below)
In conclusion, LLMs that are intended to be deployed in tasks that require handling large text inputs are better trained with relative positional embeddings, such as RoPE and ALiBi. Also note that even if an LLM with RoPE or ALiBi has been trained only on a fixed length of, say, \\( N_1 = 2048 \\), it can still be used in practice with text inputs much larger than \\( N_1 \\), like \\( N_2 = 8192 > N_1 \\), by extrapolating the positional embeddings.
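As a concrete, hedged example of such \\( \theta \\) adaptation in Transformers: at the time of writing, RoPE-based model configs expose a `rope_scaling` field (introduced in the pull request linked above) that applies linear or dynamic NTK scaling at inference time. A minimal sketch, where the checkpoint name and the scaling factor are assumptions to be adapted to your use case:

```python
from transformers import AutoModelForCausalLM

# Sketch only: pick any RoPE-based checkpoint and a factor matching your target length.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    rope_scaling={"type": "dynamic", "factor": 4.0},  # extrapolate to roughly 4x the training length
)
```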
### 3.2 The key-value cache

Auto-regressive text generation with LLMs works by iteratively putting in an input sequence, sampling the next token, appending the next token to the input sequence, and continuing to do so until the LLM produces a token that signifies that the generation has finished.
Please have a look at [Transformer's Generate Text Tutorial](https://huggingface.co/docs/transformers/llm_tutorial#generate-text) to get a more visual explanation of how auto-regressive generation works.
Let's run a quick code snippet to show how auto-regressive generation works in practice. We will simply take the most likely next token via `torch.argmax`.
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")

for _ in range(5):
  next_logits = model(input_ids)["logits"][:, -1:]
  next_token_id = torch.argmax(next_logits, dim=-1)

  input_ids = torch.cat([input_ids, next_token_id], dim=-1)
  print("shape of input_ids", input_ids.shape)

generated_text = tokenizer.batch_decode(input_ids[:, -5:])
generated_text
```
**Output**:
```
shape of input_ids torch.Size([1, 21])
shape of input_ids torch.Size([1, 22])
shape of input_ids torch.Size([1, 23])
shape of input_ids torch.Size([1, 24])
shape of input_ids torch.Size([1, 25])
[' Here is a Python function']
```
As we can see, at every step the text input grows by the token that was just sampled.
With very few exceptions, LLMs are trained using the [causal language modeling objective](https://huggingface.co/docs/transformers/tasks/language_modeling#causal-language-modeling) and therefore mask the upper triangle of the attention score matrix - this is why in the two diagrams above the attention scores are left blank (*a.k.a.* have 0 probability). For a quick recap on causal language modeling you can refer to the [*Illustrated Self-Attention blog*](https://jalammar.github.io/illustrated-gpt2/#part-2-illustrated-self-attention).
As a consequence, tokens *never* depend on future tokens, more specifically the \\( \mathbf{q}_i \\) vector is never put in relation with any key, value vectors \\( \mathbf{k}_j, \mathbf{v}_j \\) if \\( j > i \\). Instead \\( \mathbf{q}_i \\) only attends to previous key-value vectors \\( \mathbf{k}_{m < i}, \mathbf{v}_{m < i} \text{ , for } m \in \{0, \ldots i - 1\} \\). In order to reduce unnecessary computation, one can therefore cache each layer's key-value vectors for all previous timesteps.
In the following, we will tell the LLM to make use of the key-value cache by retrieving and forwarding it for each forward pass.
In Transformers, we can retrieve the key-value cache by passing the `use_cache` flag to the `forward` call and can then pass it with the current token.
```python
past_key_values = None # past_key_values is the key-value cache
generated_tokens = []
next_token_id = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")

for _ in range(5):
  next_logits, past_key_values = model(next_token_id, past_key_values=past_key_values, use_cache=True).to_tuple()
  next_logits = next_logits[:, -1:]
  next_token_id = torch.argmax(next_logits, dim=-1)

  print("shape of input_ids", next_token_id.shape)
  print("length of key-value cache", len(past_key_values[0][0]))  # past_key_values are of shape [num_layers, 0 for k, 1 for v, batch_size, length, hidden_dim]
  generated_tokens.append(next_token_id.item())

generated_text = tokenizer.batch_decode(generated_tokens)
generated_text
```
**Output**:
```
shape of input_ids torch.Size([1, 1])
length of key-value cache 20
shape of input_ids torch.Size([1, 1])
length of key-value cache 21
shape of input_ids torch.Size([1, 1])
length of key-value cache 22
shape of input_ids torch.Size([1, 1])
length of key-value cache 23
shape of input_ids torch.Size([1, 1])
length of key-value cache 24
[' Here', ' is', ' a', ' Python', ' function']
```
As one can see, when using the key-value cache the text input tokens are *not* increased in length, but remain a single input vector. The length of the key-value cache, on the other hand, is increased by one at every decoding step.
> Making use of the key-value cache means that the \\( \mathbf{QK}^T \\) computation is essentially reduced to \\( \mathbf{q}_c\mathbf{K}^T \\) with \\( \mathbf{q}_c \\) being the query projection of the currently passed input token, which is *always* just a single vector.

Using the key-value cache has two advantages:
- Significant increase in computational efficiency as fewer computations are performed compared to computing the full \\( \mathbf{QK}^T \\) matrix. This leads to an increase in inference speed.
- The maximum required memory is not increased quadratically with the number of generated tokens, but only increases linearly (see the short sketch after this list).
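A tiny back-of-the-envelope sketch of the second point (purely illustrative numbers, per attention head): without the cache, the attention score matrix at step \\( N \\) has \\( N \times N \\) entries, while with the cache only a single \\( 1 \times N \\) row is computed.

```python
# Illustrative only: number of attention scores materialized at decoding step N.
for n in (1_000, 4_000, 16_000):
    full_qk = n * n    # no cache: QK^T is an N x N matrix -> quadratic growth
    cached_qk = 1 * n  # with cache: q_c K^T is a single row -> linear growth
    print(f"N={n:>6}: no cache {full_qk:>12,} scores, with cache {cached_qk:>7,} scores")
```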
> One should *always* make use of the key-value cache as it leads to identical results and a significant speed-up for longer input sequences. Transformers has the key-value cache enabled by default when making use of the text pipeline or the [`generate` method](https://huggingface.co/docs/transformers/main_classes/text_generation). We have an entire guide dedicated to caches [here](./kv_cache).

<Tip warning={true}>
Note that, despite our advice to use key-value caches, your LLM output may be slightly different when you use them. This is a property of the matrix multiplication kernels themselves -- you can read more about it [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535).
</Tip>
#### 3.2.1 Multi-round conversation

The key-value cache is especially useful for applications such as chat where multiple passes of auto-regressive decoding are required. Let's look at an example.
```
User: How many people live in France?
Assistant: Roughly 75 million people live in France
User: And how many are in Germany?
Assistant: Germany has ca. 81 million inhabitants
```
In this chat, the LLM runs auto-regressive decoding twice:
1. The first time, the key-value cache is empty and the input prompt is `"User: How many people live in France?"` and the model auto-regressively generates the text `"Roughly 75 million people live in France"` while increasing the key-value cache at every decoding step.
2. The second time the input prompt is `"User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many in Germany?"`. Thanks to the cache, all key-value vectors for the first two sentences are already computed. Therefore the input prompt only consists of `"User: And how many in Germany?"`. While processing the shortened input prompt, its computed key-value vectors are concatenated to the key-value cache of the first decoding. The second Assistant's answer `"Germany has ca. 81 million inhabitants"` is then auto-regressively generated with the key-value cache consisting of encoded key-value vectors of `"User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many are in Germany?"`.
Two things should be noted here:
1. Keeping all the context is crucial for LLMs deployed in chat so that the LLM understands all the previous context of the conversation. E.g. for the example above the LLM needs to understand that the user refers to the population when asking `"And how many are in Germany"`.
2. The key-value cache is extremely useful for chat as it allows us to continuously grow the encoded chat history instead of having to re-encode the chat history again from scratch (as e.g. would be the case when using an encoder-decoder architecture).
In `transformers`, a `generate` call will return `past_key_values` when `return_dict_in_generate=True` is passed, in addition to the default `use_cache=True`. Note that it is not yet available through the `pipeline` interface.
```python
# Generation as usual
prompt = system_prompt + "Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here"
model_inputs = tokenizer(prompt, return_tensors='pt')
generation_output = model.generate(**model_inputs, max_new_tokens=60, return_dict_in_generate=True)
decoded_output = tokenizer.batch_decode(generation_output.sequences)[0]

# Piping the returned `past_key_values` to speed up the next conversation round
prompt = decoded_output + "\nQuestion: How can I modify the function above to return Mega bytes instead?\n\nAnswer: Here"
model_inputs = tokenizer(prompt, return_tensors='pt')
generation_output = model.generate(
  **model_inputs,
  past_key_values=generation_output.past_key_values,
  max_new_tokens=60,
  return_dict_in_generate=True
)
tokenizer.batch_decode(generation_output.sequences)[0][len(prompt):]
```
**Output**:
```
is a modified version of the function that returns Mega bytes instead.

def bytes_to_megabytes(bytes):
   return bytes / 1024 / 1024

Answer: The function takes a number of bytes as input and returns the number of
```
Great, no additional time is spent recomputing the same key and values for the attention layer! There is however one catch. While the required peak memory for the \\( \mathbf{QK}^T \\) matrix is significantly reduced, holding the key-value cache in memory can become very memory expensive for long input sequences or multi-turn chat. Remember that the key-value cache needs to store the key-value vectors for all previous input vectors \\( \mathbf{x}_i \text{, for } i \in \{1, \ldots, c - 1\} \\) for all self-attention layers and for all attention heads.
Let's compute the number of float values that need to be stored in the key-value cache for the LLM `bigcode/octocoder` that we used before.
The number of float values amounts to two times the sequence length times the number of attention heads times the attention head dimension and times the number of layers.
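Written out as a formula, this is simply a restatement of the sentence above:

$$ \text{num\_floats} = 2 \times \text{seq\_len} \times \text{num\_layers} \times \text{num\_heads} \times \text{head\_dim} $$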
Computing this for our LLM at a hypothetical input sequence length of 16000 gives:
```python
config = model.config
2 * 16_000 * config.n_layer * config.n_head * config.n_embd // config.n_head  # 2 * seq_len * n_layer * n_head * head_dim
```
**Output**:
```
7864320000
```
Roughly 8 billion float values! Storing 8 billion float values in `float16` precision requires around 15 GB of RAM which is circa half as much as the model weights themselves!
Researchers have proposed two methods that significantly reduce the memory cost of storing the key-value cache, which are explored in the next subsections.
#### 3.2.2 Multi-Query-Attention (MQA)

[Multi-Query-Attention](https://arxiv.org/abs/1911.02150) was proposed in Noam Shazeer's *Fast Transformer Decoding: One Write-Head is All You Need* paper. As the title says, Noam found out that instead of using `n_head` key-value projection weights, one can use a single key-value projection weight pair that is shared across all attention heads, without the model's performance degrading significantly.
> By using a single key-value projection weight pair, the key-value vectors \\( \mathbf{k}_i, \mathbf{v}_i \\) have to be identical across all attention heads, which in turn means that we only need to store 1 key-value projection pair in the cache instead of `n_head` ones.
As most LLMs use between 20 and 100 attention heads, MQA significantly reduces the memory consumption of the key-value cache. For the LLM used in this notebook we could therefore reduce the required memory consumption from 15 GB to less than 400 MB at an input sequence length of 16000.
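A rough sketch of where that number comes from, reusing the config attributes from the earlier cache-size snippet (illustrative only, assuming a single shared key-value head):

```python
config = model.config
head_dim = config.n_embd // config.n_head

# one key-value head pair per attention head vs. a single shared pair (MQA)
mha_cache_floats = 2 * 16_000 * config.n_layer * config.n_head * head_dim
mqa_cache_floats = 2 * 16_000 * config.n_layer * 1 * head_dim

print(mha_cache_floats / mqa_cache_floats)  # the cache shrinks by a factor of n_head
```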
In addition to memory savings, MQA also leads to improved computational efficiency as explained in the following.
In auto-regressive decoding, large key-value vectors need to be reloaded, concatenated with the current key-value vector pair, and then fed into the \\( \mathbf{q}_c\mathbf{K}^T \\) computation at every step. For auto-regressive decoding, the required memory bandwidth for this constant reloading can become a serious time bottleneck. By reducing the size of the key-value vectors, less memory needs to be accessed, thus reducing the memory bandwidth bottleneck. For more detail, please have a look at [Noam's paper](https://arxiv.org/abs/1911.02150).
The important part to understand here is that reducing the number of key-value attention heads to 1 only makes sense if a key-value cache is used. The peak memory consumption of the model for a single forward pass without key-value cache stays unchanged as every attention head still has a unique query vector so that each attention head still has a different \\( \mathbf{QK}^T \\) matrix.

MQA has seen wide adoption by the community and is now used by many of the most popular LLMs:
- [**Falcon**](https://huggingface.co/tiiuae/falcon-40b)
- [**PaLM**](https://arxiv.org/abs/2204.02311)
- [**MPT**](https://huggingface.co/mosaicml/mpt-30b)
- [**BLOOM**](https://huggingface.co/bigscience/bloom)
Also, the checkpoint used in this notebook - `bigcode/octocoder` - makes use of MQA.
#### 3.2.3 Grouped-Query-Attention (GQA)

[Grouped-Query-Attention](https://arxiv.org/abs/2305.13245), as proposed by Ainslie et al. from Google, found that using MQA can often lead to quality degradation compared to using vanilla multi-key-value head projections. The paper argues that more model performance can be kept by less drastically reducing the number of key-value head projection weights: instead of using just a single key-value projection weight, `n < n_head` key-value projection weights should be used. By choosing `n` significantly smaller than `n_head`, such as 2, 4, or 8, almost all of the memory and speed gains from MQA can be kept while sacrificing less model capacity and thus arguably less performance.
Moreover, the authors of GQA found out that existing model checkpoints can be *uptrained* to have a GQA architecture with as little as 5% of the original pre-training compute. While 5% of the original pre-training compute can still be a massive amount, GQA *uptraining* allows existing checkpoints to be useful for longer input sequences.

GQA was only recently proposed which is why there is less adoption at the time of writing this notebook.
The most notable application of GQA is [Llama-v2](https://huggingface.co/meta-llama/Llama-2-70b-hf).
> In conclusion, it is strongly recommended to make use of either GQA or MQA if the LLM is deployed with auto-regressive decoding and is required to handle large input sequences, as is for example the case for chat.
## Conclusion

The research community is constantly coming up with new, nifty ways to speed up inference time for ever-larger LLMs. As an example, one such promising research direction is [speculative decoding](https://arxiv.org/abs/2211.17192) where "easy tokens" are generated by smaller, faster language models and only "hard tokens" are generated by the LLM itself. Going into more detail is out of the scope of this notebook, but you can read more about it in this [nice blog post](https://huggingface.co/blog/assisted-generation).
The reason massive LLMs such as GPT3/4, Llama-2-70b, Claude, and PaLM can run so quickly in chat interfaces such as [Hugging Face Chat](https://huggingface.co/chat/) or ChatGPT is in large part thanks to the above-mentioned improvements in precision, algorithms, and architecture.
Going forward, accelerators such as GPUs and TPUs will only get faster and allow for more memory, but one should nevertheless always make sure to use the best available algorithms and architectures to get the most bang for your buck 🤗
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Contribute to 🤗 Transformers

Everyone is welcome to contribute, and we value everybody's contribution. Code
contributions are not the only way to help the community. Answering questions, helping
others, and improving the documentation are also immensely valuable.
It also helps us if you spread the word! Reference the library in blog posts
about the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply ⭐️ the repository to say thank you.
However you choose to contribute, please be mindful and respect our
[code of conduct](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md).
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**
## Ways to contribute

There are several ways you can contribute to 🤗 Transformers:
* Fix outstanding issues with the existing code.
* Submit issues related to bugs or desired new features.
* Implement new models.
* Contribute to the examples or to the documentation.
If you don't know where to start, there is a special [Good First
Issue](https://github.com/huggingface/transformers/contribute) listing. It will give you a list of
open issues that are beginner-friendly and help you start contributing to open-source. The best way to do that is to open a Pull Request and link it to the issue that you'd like to work on. We try to give priority to opened PRs as we can easily track the progress of the fix, and if the contributor does not have time anymore, someone else can take the PR over.
For something slightly more challenging, you can also take a look at the [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) list. In general though, if you feel like you know what you're doing, go for it and we'll help you get there! 🚀
> All contributions are equally valuable to the community. 🥰
## Fixing outstanding issues

If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](#create-a-pull-request) and open a Pull Request!
## Submitting a bug-related issue or feature request

Do your best to follow these guidelines when submitting a bug-related issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.
### Did you find a bug?

The 🤗 Transformers library is robust and reliable thanks to users who report the problems they encounter.
Before you report an issue, we would really appreciate it if you could **make sure the bug was not
already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you're unsure whether the bug is in your code or the library, please ask in the [forum](https://discuss.huggingface.co/) or on our [discord](https://discord.com/invite/hugging-face-879548962464493619) first. This helps us respond quicker to fixing issues related to the library versus general questions.
> [!TIP]
> We have a [docs bot](https://huggingface.co/spaces/huggingchat/hf-docs-chat), and we highly encourage you to ask all your questions there. There is always a chance your bug can be fixed with a simple flag 👾🔫
Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it:
* Your **OS type and version** and **Python**, **PyTorch** and
**TensorFlow** versions when applicable.
* A short, self-contained, code snippet that allows us to reproduce the bug in
less than 30s.
* The *full* traceback if an exception is raised.
* Attach any other additional information, like screenshots, you think may help.
To get the OS and software versions automatically, run the following command:
```bash
transformers-cli env
```
You can also run the same command from the root of the repository:
```bash
python src/transformers/commands/transformers_cli.py env
```
### Do you want a new feature?

If there is a new feature you'd like to see in 🤗 Transformers, please open an issue and describe:
1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community?
Whatever it is, we'd love to hear about it!
2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we'll be able to help you.
3. Provide a *code snippet* that demonstrates the feature's usage.
4. If the feature is related to a paper, please include a link.
If your issue is well written, we're already 80% of the way there by the time you create it.
We have added [templates](https://github.com/huggingface/transformers/tree/main/templates) to help you get started with your issue.
### Do you want to implement a new model?

New models are constantly released and if you want to implement a new model, please provide the following information:
* A short description of the model and a link to the paper.
* Link to the implementation if it is open-sourced.
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can help you add it to 🤗 Transformers!
We have a technical guide for [how to add a model to 🤗 Transformers](https://huggingface.co/docs/transformers/add_new_model).
### Do you want to add documentation?

We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved, such as typos and any content that is missing, unclear or inaccurate. We'll be happy to make the changes or help you make a contribution if you're interested!
For more details about how to generate, build, and write the documentation, take a look at the documentation [README](https://github.com/huggingface/transformers/tree/main/docs).
## Create a Pull Request

Before writing any code, we strongly advise you to search through the existing PRs or
issues to make sure nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.
You will need basic `git` proficiency to contribute to
π€ Transformers. While `git` is not the easiest tool to use, it has the greatest
manual. Type `git --help` in a shell and enjoy! If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.
You'll need **[Python 3.9](https://github.com/huggingface/transformers/blob/main/setup.py#L449)** or above to contribute to 🤗 Transformers. Follow the steps below to start contributing:
1. Fork the [repository](https://github.com/huggingface/transformers) by
clicking on the **[Fork](https://github.com/huggingface/transformers/fork)** button on the repository's page. This creates a copy of the code
under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote:
```bash
git clone git@github.com:<your Github handle>/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```
3. Create a new branch to hold your development changes:
```bash
git checkout -b a-descriptive-name-for-my-changes
```
🚨 **Do not** work on the `main` branch!
4. Set up a development environment by running the following command in a virtual environment:
```bash
pip install -e ".[dev]"
```
If π€ Transformers was already installed in the virtual environment, remove
it with `pip uninstall transformers` before reinstalling it in editable
mode with the `-e` flag.
Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
failure with this command. If that's the case make sure to install the Deep Learning framework you are working with
(PyTorch, TensorFlow and/or Flax) then do:
```bash
pip install -e ".[quality]"
```
which should be enough for most use cases.
5. Develop the features in your branch.
As you work on your code, you should make sure the test suite
passes. Run the tests impacted by your changes like this:
```bash
pytest tests/<TEST_TO_RUN>.py
```
For more information about tests, check out the
[Testing](https://huggingface.co/docs/transformers/testing) guide.
🤗 Transformers relies on `black` and `ruff` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
```bash
make fixup
```
This target is also optimized to only work with files modified by the PR you're working on.
If you prefer to run the checks one after the other, the following command applies the
style corrections:
```bash
make style
```
🤗 Transformers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality
controls are run by the CI, but you can run the same checks with:
```bash
make quality
```
Finally, we have a lot of scripts to make sure we don't forget to update
some files when adding a new model. You can run these scripts with:
```bash
make repo-consistency
```
To learn more about those checks and how to fix any issues with them, check out the
[Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.
If you're modifying documents under the `docs/source` directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check
make sure you install the documentation builder:
```bash
pip install ".[docs]"
```
Run the following command from the root of the repository:
```bash
doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build
```
This will build the documentation in the `~/tmp/test-build` folder where you can inspect the generated
Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request.
Once you're happy with your changes, add the changed files with `git add` and
record your changes locally with `git commit`:
```bash
git add modified_file.py
git commit
```
Please remember to write [good commit
messages](https://chris.beams.io/posts/git-commit/) to clearly communicate the changes you made!
To keep your copy of the code up to date with the original
repository, rebase your branch on `upstream/branch` *before* you open a pull request or if requested by a maintainer:
```bash
git fetch upstream
git rebase upstream/main
```
Push your changes to your branch:
```bash
git push -u origin a-descriptive-name-for-my-changes
```
If you've already opened a pull request, you'll need to force push with the `--force` flag. Otherwise, if the pull request hasn't been opened yet, you can just push your changes normally.
6. Now you can go to your fork of the repository on GitHub and click on **Pull Request** to open a pull request. Make sure you tick off all the boxes on our [checklist](#pull-request-checklist) below. When you're ready, you can send your changes to the project maintainers for review.
7. It's ok if maintainers request changes, it happens to our core contributors
too! So everyone can see the changes in the pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.
### Pull request checklist

☐ The pull request title should summarize your contribution.<br>
☐ If your pull request addresses an issue, please mention the issue number in the pull
request description to make sure they are linked (and people viewing the issue know you
are working on it).<br>
☐ To indicate a work in progress please prefix the title with `[WIP]`. These are
useful to avoid duplicated work, and to differentiate it from PRs ready to be merged.<br>
☐ Make sure existing tests pass.<br>
☐ If adding a new feature, also add tests for it.<br>
- If you are adding a new model, make sure you use
`ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)` to trigger the common tests.
- If you are adding new `@slow` tests, make sure they pass using
`RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`.
- If you are adding a new tokenizer, write tests and make sure
`RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py` passes.
- CircleCI does not run the slow tests, but GitHub Actions does every night!<br>
☐ All public methods must have informative docstrings (see
[`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py)
for an example).<br>
☐ Due to the rapidly growing repository, don't add any images, videos and other
non-text files that'll significantly weigh down the repository. Instead, use a Hub
repository such as [`hf-internal-testing`](https://huggingface.co/hf-internal-testing)
to host these files and reference them by URL. We recommend placing documentation
related images in the following repository:
[huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
You can open a PR on this dataset repository and ask a Hugging Face member to merge it.
For more information about the checks run on a pull request, take a look at our [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.
### Tests

An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests](https://github.com/huggingface/transformers/tree/main/tests) folder and examples tests in the
[examples](https://github.com/huggingface/transformers/tree/main/examples) folder.
We like `pytest` and `pytest-xdist` because it's faster. From the root of the
repository, specify a *path to a subfolder or a test file* to run the test:
```bash
python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
```
Similarly, for the `examples` directory, specify a *path to a subfolder or test file* to run the test. For example, the following command tests the text classification subfolder in the PyTorch `examples` directory:
```bash
pip install -r examples/xxx/requirements.txt # only needed the first time
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```
In fact, this is actually how our `make test` and `make test-examples` commands are implemented (not including the `pip install`)!
You can also specify a smaller set of tests in order to test only the feature
you're working on.
By default, slow tests are skipped but you can set the `RUN_SLOW` environment variable to
`yes` to run them. This will download many gigabytes of models so make sure you
have enough disk space, a good internet connection or a lot of patience!
<Tip warning={true}>
Remember to specify a *path to a subfolder or a test file* to run the test. Otherwise, you'll run all the tests in the `tests` or `examples` folder, which will take a very long time!
</Tip>
```bash
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```
Like the slow tests, there are other environment variables available which are not enabled by default during testing:
- `RUN_CUSTOM_TOKENIZERS`: Enables tests for custom tokenizers.
- `RUN_PT_FLAX_CROSS_TESTS`: Enables tests for PyTorch + Flax integration.
- `RUN_PT_TF_CROSS_TESTS`: Enables tests for TensorFlow + PyTorch integration.
More environment variables and additional information can be found in the [testing_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/testing_utils.py).
🤗 Transformers uses `pytest` as a test runner only. It doesn't use any
`pytest`-specific features in the test suite itself.
This means `unittest` is fully supported. Here's how to run tests with
`unittest`:
```bash
python -m unittest discover -s tests -t . -v
python -m unittest discover -s examples -t examples -v
```
### Style guide

For documentation strings, 🤗 Transformers follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html).
Check our [documentation writing guide](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification)
for more information.
### Develop on Windows

On Windows (unless you're working in [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) or WSL), you need to configure git to transform Windows `CRLF` line endings to Linux `LF` line endings:
```bash
git config core.autocrlf input
```
One way to run the `make` command on Windows is with MSYS2:
1. [Download MSYS2](https://www.msys2.org/), and we assume it's installed in `C:\msys64`.
2. Open the command line `C:\msys64\msys2.exe` (it should be available from the **Start** menu).
3. Run in the shell: `pacman -Syu` and install `make` with `pacman -S make`.
4. Add `C:\msys64\usr\bin` to your PATH environment variable.
You can now use `make` from any terminal (PowerShell, cmd.exe, etc.)! 🎉