source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#prompt-lookup-decoding | .md | device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device) | 18_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#prompt-lookup-decoding | .md | model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype="auto").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The second law of thermodynamics states that entropy increases with temperature. ']
```
</hfoption>
<hfoption id="sampling"> | 18_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#prompt-lookup-decoding | .md | ```
</hfoption>
<hfoption id="sampling">
For prompt lookup decoding with sampling, add the `do_sample` and `temperature` parameters to the [`~GenerationMixin.generate`] method.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from accelerate.test_utils.testing import get_backend | 18_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#prompt-lookup-decoding | .md | device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device) | 18_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#prompt-lookup-decoding | .md | model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype="auto").to(device)
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3, do_sample=True, temperature=0.7)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
["The second law of thermodynamics states that energy cannot be created nor destroyed. It's not a"]
```
</hfoption>
</hfoptions> | 18_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#attention-optimizations | .md | A known issue with transformer models is that the self-attention mechanism grows quadratically in compute and memory with the number of input tokens. This limitation is only magnified in LLMs, which handle much longer sequences. To address this, try FlashAttention2 or PyTorch's scaled dot product attention (SDPA), which are more memory-efficient attention implementations and can accelerate inference. | 18_5_0 |
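Both options described in the following sections are selected at load time through the `attn_implementation` argument of `from_pretrained`. A minimal sketch, assuming a recent PyTorch build and using `google/gemma-2b` purely as an example checkpoint:
```py
import torch
from transformers import AutoModelForCausalLM

# "sdpa" uses PyTorch's built-in scaled dot product attention;
# swap in "flash_attention_2" if the flash-attn package is installed
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
)
```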
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#flashattention-2 | .md | FlashAttention and [FlashAttention-2](./perf_infer_gpu_one#flashattention-2) break up the attention computation into smaller chunks and reduce the number of intermediate read/write operations to GPU memory to speed up inference. FlashAttention-2 improves on the original FlashAttention algorithm by also parallelizing over the sequence length dimension and better partitioning the work across the hardware to reduce synchronization and communication overhead. | 18_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#flashattention-2 | .md | To use FlashAttention-2, set `attn_implementation="flash_attention_2"` in the [`~PreTrainedModel.from_pretrained`] method.
```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch | 18_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#flashattention-2 | .md | quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2b",
quantization_config=quant_config,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
``` | 18_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | In addition to optimizing inference, you can also enhance the training efficiency of large language models by leveraging torch.compile during fine-tuning and using a padding-free data collator. This approach can significantly speed up training and reduce computational overhead.
Here's how you can fine-tune a Llama model using SFTTrainer from the TRL library, with torch_compile enabled and a padding-free data collator:
```
#################### IMPORTS ################### | 18_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | import math
import datasets
import dataclasses
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
TrainingArguments
)
from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM
#################### MODEL LOADING WITH FLASH ATTENTION ################### | 18_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | #################### MODEL LOADING WITH FLASH ATTENTION ###################
model_name = "meta-llama/Llama-3.2-1B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
attn_implementation="flash_attention_2" # Enables FlashAttention-2
)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
#################### DATA PREPROCESSING (PADDING-FREE) ################### | 18_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | #################### DATA PREPROCESSING (PADDING-FREE) ###################
response_template = "\n### Label:"
response_template_ids = tokenizer.encode(
response_template, add_special_tokens=False
)[2:] # Exclude special tokens
data_collator = DataCollatorForCompletionOnlyLM(
response_template_ids=response_template_ids,
tokenizer=tokenizer,
ignore_index=-100,
padding_free=True # Enables padding-free collation
)
def format_dataset(example):
return {
"output": example["output"] + tokenizer.eos_token
} | 18_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | def format_dataset(example):
return {
"output": example["output"] + tokenizer.eos_token
}
data_files = {"train": "path/to/dataset"} # Replace with your dataset path
json_dataset = datasets.load_dataset("json", data_files=data_files)
formatted_train_dataset = json_dataset["train"].map(format_dataset)
################# TRAINING CONFIGURATION ############################ | 18_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | ################# TRAINING CONFIGURATION ############################
train_args = TrainingArguments(
num_train_epochs=5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=4,
learning_rate=1e-5,
weight_decay=0.0,
warmup_ratio=0.03,
lr_scheduler_type="cosine",
logging_steps=1,
include_tokens_per_second=True,
save_strategy="epoch",
output_dir="output",
torch_compile=True, # Enables torch.compile
torch_compile_backend="inductor",
torch_compile_mode="default"
) | 18_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | # Convert TrainingArguments to SFTConfig
transformer_train_arg_fields = [x.name for x in dataclasses.fields(SFTConfig)]
transformer_kwargs = {
k: v
for k, v in train_args.to_dict().items()
if k in transformer_train_arg_fields
}
training_args = SFTConfig(**transformer_kwargs)
####################### FINE-TUNING ##################### | 18_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#fine-tuning-with-torchcompile-and-padding-free-data-collation | .md | ####################### FINE-TUNING #####################
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=formatted_train_dataset,
data_collator=data_collator,
dataset_text_field="output",
args=training_args,
)
trainer.train()
``` | 18_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#pytorch-scaled-dot-product-attention | .md | Scaled dot product attention (SDPA) is automatically enabled in PyTorch 2.0 and it supports FlashAttention, xFormers, and PyTorch's C++ implementation. SDPA chooses the most performant attention algorithm if you're using a CUDA backend. For other backends, SDPA defaults to the PyTorch C++ implementation.
> [!TIP]
> SDPA supports FlashAttention-2 as long as you have the latest PyTorch version installed. | 18_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#pytorch-scaled-dot-product-attention | .md | > [!TIP]
> SDPA supports FlashAttention-2 as long as you have the latest PyTorch version installed.
Use the [torch.nn.attention.sdpa_kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html) context manager to explicitly enable or disable any of the four attention algorithms. For example, use `SDPBackend.FLASH_ATTENTION` to enable FlashAttention.
```py
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers import AutoModelForCausalLM, AutoTokenizer | 18_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#pytorch-scaled-dot-product-attention | .md | model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2b",
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
inputs = tokenizer("The theory of special relativity states", return_tensors="pt")
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    outputs = model.generate(**inputs)
``` | 18_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#quantization | .md | Quantization reduces the size of the LLM weights by storing them in a lower precision. This translates to lower memory usage and makes loading LLMs for inference more accessible if you're constrained by your GPU's memory. If you aren't limited by your GPU, you don't necessarily need to quantize your model because it can incur a small latency cost (except for AWQ and fused AWQ modules) due to the extra step required to quantize and dequantize the weights.
> [!TIP] | 18_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#quantization | .md | > [!TIP]
> There are many quantization libraries (see the [Quantization](./quantization) guide for more details) available, such as Quanto, AQLM, VPTQ, AWQ, and AutoGPTQ. Feel free to try them out and see which one works best for your use case. We also recommend reading the [Overview of natively supported quantization schemes in 🤗 Transformers](https://hf.co/blog/overview-quantization-transformers) blog post which compares AutoGPTQ and bitsandbytes. | 18_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#quantization | .md | Use the Model Memory Calculator below to estimate and compare how much memory is required to load a model. For example, try estimating how much memory it costs to load [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
<iframe
src="https://hf-accelerate-model-memory-usage.hf.space"
frameborder="0"
width="850"
height="450"
></iframe> | 18_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#quantization | .md | <iframe
src="https://hf-accelerate-model-memory-usage.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
To load Mistral-7B-v0.1 in half-precision, set the `torch_dtype` parameter in the [`~transformers.AutoModelForCausalLM.from_pretrained`] method to `torch.bfloat16`. This requires 13.74GB of memory.
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch | 18_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#quantization | .md | model = AutoModelForCausalLM.from_pretrained(
"mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto",
)
```
To load a quantized model (8-bit or 4-bit) for inference, try [bitsandbytes](https://hf.co/docs/bitsandbytes) and set the `load_in_4bit` or `load_in_8bit` parameters to `True`. Loading the model in 8-bits only requires 6.87 GB of memory.
```py
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch | 18_9_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_optims.md | https://huggingface.co/docs/transformers/en/llm_optims/#quantization | .md | quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
"mistralai/Mistral-7B-v0.1", quantization_config=quant_config, device_map="auto"
)
``` | 18_9_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 19_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 19_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generation-with-llms | .md | [[open-in-colab]]
LLMs, or Large Language Models, are the key component behind text generation. In a nutshell, they consist of large pretrained transformer models trained to predict the next word (or, more precisely, token) given some input text. Since they predict one token at a time, you need to do something more elaborate to generate new sentences other than just calling the model -- you need to do autoregressive generation. | 19_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generation-with-llms | .md | Autoregressive generation is the inference-time procedure of iteratively calling a model with its own generated outputs, given a few initial inputs. In 🤗 Transformers, this is handled by the [`~generation.GenerationMixin.generate`] method, which is available to all models with generative capabilities.
This tutorial will show you how to:
* Generate text with an LLM
* Avoid common pitfalls
* Next steps to help you get the most out of your LLM | 19_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generation-with-llms | .md | * Generate text with an LLM
* Avoid common pitfalls
* Next steps to help you get the most out of your LLM
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers bitsandbytes>=0.39.0 -q
``` | 19_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | A language model trained for [causal language modeling](tasks/language_modeling) takes a sequence of text tokens as input and returns the probability distribution for the next token.
<!-- [GIF 1 -- FWD PASS] -->
<figure class="image table text-center m-0 w-full">
<video
style="max-width: 90%; margin: auto;"
autoplay loop muted playsinline
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov"
></video> | 19_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | ></video>
<figcaption>"Forward pass of an LLM"</figcaption>
</figure>
A critical aspect of autoregressive generation with LLMs is how to select the next token from this probability distribution. Anything goes in this step as long as you end up with a token for the next iteration. This means it can be as simple as selecting the most likely token from the probability distribution or as complex as applying a dozen transformations before sampling from the resulting distribution. | 19_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | <!-- [GIF 2 -- TEXT GENERATION] -->
<figure class="image table text-center m-0 w-full">
<video
style="max-width: 90%; margin: auto;"
autoplay loop muted playsinline
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov"
></video>
<figcaption>"Autoregressive generation iteratively selects the next token from a probability distribution to generate text"</figcaption>
</figure> | 19_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | </figure>
The process depicted above is repeated iteratively until some stopping condition is reached. Ideally, the stopping condition is dictated by the model, which should learn when to output an end-of-sequence (`EOS`) token. If this is not the case, generation stops when some predefined maximum length is reached. | 19_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | Properly setting up the token selection step and the stopping condition is essential to make your model behave as you'd expect on your task. That is why we have a [`~generation.GenerationConfig`] file associated with each model, which contains a good default generative parameterization and is loaded alongside your model.
Let's talk code!
<Tip> | 19_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | Let's talk code!
<Tip>
If you're interested in basic LLM usage, our high-level [`Pipeline`](pipeline_tutorial) interface is a great starting point. However, LLMs often require advanced features like quantization and fine control of the token selection step, which is best done through [`~generation.GenerationMixin.generate`]. Autoregressive generation with LLMs is also resource-intensive and should be executed on a GPU for adequate throughput.
</Tip>
First, you need to load the model.
```py | 19_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | </Tip>
First, you need to load the model.
```py
>>> from transformers import AutoModelForCausalLM | 19_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | >>> model = AutoModelForCausalLM.from_pretrained(
... "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True
... )
```
You'll notice two flags in the `from_pretrained` call:
- `device_map` ensures the model is moved to your GPU(s)
- `load_in_4bit` applies [4-bit dynamic quantization](main_classes/quantization) to massively reduce the resource requirements
There are other ways to initialize a model, but this is a good baseline to begin with an LLM. | 19_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | There are other ways to initialize a model, but this is a good baseline to begin with an LLM.
Next, you need to preprocess your text input with a [tokenizer](tokenizer_summary).
```py
>>> from transformers import AutoTokenizer | 19_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
>>> model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to("cuda")
```
The `model_inputs` variable holds the tokenized text input, as well as the attention mask. While [`~generation.GenerationMixin.generate`] does its best effort to infer the attention mask when it is not passed, we recommend passing it whenever possible for optimal results. | 19_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | After tokenizing the inputs, you can call the [`~generation.GenerationMixin.generate`] method to return the generated tokens. The generated tokens should then be converted to text before printing.
```py
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A list of colors: red, blue, green, yellow, orange, purple, pink,'
``` | 19_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | 'A list of colors: red, blue, green, yellow, orange, purple, pink,'
```
Finally, you don't need to do it one sequence at a time! You can batch your inputs, which will greatly improve the throughput at a small latency and memory cost. All you need to do is make sure you pad your inputs properly (more on that below).
```py
>>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default
>>> model_inputs = tokenizer( | 19_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | >>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default
>>> model_inputs = tokenizer(
... ["A list of colors: red, blue", "Portugal is"], return_tensors="pt", padding=True
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A list of colors: red, blue, green, yellow, orange, purple, pink,',
'Portugal is a country in southwestern Europe, on the Iber']
``` | 19_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generate-text | .md | 'Portugal is a country in southwestern Europe, on the Iber']
```
And that's it! In a few lines of code, you can harness the power of an LLM. | 19_2_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#common-pitfalls | .md | There are many [generation strategies](generation_strategies), and sometimes the default values may not be appropriate for your use case. If your outputs aren't aligned with what you're expecting, we've created a list of the most common pitfalls and how to avoid them.
```py
>>> from transformers import AutoModelForCausalLM, AutoTokenizer | 19_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#common-pitfalls | .md | >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
>>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default
>>> model = AutoModelForCausalLM.from_pretrained(
... "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True
... )
``` | 19_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generated-output-is-too-shortlong | .md | If not specified in the [`~generation.GenerationConfig`] file, `generate` returns up to 20 tokens by default. We highly recommend manually setting `max_new_tokens` in your `generate` call to control the maximum number of new tokens it can return. Keep in mind LLMs (more precisely, [decoder-only models](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt)) also return the input prompt as part of the output.
```py | 19_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generated-output-is-too-shortlong | .md | ```py
>>> model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda") | 19_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generated-output-is-too-shortlong | .md | >>> # By default, the output will contain up to 20 tokens
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5' | 19_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#generated-output-is-too-shortlong | .md | >>> # Setting `max_new_tokens` allows you to control the maximum length
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=50)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,'
``` | 19_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#incorrect-generation-mode | .md | By default, and unless specified in the [`~generation.GenerationConfig`] file, `generate` selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling. On the other hand, input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with `do_sample=True`, and you can learn more about this topic in this [blog | 19_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#incorrect-generation-mode | .md | benefit from greedy decoding. Enable sampling with `do_sample=True`, and you can learn more about this topic in this [blog post](https://huggingface.co/blog/how-to-generate). | 19_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#incorrect-generation-mode | .md | ```py
>>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility
>>> from transformers import set_seed
>>> set_seed(42) | 19_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#incorrect-generation-mode | .md | >>> model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")
>>> # LLM + greedy decoding = repetitive, boring output
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat' | 19_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#incorrect-generation-mode | .md | >>> # With sampling, the output becomes more creative!
>>> generated_ids = model.generate(**model_inputs, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. Specifically, I am an indoor-only cat. I'
``` | 19_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-padding-side | .md | LLMs are [decoder-only](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt) architectures, meaning they continue to iterate on your input prompt. If your inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue from pad tokens, your input needs to be left-padded. Make sure you also don't forget to pass the attention mask to generate!
```py
>>> # The tokenizer initialized above has right-padding active by default: the 1st sequence, | 19_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-padding-side | .md | ```py
>>> # The tokenizer initialized above has right-padding active by default: the 1st sequence,
>>> # which is shorter, has padding on the right side. Generation fails to capture the logic.
>>> model_inputs = tokenizer(
... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 33333333333' | 19_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-padding-side | .md | >>> # With left-padding, it works as expected!
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
>>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default
>>> model_inputs = tokenizer(
... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 3, 4, 5, 6,'
``` | 19_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-prompt | .md | Some models and tasks expect a certain input prompt format to work properly. When this format is not applied, you will get a silent performance degradation: the model kinda works, but not as well as if you were following the expected prompt. More information about prompting, including which models and tasks need to be careful, is available in this [guide](tasks/prompting). Let's see an example with a chat LLM, which makes use of [chat templating](chat_templating):
```python | 19_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-prompt | .md | ```python
>>> tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
>>> model = AutoModelForCausalLM.from_pretrained(
... "HuggingFaceH4/zephyr-7b-alpha", device_map="auto", load_in_4bit=True
... )
>>> set_seed(0)
>>> prompt = """How many helicopters can a human eat in one sitting? Reply as a thug."""
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
>>> input_length = model_inputs.input_ids.shape[1] | 19_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-prompt | .md | >>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
>>> input_length = model_inputs.input_ids.shape[1]
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=20)
>>> print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0])
"I'm not a thug, but i can tell you that a human cannot eat"
>>> # Oh no, it did not follow our instruction to reply as a thug! Let's see what happens when we write | 19_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-prompt | .md | >>> # Oh no, it did not follow our instruction to reply as a thug! Let's see what happens when we write
>>> # a better prompt and use the right template for this model (through `tokenizer.apply_chat_template`) | 19_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-prompt | .md | >>> set_seed(0)
>>> messages = [
... {
... "role": "system",
... "content": "You are a friendly chatbot who always responds in the style of a thug",
... },
... {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
... ]
>>> model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
>>> input_length = model_inputs.shape[1] | 19_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#wrong-prompt | .md | >>> input_length = model_inputs.shape[1]
>>> generated_ids = model.generate(model_inputs, do_sample=True, max_new_tokens=20)
>>> print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0])
'None, you thug. How bout you try to focus on more useful questions?'
>>> # As we can see, it followed a proper thug style 😎
``` | 19_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#further-resources | .md | While the autoregressive generation process is relatively straightforward, making the most out of your LLM can be a challenging endeavor because there are many moving parts. For your next steps to help you dive deeper into LLM usage and understanding: | 19_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#advanced-generate-usage | .md | 1. Guide on how to [control different generation methods](generation_strategies), how to set up the generation configuration file, and how to stream the output;
2. [Accelerating text generation](llm_optims);
3. [Prompt templates for chat LLMs](chat_templating);
4. [Prompt design guide](tasks/prompting); | 19_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#advanced-generate-usage | .md | 3. [Prompt templates for chat LLMs](chat_templating);
4. [Prompt design guide](tasks/prompting);
5. API reference on [`~generation.GenerationConfig`], [`~generation.GenerationMixin.generate`], and [generate-related classes](internal/generation_utils). Most of the classes, including the logits processors, have usage examples! | 19_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#llm-leaderboards | .md | 1. [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), which focuses on the quality of the open-source models;
2. [Open LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard), which focuses on LLM throughput. | 19_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#latency-throughput-and-memory-utilization | .md | 1. Guide on how to [optimize LLMs for speed and memory](llm_tutorial_optimization);
2. Guide on [quantization](main_classes/quantization) such as bitsandbytes and autogptq, which shows you how to drastically reduce your memory requirements. | 19_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#related-libraries | .md | 1. [`optimum`](https://github.com/huggingface/optimum), an extension of 🤗 Transformers that optimizes for specific hardware devices;
2. [`outlines`](https://github.com/outlines-dev/outlines), a library where you can constrain text generation (e.g. to generate JSON files);
3. [`SynCode`](https://github.com/uiuc-focal-lab/syncode), a library for context-free grammar guided generation (e.g. JSON, SQL, Python); | 19_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/llm_tutorial.md | https://huggingface.co/docs/transformers/en/llm_tutorial/#related-libraries | .md | 4. [`text-generation-inference`](https://github.com/huggingface/text-generation-inference), a production-ready server for LLMs;
5. [`text-generation-webui`](https://github.com/oobabooga/text-generation-webui), a UI for text generation;
6. [`logits-processor-zoo`](https://github.com/NVIDIA/logits-processor-zoo), containing additional options to control text generation with 🤗 Transformers. See our related [blog post](https://huggingface.co/blog/logits-processor-zoo). | 19_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 20_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 20_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#export-to-torchscript | .md | <Tip>
This is the very beginning of our experiments with TorchScript and we are still
exploring its capabilities with variable-input-size models. It is a focus of interest to
us and we will deepen our analysis in upcoming releases, with more code examples, a more
flexible implementation, and benchmarks comparing Python-based code with compiled
TorchScript.
</Tip>
According to the [TorchScript documentation](https://pytorch.org/docs/stable/jit.html): | 20_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#export-to-torchscript | .md | TorchScript.
</Tip>
According to the [TorchScript documentation](https://pytorch.org/docs/stable/jit.html):
> TorchScript is a way to create serializable and optimizable models from PyTorch code.
There are two PyTorch modules, [JIT and
TRACE](https://pytorch.org/docs/stable/jit.html), that allow developers to export their
models to be reused in other programs like efficiency-oriented C++ programs.
We provide an interface that allows you to export 🤗 Transformers models to TorchScript | 20_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#export-to-torchscript | .md | We provide an interface that allows you to export 🤗 Transformers models to TorchScript
so they can be reused in a different environment than PyTorch-based Python programs.
Here, we explain how to export and use our models using TorchScript.
Exporting a model requires two things:
- model instantiation with the `torchscript` flag
- a forward pass with dummy inputs
These necessities imply several things developers should be careful about as detailed
below. | 20_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#torchscript-flag-and-tied-weights | .md | The `torchscript` flag is necessary because most of the 🤗 Transformers language models
have tied weights between their `Embedding` layer and their `Decoding` layer.
TorchScript does not allow you to export models that have tied weights, so it is
necessary to untie and clone the weights beforehand.
Models instantiated with the `torchscript` flag have their `Embedding` layer and
`Decoding` layer separated, which means that they should not be trained down the line. | 20_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#torchscript-flag-and-tied-weights | .md | `Decoding` layer separated, which means that they should not be trained down the line.
Training would desynchronize the two layers, leading to unexpected results.
This is not the case for models that do not have a language model head, as those do not
have tied weights. These models can be safely exported without the `torchscript` flag. | 20_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#dummy-inputs-and-standard-lengths | .md | The dummy inputs are used for a model's forward pass. While the inputs' values are
propagated through the layers, PyTorch keeps track of the different operations executed
on each tensor. These recorded operations are then used to create the *trace* of the
model.
The trace is created relative to the inputs' dimensions. It is therefore constrained by
the dimensions of the dummy input, and will not work for any other sequence length or | 20_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#dummy-inputs-and-standard-lengths | .md | the dimensions of the dummy input, and will not work for any other sequence length or
batch size. When trying with a different size, the following error is raised:
```
`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`
```
We recommend you trace the model with a dummy input size at least as large as the
largest input that will be fed to the model during inference. Padding can help fill the | 20_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#dummy-inputs-and-standard-lengths | .md | largest input that will be fed to the model during inference. Padding can help fill the
missing values. However, since the model is traced with a larger input size, the
dimensions of the matrix will also be large, resulting in more calculations.
Be careful of the total number of operations done on each input and follow the
performance closely when exporting varying sequence-length models. | 20_3_2 |
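In practice this usually means padding the dummy input to the largest length you plan to serve before tracing; a hedged sketch (the 512-token maximum is only an example):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True)
model.eval()

# Pad (and truncate) every input to the same standard length so the traced graph stays valid
encoded = tokenizer(
    "Who was Jim Henson?",
    padding="max_length",
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
traced_model = torch.jit.trace(model, [encoded["input_ids"], encoded["attention_mask"]])
```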
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#using-torchscript-in-python | .md | This section demonstrates how to save and load models as well as how to use the trace
for inference. | 20_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#saving-a-model | .md | To export a `BertModel` with TorchScript, instantiate `BertModel` from the `BertConfig`
class and then save it to disk under the filename `traced_bert.pt`:
```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
enc = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text) | 20_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#saving-a-model | .md | # Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors] | 20_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#saving-a-model | .md | # Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
vocab_size_or_config_json_file=32000,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
torchscript=True,
)
# Instantiating the model
model = BertModel(config)
# The model needs to be in evaluation mode
model.eval() | 20_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#saving-a-model | .md | # Instantiating the model
model = BertModel(config)
# The model needs to be in evaluation mode
model.eval()
# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True)
# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
``` | 20_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#loading-a-model | .md | Now you can load the previously saved `BertModel`, `traced_bert.pt`, from disk and use
it on the previously initialised `dummy_input`:
```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()
all_encoder_layers, pooled_output = loaded_model(*dummy_input)
``` | 20_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#using-a-traced-model-for-inference | .md | Use the traced model for inference by using its `__call__` dunder method:
```python
traced_model(tokens_tensor, segments_tensors)
``` | 20_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#deploy-hugging-face-torchscript-models-to-aws-with-the-neuron-sdk | .md | AWS introduced the [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/)
instance family for low-cost, high-performance machine learning inference in the cloud.
The Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware
accelerator, specializing in deep learning inferencing workloads. [AWS
Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) is the SDK for
Inferentia that supports tracing and optimizing transformers models for deployment on | 20_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#deploy-hugging-face-torchscript-models-to-aws-with-the-neuron-sdk | .md | Inferentia that supports tracing and optimizing transformers models for deployment on
Inf1. The Neuron SDK provides:
1. Easy-to-use API with one line of code change to trace and optimize a TorchScript
model for inference in the cloud.
2. Out-of-the-box performance optimizations for [improved
cost-performance](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/).
3. Support for Hugging Face transformers models built with either | 20_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#deploy-hugging-face-torchscript-models-to-aws-with-the-neuron-sdk | .md | 3. Support for Hugging Face transformers models built with either
[PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html)
or
[TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html). | 20_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#implications | .md | Transformers models based on the [BERT (Bidirectional Encoder Representations from
Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert)
architecture, or its variants such as
[distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) and
[roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta) run best on
Inf1 for non-generative tasks such as extractive question answering, sequence | 20_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#implications | .md | Inf1 for non-generative tasks such as extractive question answering, sequence
classification, and token classification. However, text generation tasks can still be
adapted to run on Inf1 according to this [AWS Neuron MarianMT
tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html).
More information about models that can be converted out of the box on Inferentia can be
found in the [Model Architecture | 20_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#implications | .md | More information about models that can be converted out of the box on Inferentia can be
found in the [Model Architecture
Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia)
section of the Neuron documentation. | 20_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#dependencies | .md | Using AWS Neuron to convert models requires a [Neuron SDK
environment](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide)
which comes preconfigured on [AWS Deep Learning
AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html). | 20_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#converting-a-model-for-aws-neuron | .md | Convert a model for AWS Neuron using the same code from [Using TorchScript in
Python](torchscript#using-torchscript-in-python) to trace a `BertModel`. Import the
`torch.neuron` framework extension to access the components of the Neuron SDK through a
Python API:
```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```
You only need to modify the following line:
```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors]) | 20_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/torchscript.md | https://huggingface.co/docs/transformers/en/torchscript/#converting-a-model-for-aws-neuron | .md | ```
You only need to modify the following line:
```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```
This enables the Neuron SDK to trace the model and optimize it for Inf1 instances.
To learn more about AWS Neuron SDK features, tools, example tutorials and latest
updates, please see the [AWS NeuronSDK
documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html). | 20_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md | https://huggingface.co/docs/transformers/en/how_to_hack_models/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 21_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/how_to_hack_models.md | https://huggingface.co/docs/transformers/en/how_to_hack_models/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 21_0_1 |