source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encodernlp-encoder | .md | It uses a combination of local windowed attention (attention is only calculated within a fixed-size window around each token) and global attention (only for specific task tokens like `[CLS]` for classification) to create a sparse attention matrix instead of a full attention matrix. | 47_8_5 |
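A minimal sketch of how this local + global scheme is exposed by the [Longformer](model_doc/longformer) implementation in Transformers; the checkpoint and input text are illustrative, and only the first token is given global attention here:

```py
import torch
from transformers import AutoTokenizer, LongformerModel

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A very long document. " * 300, return_tensors="pt", truncation=True)

# 0 = local windowed attention (the default), 1 = global attention for selected tokens
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the classification token global attention

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)
```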
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#decodernlp-decoder | .md | [GPT-2](model_doc/gpt2) is a decoder-only Transformer that predicts the next word in the sequence. It masks tokens to the right so the model can't "cheat" by looking ahead. By pretraining on a massive body of text, GPT-2 became really good at generating text, even if the text is only sometimes accurate or true. But GPT-2 lacked the bidirectional context from BERT's pretraining, which made it unsuitable for certain tasks. [XLNET](model_doc/xlnet) combines the best of both BERT and GPT-2's pretraining | 47_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#decodernlp-decoder | .md | which made it unsuitable for certain tasks. [XLNET](model_doc/xlnet) combines the best of both BERT and GPT-2's pretraining objectives by using a permutation language modeling objective (PLM) that allows it to learn bidirectionally. | 47_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#decodernlp-decoder | .md | After GPT-2, language models grew even bigger and are now known as *large language models (LLMs)*. LLMs demonstrate few- or even zero-shot learning if pretrained on a large enough dataset. [GPT-J](model_doc/gptj) is an LLM with 6B parameters and trained on 400B tokens. GPT-J was followed by [OPT](model_doc/opt), a family of decoder-only models, the largest of which is 175B and trained on 180B tokens. [BLOOM](model_doc/bloom) was released around the same time, and the largest model in the family has 176B | 47_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#decodernlp-decoder | .md | on 180B tokens. [BLOOM](model_doc/bloom) was released around the same time, and the largest model in the family has 176B parameters and is trained on 366B tokens in 46 languages and 13 programming languages. | 47_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decodernlp-encoder-decoder | .md | [BART](model_doc/bart) keeps the original Transformer architecture, but it modifies the pretraining objective with *text infilling* corruption, where some text spans are replaced with a single `mask` token. The decoder predicts the uncorrupted tokens (future tokens are masked) and uses the encoder's hidden states to help it. [Pegasus](model_doc/pegasus) is similar to BART, but Pegasus masks entire sentences instead of text spans. In addition to masked language modeling, Pegasus is pretrained by gap sentence | 47_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decodernlp-encoder-decoder | .md | masks entire sentences instead of text spans. In addition to masked language modeling, Pegasus is pretrained by gap sentence generation (GSG). The GSG objective masks whole sentences important to a document, replacing them with a `mask` token. The decoder must generate the output from the remaining sentences. [T5](model_doc/t5) is a more unique model that casts all NLP tasks into a text-to-text problem using specific prefixes. For example, the prefix `Summarize:` indicates a summarization task. T5 is | 47_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decodernlp-encoder-decoder | .md | into a text-to-text problem using specific prefixes. For example, the prefix `Summarize:` indicates a summarization task. T5 is pretrained by supervised (GLUE and SuperGLUE) training and self-supervised training (randomly sample and drop out 15% of tokens). | 47_10_2 |
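As a hedged illustration of the text-to-text setup described above, the sketch below runs a summarization prompt through a small public T5 checkpoint; the checkpoint, the lowercase `summarize:` prefix, and the input text are assumptions made for the example:

```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

# The task is selected purely by the text prefix prepended to the input.
text = "summarize: The Transformer architecture relies entirely on attention mechanisms, dispensing with recurrence and convolutions."
inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```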
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#audio | .md | <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2Fvrchl8jDV9YwNVPWu2W0kK%2Fspeech-and-audio-model-timeline%3Fnode-id%3D0%253A1%26t%3DmM4H8pPMuK23rClL-1" allowfullscreen></iframe> | 47_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoderaudio-encoder | .md | [Wav2Vec2](model_doc/wav2vec2) uses a Transformer encoder to learn speech representations directly from raw audio waveforms. It is pretrained with a contrastive task to determine the true speech representation from a set of false ones. [HuBERT](model_doc/hubert) is similar to Wav2Vec2 but has a different training process. Target labels are created by a clustering step in which segments of similar audio are assigned to a cluster which becomes a hidden unit. The hidden unit is mapped to an embedding to make a | 47_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoderaudio-encoder | .md | of similar audio are assigned to a cluster which becomes a hidden unit. The hidden unit is mapped to an embedding to make a prediction. | 47_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decoderaudio-encoder-decoder | .md | [Speech2Text](model_doc/speech_to_text) is a speech model designed for automatic speech recognition (ASR) and speech translation. The model accepts log mel-filter bank features extracted from the audio waveform and is pretrained autoregressively to generate a transcript or translation. [Whisper](model_doc/whisper) is also an ASR model, but unlike many other speech models, it is pretrained on a massive amount of ✨ labeled ✨ audio transcription data for zero-shot performance. A large chunk of the dataset also | 47_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decoderaudio-encoder-decoder | .md | on a massive amount of ✨ labeled ✨ audio transcription data for zero-shot performance. A large chunk of the dataset also contains non-English languages, meaning Whisper can also be used for low-resource languages. Structurally, Whisper is similar to Speech2Text. The audio signal is converted to a log-mel spectrogram encoded by the encoder. The decoder generates the transcript autoregressively from the encoder's hidden states and the previous tokens. | 47_13_1 |
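A minimal sketch of the Whisper setup described above using the `pipeline` API; the checkpoint and the audio file name are illustrative:

```py
from transformers import pipeline

# The feature extractor converts the waveform to a log-mel spectrogram before the encoder sees it.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
print(asr("sample.flac")["text"])  # path to a local audio file (assumed to exist)
```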
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#multimodal | .md | <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FcX125FQHXJS2gxeICiY93p%2Fmultimodal%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> | 47_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encodermm-encoder | .md | [VisualBERT](model_doc/visual_bert) is a multimodal model for vision-language tasks released shortly after BERT. It combines BERT and a pretrained object detection system to extract image features into visual embeddings, passed alongside text embeddings to BERT. VisualBERT predicts the masked text based on the unmasked text and the visual embeddings, and it also has to predict whether the text is aligned with the image. When ViT was released, [ViLT](model_doc/vilt) adopted ViT in its architecture because it | 47_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encodermm-encoder | .md | the text is aligned with the image. When ViT was released, [ViLT](model_doc/vilt) adopted ViT in its architecture because it was easier to get the image embeddings this way. The image embeddings are jointly processed with the text embeddings. From there, ViLT is pretrained by image text matching, masked language modeling, and whole word masking. | 47_15_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encodermm-encoder | .md | [CLIP](model_doc/clip) takes a different approach and makes a pair prediction of (`image`, `text`) . An image encoder (ViT) and a text encoder (Transformer) are jointly trained on a 400 million (`image`, `text`) pair dataset to maximize the similarity between the image and text embeddings of the (`image`, `text`) pairs. After pretraining, you can use natural language to instruct CLIP to predict the text given an image or vice versa. [OWL-ViT](model_doc/owlvit) builds on top of CLIP by using it as its | 47_15_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encodermm-encoder | .md | CLIP to predict the text given an image or vice versa. [OWL-ViT](model_doc/owlvit) builds on top of CLIP by using it as its backbone for zero-shot object detection. After pretraining, an object detection head is added to make a set prediction over the (`class`, `bounding box`) pairs. | 47_15_3 |
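A minimal sketch of the zero-shot usage described above with the CLIP classes in Transformers; the checkpoint, image URL, and candidate captions are illustrative:

```py
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Natural-language captions act as the candidate classes.
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=-1))  # image-text similarities as probabilities
```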
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decodermm-encoder-decoder | .md | Optical character recognition (OCR) is a long-standing text recognition task that typically involves several components to understand the image and generate the text. [TrOCR](model_doc/trocr) simplifies the process using an end-to-end Transformer. The encoder is a ViT-style model for image understanding and processes the image as fixed-size patches. The decoder accepts the encoder's hidden states and autoregressively generates text. [Donut](model_doc/donut) is a more general visual document understanding | 47_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decodermm-encoder-decoder | .md | hidden states and autoregressively generates text. [Donut](model_doc/donut) is a more general visual document understanding model that doesn't rely on OCR-based approaches. It uses a Swin Transformer as the encoder and multilingual BART as the decoder. Donut is pretrained to read text by predicting the next word based on the image and text annotations. The decoder generates a token sequence given a prompt. The prompt is represented by a special token for each downstream task. For example, document parsing | 47_16_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#encoder-decodermm-encoder-decoder | .md | sequence given a prompt. The prompt is represented by a special token for each downstream task. For example, document parsing has a special `parsing` token that is combined with the encoder hidden states to parse the document into a structured output format (JSON). | 47_16_2 |
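A minimal sketch of TrOCR's encoder-decoder flow described above (image patches in, generated text out); the checkpoint and image URL are illustrative:

```py
import requests
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values  # fixed-size patches for the ViT-style encoder
generated_ids = model.generate(pixel_values)  # the decoder generates text autoregressively
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```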
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#reinforcement-learning | .md | <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FiB3Y6RvWYki7ZuKO6tNgZq%2Freinforcement-learning%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> | 47_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#decoderrl-decoder | .md | The Decision and Trajectory Transformers cast the state, action, and reward as a sequence modeling problem. The [Decision Transformer](model_doc/decision_transformer) generates a series of actions that lead to a future desired return based on returns-to-go, past states, and actions. For the last *K* timesteps, each of the three modalities is converted into token embeddings and processed by a GPT-like model to predict a future action token. [Trajectory Transformer](model_doc/trajectory_transformer) also | 47_18_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_summary.md | https://huggingface.co/docs/transformers/en/model_summary/#decoderrl-decoder | .md | processed by a GPT-like model to predict a future action token. [Trajectory Transformer](model_doc/trajectory_transformer) also tokenizes the states, actions, and rewards and processes them with a GPT architecture. Unlike the Decision Transformer, which is focused on reward conditioning, the Trajectory Transformer generates future actions with beam search. | 47_18_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/ | .md | <!---
Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | 48_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/ | .md | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 48_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#custom-hardware-for-training | .md | The hardware you use to run model training and inference can have a big effect on performance. For a deep dive into GPUs make sure to check out Tim Dettmer's excellent [blog post](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/).
Let's have a look at some practical advice for GPU setups. | 48_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#gpu | .md | When you train bigger models you have essentially three options:
- bigger GPUs
- more GPUs
- more CPU and NVMe (offloaded to by [DeepSpeed-Infinity](main_classes/deepspeed#nvme-support))
Let's start at the case where you have a single GPU. | 48_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#power-and-cooling | .md | If you bought an expensive high end GPU make sure you give it the correct power and sufficient cooling.
**Power**: | 48_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#power-and-cooling | .md | Some high-end consumer GPU cards have 2 and sometimes 3 PCI-E 8-Pin power sockets. Make sure you have as many independent 12V PCI-E 8-Pin cables plugged into the card as there are sockets. Do not use the 2 splits at one end of the same cable (also known as a pigtail cable). That is, if you have 2 sockets on the GPU, you want 2 PCI-E 8-Pin cables going from your PSU to the card, not one that has 2 PCI-E 8-Pin connectors at the end! You won't get the full performance out of your card otherwise. | 48_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#power-and-cooling | .md | Each PCI-E 8-Pin power cable needs to be plugged into a 12V rail on the PSU side and can supply up to 150W of power.
Some other cards may use a PCI-E 12-Pin connector, and these can deliver up to 500-600W of power.
Low end cards may use 6-Pin connectors, which supply up to 75W of power.
Additionally, you want a high-end PSU that provides stable voltage. Some lower quality ones may not give the card the stable voltage it needs to function at its peak.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#power-and-cooling | .md | And of course the PSU needs to have enough unused Watts to power the card.
**Cooling**:
When a GPU overheats, it will start throttling down and will not deliver full performance; it can even shut down if it gets too hot. | 48_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#power-and-cooling | .md | It's hard to tell the exact best temperature to strive for when a GPU is heavily loaded, but anything under +80C is probably good, and lower is better - perhaps 70-75C is an excellent range to be in. Throttling is likely to start at around 84-90C. Besides throttled performance, a prolonged very high temperature is also likely to reduce the lifespan of a GPU.
Next let's have a look at one of the most important aspects when having multiple GPUs: connectivity. | 48_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#multi-gpu-connectivity | .md | If you use multiple GPUs the way cards are inter-connected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run:
```bash
nvidia-smi topo -m
```
and it will tell you how the GPUs are inter-connected. On a machine with two GPUs connected with NVLink, you will most likely see something like:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X NV2 0-23 N/A
GPU1 NV2 X 0-23 N/A
``` | 48_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#multi-gpu-connectivity | .md | GPU0 X NV2 0-23 N/A
GPU1 NV2 X 0-23 N/A
```
On a different machine without NVLink, you may see:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X PHB 0-11 N/A
GPU1 PHB X 0-11 N/A
```
The report includes this legend:
```
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) | 48_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#multi-gpu-connectivity | .md | ```
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge | 48_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#multi-gpu-connectivity | .md | PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
```
So `NV2` in the first report tells us the GPUs are interconnected with 2 NVLinks, while `PHB` in the second report indicates a typical consumer-level PCIe + Host Bridge setup.
Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB). | 48_4_3 |
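As a small companion check (an assumption, not part of the original text), PyTorch can also report whether direct peer-to-peer access between two GPUs is possible, which complements the `nvidia-smi topo -m` report:

```py
import torch

if torch.cuda.device_count() >= 2:
    # True when the driver allows direct GPU0 <-> GPU1 transfers (e.g. over NVLink or PCIe P2P)
    print(torch.cuda.can_device_access_peer(0, 1))
```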
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#multi-gpu-connectivity | .md | Depending on the type of scalability solution used, the connectivity speed could have a major or a minor impact. If the GPUs need to sync rarely, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes super important to achieve faster training. | 48_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#nvlink | .md | [NVLink](https://en.wikipedia.org/wiki/NVLink) is a wire-based serial multi-lane near-range communications link developed by Nvidia.
Each new generation provides a faster bandwidth, e.g. here is a quote from [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf):
> Third-Generation NVLink®
> GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links, | 48_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#nvlink | .md | > Third-Generation NVLink®
> GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links,
> with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four
> links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth
> between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink.
> (Note that 3-Way and 4-Way SLI configurations are not supported.) | 48_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#nvlink | .md | > (Note that 3-Way and 4-Way SLI configurations are not supported.)
So the higher the `X` in the `NVX` entry of the `nvidia-smi topo -m` output, the better. The NVLink generation will depend on your GPU architecture.
Let's compare the execution of an `openai-community/gpt2` language model training over a small sample of wikitext.
The results are:
| NVlink | Time |
| ----- | ---: |
| Y | 101s |
| N | 131s | | 48_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#nvlink | .md | The results are:
| NVlink | Time |
| ----- | ---: |
| Y | 101s |
| N | 131s |
You can see that NVLink completes the training ~23% faster. In the second benchmark we use `NCCL_P2P_DISABLE=1` to tell the GPUs not to use NVLink.
Here is the full benchmark code and outputs:
```bash
# DDP w/ NVLink | 48_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#nvlink | .md | rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVLink | 48_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#nvlink | .md | {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVLink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 | 48_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_hardware.md | https://huggingface.co/docs/transformers/en/perf_hardware/#nvlink | .md | {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Hardware: 2x TITAN RTX (24GB each) + NVLink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0` | 48_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 49_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 49_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#gguf-and-interaction-with-transformers | .md | The GGUF file format is used to store models for inference with [GGML](https://github.com/ggerganov/ggml) and other
libraries that depend on it, like the very popular [llama.cpp](https://github.com/ggerganov/llama.cpp) or
[whisper.cpp](https://github.com/ggerganov/whisper.cpp).
It is a file format [supported by the Hugging Face Hub](https://huggingface.co/docs/hub/en/gguf) with features
allowing for quick inspection of tensors and metadata within the file. | 49_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#gguf-and-interaction-with-transformers | .md | allowing for quick inspection of tensors and metadata within the file.
This file format is designed as a "single-file-format" where a single file usually contains the configuration
attributes, the tokenizer vocabulary, and other attributes, as well as all tensors to be loaded in the model. These
files come in different formats according to the quantization type of the file. We briefly go over some of them
[here](https://huggingface.co/docs/hub/en/gguf#quantization-types). | 49_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#support-within-transformers | .md | We have added the ability to load `gguf` files within `transformers` in order to offer further training/fine-tuning
capabilities to gguf models before converting those models back to `gguf` for use within the `ggml` ecosystem. When
loading a model, we first dequantize it to fp32, before loading the weights to be used in PyTorch.
> [!NOTE]
> The support is still very exploratory and we welcome contributions in order to solidify it across quantization types
> and model architectures. | 49_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#support-within-transformers | .md | > and model architectures.
For now, here are the supported model architectures and quantization types: | 49_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#supported-quantization-types | .md | The initial supported quantization types are decided according to the popular quantized files that have been shared
on the Hub.
- F32
- F16
- BF16
- Q4_0
- Q4_1
- Q5_0
- Q5_1
- Q8_0
- Q2_K
- Q3_K
- Q4_K
- Q5_K
- Q6_K
- IQ1_S
- IQ1_M
- IQ2_XXS
- IQ2_XS
- IQ2_S
- IQ3_XXS
- IQ3_S
- IQ4_XS
- IQ4_NL
> [!NOTE]
> To support gguf dequantization, `gguf>=0.10.0` installation is required. | 49_3_0 |
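A small, hedged way to verify that the installed `gguf` package meets that requirement (the check itself is an illustration, not part of the original docs):

```py
import importlib.metadata

print(importlib.metadata.version("gguf"))  # needs to be >= 0.10.0 for dequantization support
```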
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#supported-model-architectures | .md | For now the supported model architectures are the architectures that have been very popular on the Hub, namely:
- LLaMa
- Mistral
- Qwen2
- Qwen2Moe
- Phi3
- Bloom
- Falcon
- StableLM
- GPT2
- Starcoder2
- T5
- Mamba
- Nemotron
- Gemma2 | 49_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#example-usage | .md | In order to load `gguf` files in `transformers`, you should specify the `gguf_file` argument to the `from_pretrained`
methods of both tokenizers and models. Here is how one would load a tokenizer and a model, which can be loaded
from the exact same file:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
filename = "tinyllama-1.1b-chat-v1.0.Q6_K.gguf" | 49_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#example-usage | .md | tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)
```
Now you have access to the full, unquantized version of the model in the PyTorch ecosystem, where you can combine it
with a plethora of other tools.
In order to convert back to a `gguf` file, we recommend using the
[`convert_hf_to_gguf.py` file](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) from llama.cpp. | 49_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#example-usage | .md | [`convert_hf_to_gguf.py` file](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) from llama.cpp.
Here's how you would complete the script above to save the model and export it back to `gguf`:
```py
tokenizer.save_pretrained('directory')
model.save_pretrained('directory') | 49_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/gguf.md | https://huggingface.co/docs/transformers/en/gguf/#example-usage | .md | !python ${path_to_llama_cpp}/convert_hf_to_gguf.py ${directory}
``` | 49_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 50_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 50_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#attention-mechanisms | .md | Most transformer models use full attention in the sense that the attention matrix is square. It can be a big
computational bottleneck when you have long texts. Longformer and Reformer are models that try to be more efficient and
use a sparse version of the attention matrix to speed up training. | 50_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#lsh-attention | .md | [Reformer](model_doc/reformer) uses LSH attention. In the softmax(QK^t), only the biggest elements (in the softmax
dimension) of the matrix QK^t are going to give useful contributions. So for each query q in Q, we can consider only
the keys k in K that are close to q. A hash function is used to determine if q and k are close. The attention mask is
modified to mask the current token (except at the first position), because it would give a query and a key that are equal (so | 50_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#lsh-attention | .md | modified to mask the current token (except at the first position), because it would give a query and a key that are equal (so
very similar to each other). Since the hash can be a bit random, several hash functions are used in practice
(determined by an `n_rounds` parameter) and their results are then averaged together. | 50_2_1 |
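A toy sketch of the bucketing idea (not the actual Reformer implementation): queries/keys are hashed with random rotations, and attention is only computed inside each bucket; the sizes and number of rounds are illustrative:

```py
import torch

def lsh_buckets(qk, n_buckets, n_rounds=4):
    # qk: (seq_len, dim) shared query/key vectors; one random rotation per hashing round
    seq_len, dim = qk.shape
    rotations = torch.randn(n_rounds, dim, n_buckets // 2)
    projected = torch.einsum("sd,rdb->rsb", qk, rotations)
    # concatenating +/- projections covers both half-spaces; the argmax is the bucket id
    return torch.argmax(torch.cat([projected, -projected], dim=-1), dim=-1)  # (n_rounds, seq_len)

buckets = lsh_buckets(torch.randn(128, 64), n_buckets=8)
print(buckets.shape)  # vectors that land in the same bucket are the only ones attended to
```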
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#local-attention | .md | [Longformer](model_doc/longformer) uses local attention: often, the local context (e.g., what are the two tokens to the
left and right?) is enough to take action for a given token. Also, by stacking attention layers that have a small
window, the last layer will have a receptive field of more than just the tokens in the window, allowing the model to build a
representation of the whole sentence.
Some preselected input tokens are also given global attention: for those few tokens, the attention matrix can access | 50_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#local-attention | .md | Some preselected input tokens are also given global attention: for those few tokens, the attention matrix can access
all tokens and this process is symmetric: all other tokens have access to those specific tokens (on top of the ones in
their local window). This is shown in Figure 2d of the paper; see below for a sample attention mask:
<div class="flex justify-center"> | 50_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#local-attention | .md | <div class="flex justify-center">
<img scale="50 %" align="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/local_attention_mask.png"/>
</div>
Using those attention matrices with fewer parameters then allows the model to handle inputs with a bigger sequence
length. | 50_3_2 |
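A toy sketch of the sparse mask described above: every token attends inside a sliding window, and a few preselected tokens get symmetric global attention; the window size and global indices are illustrative:

```py
import torch

def local_global_mask(seq_len, window, global_idx):
    positions = torch.arange(seq_len)
    mask = (positions[None, :] - positions[:, None]).abs() <= window  # local band around the diagonal
    mask[global_idx, :] = True  # global tokens attend to every position
    mask[:, global_idx] = True  # and every position attends to them (symmetric)
    return mask

print(local_global_mask(10, window=2, global_idx=[0]).int())
```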
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#axial-positional-encodings | .md | [Reformer](model_doc/reformer) uses axial positional encodings: in traditional transformer models, the positional encoding
E is a matrix of size \\(l\\) by \\(d\\), \\(l\\) being the sequence length and \\(d\\) the dimension of the
hidden state. If you have very long texts, this matrix can be huge and take way too much space on the GPU. To alleviate
that, axial positional encodings consist of factorizing that big matrix E in two smaller matrices E1 and E2, with | 50_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/attention.md | https://huggingface.co/docs/transformers/en/attention/#axial-positional-encodings | .md | that, axial positional encodings consist of factorizing that big matrix E in two smaller matrices E1 and E2, with
dimensions \\(l_{1} \times d_{1}\\) and \\(l_{2} \times d_{2}\\), such that \\(l_{1} \times l_{2} = l\\) and
\\(d_{1} + d_{2} = d\\) (with the product for the lengths, this ends up being way smaller). The embedding for time
step \\(j\\) in E is obtained by concatenating the embeddings for time step \\(j \bmod l_{1}\\) in E1 and \\(\lfloor j / l_{1} \rfloor\\)
in E2. | 50_4_1 |
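A toy sketch of the factorization described above: the full \\(l \times d\\) table is never materialized, and position \\(j\\) gets the concatenation of the two smaller embeddings; the sizes are illustrative:

```py
import torch

l1, l2, d1, d2 = 64, 16, 32, 96          # l = l1 * l2 = 1024, d = d1 + d2 = 128
E1 = torch.nn.Embedding(l1, d1)
E2 = torch.nn.Embedding(l2, d2)

def axial_position_embedding(j):
    # embedding for time step j: concat E1[j % l1] and E2[j // l1]
    return torch.cat([E1(j % l1), E2(torch.div(j, l1, rounding_mode="floor"))], dim=-1)

print(axial_position_embedding(torch.arange(1024)).shape)  # torch.Size([1024, 128])
```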
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 51_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 51_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/#use-tokenizers-from--tokenizers | .md | The [`PreTrainedTokenizerFast`] depends on the [🤗 Tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 Tokenizers library can be
loaded very simply into 🤗 Transformers.
Before getting into the specifics, let's start by creating a dummy tokenizer in a few lines:
```python
>>> from tokenizers import Tokenizer
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>> from tokenizers.pre_tokenizers import Whitespace | 51_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/#use-tokenizers-from--tokenizers | .md | >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
>>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
>>> tokenizer.pre_tokenizer = Whitespace()
>>> files = [...]
>>> tokenizer.train(files, trainer)
```
We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to
a JSON file for future re-use. | 51_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/#loading-directly-from-the-tokenizer-object | .md | Let's see how to leverage this tokenizer object in the 🤗 Transformers library. The
[`PreTrainedTokenizerFast`] class allows for easy instantiation, by accepting the instantiated
*tokenizer* object as an argument:
```python
>>> from transformers import PreTrainedTokenizerFast | 51_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/#loading-directly-from-the-tokenizer-object | .md | >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
```
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer
page](main_classes/tokenizer) for more information. | 51_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/#loading-from-a-json-file | .md | In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer:
```python
>>> tokenizer.save("tokenizer.json")
```
The path to which we saved this file can be passed to the [`PreTrainedTokenizerFast`] initialization
method using the `tokenizer_file` parameter:
```python
>>> from transformers import PreTrainedTokenizerFast | 51_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/fast_tokenizers.md | https://huggingface.co/docs/transformers/en/fast_tokenizers/#loading-from-a-json-file | .md | >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
```
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer
page](main_classes/tokenizer) for more information. | 51_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 52_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 52_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#instantiate-a-big-model | .md | A barrier to accessing very large pretrained models is the amount of memory required. When loading a pretrained PyTorch model, you usually:
1. Create a model with random weights.
2. Load your pretrained weights.
3. Put those pretrained weights in the model. | 52_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#instantiate-a-big-model | .md | 1. Create a model with random weights.
2. Load your pretrained weights.
3. Put those pretrained weights in the model.
The first two steps both require a full version of the model in memory and if the model weighs several GBs, you may not have enough memory for two copies of it. This problem is amplified in distributed training environments because each process loads a pretrained model and stores two copies in memory.
> [!TIP] | 52_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#instantiate-a-big-model | .md | > [!TIP]
> The randomly created model is initialized with "empty" tensors, which take space in memory without filling it. The random values are whatever was in this chunk of memory at the time. To improve loading speed, the [`_fast_init`](https://github.com/huggingface/transformers/blob/c9f6e5e35156e068b227dd9b15521767f6afd4d2/src/transformers/modeling_utils.py#L2710) parameter is set to `True` by default to skip the random initialization for all weights that are correctly loaded. | 52_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#instantiate-a-big-model | .md | This guide will show you how Transformers can help you load large pretrained models despite their memory requirements. | 52_1_3 |
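As a hedged illustration of why the first step is the expensive one, Accelerate's `init_empty_weights` context manager (the mechanism behind the Big Model Inference feature covered below) builds the model skeleton on PyTorch's meta device so no real memory is allocated for the random weights; the checkpoint is illustrative:

```py
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("openai-community/gpt2")  # any causal LM config works here
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)  # parameters live on the "meta" device: no real memory is used
print(next(model.parameters()).device)  # meta
```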
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#sharded-checkpoints | .md | From Transformers v4.18.0, a checkpoint larger than 10GB is automatically sharded by the [`~PreTrainedModel.save_pretrained`] method. It is split into several smaller partial checkpoints and creates an index file that maps parameter names to the files they're stored in.
The maximum shard size is controlled with the `max_shard_size` parameter, but by default it is 5GB, because it is easier to run on free-tier GPU instances without running out of memory. | 52_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#sharded-checkpoints | .md | For example, let's shard [BioMistral/BioMistral-7B](https://hf.co/BioMistral/BioMistral-7B).
```py
>>> import os, tempfile
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")  # load the checkpoint referenced above
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... print(sorted(os.listdir(tmp_dir))) | 52_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#sharded-checkpoints | .md | ... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... print(sorted(os.listdir(tmp_dir)))
['config.json', 'generation_config.json', 'model-00001-of-00006.safetensors', 'model-00002-of-00006.safetensors', 'model-00003-of-00006.safetensors', 'model-00004-of-00006.safetensors', 'model-00005-of-00006.safetensors', 'model-00006-of-00006.safetensors', 'model.safetensors.index.json']
```
The sharded checkpoint is reloaded with the [`~PreTrainedModel.from_pretrained`] method.
```py | 52_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#sharded-checkpoints | .md | ```
The sharded checkpoint is reloaded with the [`~PreTrainedModel.from_pretrained`] method.
```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... new_model = AutoModel.from_pretrained(tmp_dir)
```
The main advantage of sharded checkpoints for big models is that each shard is loaded after the previous one, which caps the memory usage to only the model size and the largest shard size. | 52_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#sharded-checkpoints | .md | You could also directly load a sharded checkpoint inside a model without the [`~PreTrainedModel.from_pretrained`] method (similar to PyTorch's `load_state_dict()` method for a full checkpoint). In this case, use the [`~modeling_utils.load_sharded_checkpoint`] method.
```py
>>> from transformers.modeling_utils import load_sharded_checkpoint | 52_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#sharded-checkpoints | .md | >>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... load_sharded_checkpoint(model, tmp_dir)
``` | 52_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#shard-metadata | .md | The index file determines which keys are in the checkpoint and where the corresponding weights are stored. This file is loaded like any other JSON file and you can get a dictionary from it.
```py
>>> import json
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... with open(os.path.join(tmp_dir, "model.safetensors.index.json"), "r") as f:
... index = json.load(f) | 52_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#shard-metadata | .md | >>> print(index.keys())
dict_keys(['metadata', 'weight_map'])
```
The `metadata` key provides the total model size.
```py
>>> index["metadata"]
{'total_size': 28966928384}
```
The `weight_map` key maps each parameter name (typically `state_dict` in a PyTorch model) to the shard it's stored in.
```py
>>> index["weight_map"]
{'lm_head.weight': 'model-00006-of-00006.safetensors',
'model.embed_tokens.weight': 'model-00001-of-00006.safetensors', | 52_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#shard-metadata | .md | {'lm_head.weight': 'model-00006-of-00006.safetensors',
'model.embed_tokens.weight': 'model-00001-of-00006.safetensors',
'model.layers.0.input_layernorm.weight': 'model-00001-of-00006.safetensors',
'model.layers.0.mlp.down_proj.weight': 'model-00001-of-00006.safetensors',
...
}
``` | 52_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | > [!TIP]
> Make sure you have Accelerate v0.9.0 or later and PyTorch v1.9.0 or later installed. | 52_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | From Transformers v4.20.0, the [`~PreTrainedModel.from_pretrained`] method is supercharged with Accelerate's [Big Model Inference](https://hf.co/docs/accelerate/usage_guides/big_modeling) feature to efficiently handle really big models! Big Model Inference creates a *model skeleton* on PyTorch's [**meta**](https://pytorch.org/docs/main/meta.html) device. The randomly initialized parameters are only created when the pretrained weights are loaded. This way, you aren't keeping two copies of the model in | 52_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | parameters are only created when the pretrained weights are loaded. This way, you aren't keeping two copies of the model in memory at the same time (one for the randomly initialized model and one for the pretrained weights), and the maximum memory consumed is only the full model size. | 52_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | To enable Big Model Inference in Transformers, set `low_cpu_mem_usage=True` in the [`~PreTrainedModel.from_pretrained`] method.
```py
from transformers import AutoModelForCausalLM | 52_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", low_cpu_mem_usage=True)
``` | 52_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | ```
Accelerate automatically dispatches the model weights across all available devices, starting with the fastest device (GPU) first and then offloading to the slower devices (CPU and even hard drive). This is enabled by setting `device_map="auto"` in the [`~PreTrainedModel.from_pretrained`] method. When you pass the `device_map` parameter, `low_cpu_mem_usage` is automatically set to `True` so you don't need to specify it.
```py
from transformers import AutoModelForCausalLM | 52_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | # these loading methods are equivalent
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", low_cpu_mem_usage=True)
```
You can also write your own `device_map` by mapping each layer to a device. It should map all model parameters to a device, but you don't have to detail where all the submodules of a layer go if the entire layer is on the same device.
```python | 52_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | ```python
device_map = {"model.layers.1": 0, "model.layers.14": 1, "model.layers.31": "cpu", "lm_head": "disk"}
```
Access the `hf_device_map` attribute to see how Accelerate split the model across devices.
```py
gemma.hf_device_map
```
```python out
{'model.embed_tokens': 0,
'model.layers.0': 0,
'model.layers.1': 0,
'model.layers.2': 0,
'model.layers.3': 0,
'model.layers.4': 0,
'model.layers.5': 0,
'model.layers.6': 0,
'model.layers.7': 0,
'model.layers.8': 0,
'model.layers.9': 0,
'model.layers.10': 0, | 52_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | 'model.layers.5': 0,
'model.layers.6': 0,
'model.layers.7': 0,
'model.layers.8': 0,
'model.layers.9': 0,
'model.layers.10': 0,
'model.layers.11': 0,
'model.layers.12': 0,
'model.layers.13': 0,
'model.layers.14': 'cpu',
'model.layers.15': 'cpu',
'model.layers.16': 'cpu',
'model.layers.17': 'cpu',
'model.layers.18': 'cpu',
'model.layers.19': 'cpu',
'model.layers.20': 'cpu',
'model.layers.21': 'cpu',
'model.layers.22': 'cpu',
'model.layers.23': 'cpu',
'model.layers.24': 'cpu',
'model.layers.25': 'cpu', | 52_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/big_models.md | https://huggingface.co/docs/transformers/en/big_models/#accelerates-big-model-inference | .md | 'model.layers.22': 'cpu',
'model.layers.23': 'cpu',
'model.layers.24': 'cpu',
'model.layers.25': 'cpu',
'model.layers.26': 'cpu',
'model.layers.27': 'cpu',
'model.layers.28': 'cpu',
'model.layers.29': 'cpu',
'model.layers.30': 'cpu',
'model.layers.31': 'cpu',
'model.norm': 'cpu',
'lm_head': 'cpu'}
``` | 52_4_9 |