source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#single-image-inference | .md | processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
conversation = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What’s shown in this image?"},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": "This image shows a red stop sign."},]
},
{
"role": "user",
"content": [
{"type": "text", "text": "Describe the image in more details."},
],
},
]
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) | 377_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#single-image-inference | .md | text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Note that the template simply formats your prompt; you still have to tokenize it and obtain pixel values for your images
print(text_prompt)
>>> "USER: <image>\n<What’s shown in this image? ASSISTANT: This image shows a red stop sign.</s>USER: Describe the image in more details. ASSISTANT:"
``` | 377_3_2 |
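The note above points out that `apply_chat_template` only formats the prompt. Below is a minimal sketch of the remaining steps, assuming the stop-sign image referenced in the conversation is fetched from a URL of your choosing (the URL here is just an example) and reusing the `processor` and `text_prompt` defined above:

```python
import requests
import torch
from PIL import Image
from transformers import LlavaForConditionalGeneration

# Example image; any image matching the conversation works
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16, device_map="auto"
)

# Tokenize the formatted prompt and compute pixel values in a single processor call
inputs = processor(images=image, text=text_prompt, return_tensors="pt").to(model.device, torch.float16)

# Generate the assistant's reply for the last turn
output_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```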
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#batched-inference | .md | LLaVa also supports batched inference. Here is how you can do it:
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
# Load the model in half-precision
model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16, device_map="auto")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf") | 377_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#batched-inference | .md | # Get two different images
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image_stop = Image.open(requests.get(url, stream=True).raw)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_cats = Image.open(requests.get(url, stream=True).raw)
# Prepare a batch of two prompts
conversation_1 = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image?"},
],
},
] | 377_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#batched-inference | .md | conversation_2 = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image?"},
],
},
]
prompt_1 = processor.apply_chat_template(conversation_1, add_generation_prompt=True)
prompt_2 = processor.apply_chat_template(conversation_2, add_generation_prompt=True)
prompts = [prompt_1, prompt_2] | 377_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#batched-inference | .md | # We can simply feed images in the order they have to be used in the text prompt
inputs = processor(images=[image_stop, image_cats], text=prompts, padding=True, return_tensors="pt").to(model.device, torch.float16) | 377_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#batched-inference | .md | # Generate
generate_ids = model.generate(**inputs, max_new_tokens=30)
processor.batch_decode(generate_ids, skip_special_tokens=True)
```
- If you want to construct a chat prompt yourself, below is a list of prompt formats accepted by each llava checkpoint (a short usage sketch follows the formats):
[llava-interleave models](https://huggingface.co/collections/llava-hf/llava-interleave-668e19a97da0036aad4a2f19) require the following format:
```bash
"<|im_start|>user <image>\nWhat is shown in this image?<|im_end|><|im_start|>assistant"
``` | 377_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#batched-inference | .md | ```bash
"<|im_start|>user <image>\nWhat is shown in this image?<|im_end|><|im_start|>assistant"
```
For multi-turn conversations:
```bash
"<|im_start|>user <image>\n<prompt1><|im_end|><|im_start|>assistant <answer1><|im_end|><|im_start|>user <image>\n<prompt1><|im_end|><|im_start|>assistant "
```
[llava-1.5 models](https://huggingface.co/collections/llava-hf/llava-15-65f762d5b6941db5c2ba07e0) require the following format:
```bash
"USER: <image>\n<prompt> ASSISTANT:"
``` | 377_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#batched-inference | .md | ```bash
"USER: <image>\n<prompt> ASSISTANT:"
```
For multi-turn conversations:
```bash
"USER: <image>\n<prompt1> ASSISTANT: <answer1></s>USER: <prompt2> ASSISTANT: <answer2></s>USER: <prompt3> ASSISTANT:"
``` | 377_4_6 |
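If you build one of these strings by hand instead of calling `apply_chat_template`, it can be passed to the processor directly. A minimal sketch for a llava-1.5 checkpoint, reusing the `model`, `processor`, and `image_stop` objects from the batched example above:

```python
# Manually constructed llava-1.5 style prompt; <image> marks where the image features are inserted
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(images=image_stop, text=prompt, return_tensors="pt").to(model.device, torch.float16)
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```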
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#using-flash-attention-2 | .md | Flash Attention 2 is an even faster, further optimized version of the previous attention optimization; please refer to the [Flash Attention 2 section of the performance docs](https://huggingface.co/docs/transformers/perf_infer_gpu_one). | 377_5_0 |
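As a sketch, and assuming `flash-attn` is installed and your GPU supports it, Flash Attention 2 is enabled the same way as for other models, by passing `attn_implementation="flash_attention_2"` at load time together with a half-precision dtype:

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    torch_dtype=torch.float16,                 # Flash Attention 2 requires fp16/bf16 weights
    attn_implementation="flash_attention_2",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
```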
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaVa.
<PipelineTag pipeline="image-to-text"/>
- A [Google Colab demo](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing) on how to run Llava on a free-tier Google Colab instance leveraging 4-bit inference. | 377_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#resources | .md | - A [similar notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LLaVa/Inference_with_LLaVa_for_multimodal_generation.ipynb) showcasing batched inference. 🌎 | 377_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaconfig | .md | This is the configuration class to store the configuration of a [`LlavaForConditionalGeneration`]. It is used to instantiate a
Llava model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Llava-9B.
e.g. [llava-hf/llava-9b](https://huggingface.co/llava-hf/llava-9b)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 377_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`Union[AutoConfig, dict]`, *optional*, defaults to `CLIPVisionConfig`):
The config object or dictionary of the vision backbone.
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `LlamaConfig`):
The config object or dictionary of the text backbone.
ignore_index (`int`, *optional*, defaults to -100): | 377_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaconfig | .md | The config object or dictionary of the text backbone.
ignore_index (`int`, *optional*, defaults to -100):
The ignore index for the loss function.
image_token_index (`int`, *optional*, defaults to 32000):
The image token index to encode the image prompt.
projector_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The activation function used by the multimodal projector.
vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`): | 377_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaconfig | .md | vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`):
The feature selection strategy used to select the vision feature from the vision backbone.
Can be one of `"default"` or `"full"`.
vision_feature_layer (`int`, *optional*, defaults to -2):
The index of the layer to select the vision feature.
image_seq_length (`int`, *optional*, defaults to 576):
Sequence length of one image embedding.
multimodal_projector_bias (`bool`, *optional*, defaults to `True`): | 377_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaconfig | .md | Sequence length of one image embedding.
multimodal_projector_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the multimodal projector.
Example:
```python
>>> from transformers import LlavaForConditionalGeneration, LlavaConfig, CLIPVisionConfig, LlamaConfig | 377_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaconfig | .md | >>> # Initializing a CLIP-vision config
>>> vision_config = CLIPVisionConfig()
>>> # Initializing a Llama config
>>> text_config = LlamaConfig()
>>> # Initializing a Llava llava-1.5-7b style configuration
>>> configuration = LlavaConfig(vision_config, text_config)
>>> # Initializing a model from the llava-1.5-7b style configuration
>>> model = LlavaForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 377_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaprocessor | .md | Constructs a Llava processor which wraps a Llava image processor and a Llava tokenizer into a single processor.
[`LlavaProcessor`] offers all the functionalities of [`CLIPImageProcessor`] and [`LlamaTokenizerFast`]. See the
[`~LlavaProcessor.__call__`] and [`~LlavaProcessor.decode`] for more information.
Args:
image_processor ([`CLIPImageProcessor`], *optional*):
The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
The tokenizer is a required input. | 377_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaprocessor | .md | The image processor is a required input.
tokenizer ([`LlamaTokenizerFast`], *optional*):
The tokenizer is a required input.
patch_size (`int`, *optional*):
Patch size from the vision tower.
vision_feature_select_strategy (`str`, *optional*):
The feature selection strategy used to select the vision feature from the vision backbone.
Should be the same as in the model's config.
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string. | 377_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaprocessor | .md | in a chat into a tokenizable string.
image_token (`str`, *optional*, defaults to `"<image>"`):
Special token used to denote image location.
num_additional_image_tokens (`int`, *optional*, defaults to 0):
Number of additional tokens added to the image embeddings, such as CLS (+1). If the backbone has no CLS or other
extra tokens appended, no need to set this arg. | 377_8_2 |
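In practice the processor is usually loaded from a checkpoint rather than constructed by hand. A short sketch of both paths; the explicit construction is illustrative only, and `patch_size=14` is an assumption matching the CLIP ViT-L/14 backbone of the llava-1.5 checkpoints:

```python
from transformers import AutoProcessor, CLIPImageProcessor, LlamaTokenizerFast, LlavaProcessor

# Typical path: load the image processor, tokenizer, and chat template from the checkpoint
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Illustrative explicit construction from the two wrapped components
image_processor = CLIPImageProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
tokenizer = LlamaTokenizerFast.from_pretrained("llava-hf/llava-1.5-7b-hf")
processor = LlavaProcessor(image_processor=image_processor, tokenizer=tokenizer, patch_size=14)
```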
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaforconditionalgeneration | .md | The LLAVA model which consists of a vision backbone and a language model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 377_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaforconditionalgeneration | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LlavaConfig`] or [`LlavaVisionConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 377_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava.md | https://huggingface.co/docs/transformers/en/model_doc/llava/#llavaforconditionalgeneration | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 377_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/ | .md | <!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on | 378_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/ | .md | Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 378_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#overview | .md | The MegatronGPT2 model was proposed in [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
*Recent work in language modeling demonstrates that training large transformer models advances the state of the art in | 378_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#overview | .md | *Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our | 378_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#overview | .md | efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complementary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain | 378_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#overview | .md | illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 | 378_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#overview | .md | the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA | 378_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#overview | .md | achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).*
This model was contributed by [jdemouth](https://huggingface.co/jdemouth). The original code can be found [here](https://github.com/NVIDIA/Megatron-LM). | 378_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#overview | .md | That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular, it
contains a hybrid model parallel approach using "tensor parallel" and "pipeline parallel" techniques. | 378_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#usage-tips | .md | We have provided pretrained [GPT2-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m) checkpoints
for use in evaluating or fine-tuning downstream tasks.
To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and set up the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1). | 378_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#usage-tips | .md | Alternatively, you can directly download the checkpoints using:
```bash
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_gpt2_345m_v0_0.zip
```
Once you have obtained the checkpoint from NVIDIA GPU Cloud (NGC), you have to convert it to a format that will easily
be loaded by Hugging Face Transformers GPT2 implementation.
The following command allows you to do the conversion. We assume that the folder `models/megatron_gpt2` contains | 378_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/megatron_gpt2.md | https://huggingface.co/docs/transformers/en/model_doc/megatron_gpt2/#usage-tips | .md | The following command allows you to do the conversion. We assume that the folder `models/megatron_gpt2` contains
`megatron_gpt2_345m_v0_0.zip` and that the command is run from that folder:
```bash
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip
```
<Tip>
The MegatronGPT2 architecture is the same as OpenAI GPT-2. Refer to [GPT-2 documentation](gpt2) for information on
configuration classes and their parameters.
</Tip> | 378_2_2 |
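Once the conversion script has produced a standard Transformers checkpoint, it can be loaded with the regular GPT-2 classes. The sketch below assumes the converted files ended up in `models/megatron_gpt2` and that the standard GPT-2 BPE vocabulary is used for tokenization; adjust both to your setup:

```python
from transformers import AutoTokenizer, GPT2LMHeadModel

# Folder containing the converted config.json and model weights (assumed location)
checkpoint_path = "models/megatron_gpt2"

# The Megatron GPT2-345M checkpoint uses the standard GPT-2 BPE vocabulary
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained(checkpoint_path)

inputs = tokenizer("Megatron-LM makes it possible to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```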
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on | 379_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/ | .md | Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 379_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#license | .md | The use of this model is governed by the [NVIDIA AI Foundation Models Community License Agreement](https://developer.nvidia.com/downloads/nv-ai-foundation-models-license). | 379_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#description | .md | Nemotron-4 is a family of enterprise ready generative text models compatible with [NVIDIA NeMo Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/). | 379_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#description | .md | NVIDIA NeMo is an end-to-end, cloud-native platform to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. To get access to NeMo Framework, please sign up at [this link](https://developer.nvidia.com/nemo-framework/join). | 379_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#references | .md | [Announcement Blog](https://developer.nvidia.com/blog/nvidia-ai-foundation-models-build-custom-enterprise-chatbots-and-co-pilots-with-production-ready-llms/) | 379_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#model-architecture | .md | **Architecture Type:** Transformer
**Network Architecture:** Transformer Decoder (auto-regressive language model). | 379_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#minitron-4b-base | .md | Minitron is a family of small language models (SLMs) obtained by pruning NVIDIA's [Nemotron-4 15B](https://arxiv.org/abs/2402.16819) model. We prune model embedding size, attention heads, and MLP intermediate dimension, following which, we perform continued training with distillation to arrive at the final models. | 379_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#minitron-4b-base | .md | Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression | 379_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#minitron-4b-base | .md | comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details. | 379_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#minitron-4b-base | .md | Minitron models are for research and development only. | 379_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#huggingface-quickstart | .md | The following code provides an example of how to load the Minitron-4B model and use it to perform text generation.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model
model_path = 'nvidia/Minitron-4B-Base'
tokenizer = AutoTokenizer.from_pretrained(model_path)
device = 'cuda'
dtype = torch.bfloat16
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device) | 379_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#huggingface-quickstart | .md | # Prepare the input text
prompt = 'Complete the paragraph: our solar system is'
inputs = tokenizer.encode(prompt, return_tensors='pt').to(model.device)
# Generate the output
outputs = model.generate(inputs, max_length=20)
# Decode and print the output
output_text = tokenizer.decode(outputs[0])
print(output_text)
``` | 379_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#license | .md | Minitron is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf). | 379_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#evaluation-results | .md | *5-shot performance.* Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):
| Average |
| :---- |
| 58.6 |
*Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:
| HellaSwag | Winogrande | GSM8K| ARC-C | XLSum |
| :------------- | :------------- | :------------- | :------------- | :------------- |
| 75.0 | 74.0 | 24.1 | 50.9 | 29.5 | 379_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#evaluation-results | .md | | :------------- | :------------- | :------------- | :------------- | :------------- |
| 75.0 | 74.0 | 24.1 | 50.9 | 29.5
*Code generation performance*. Evaluated using [HumanEval](https://github.com/openai/human-eval):
| p@1, 0-Shot |
| :------------- |
| 23.3 |
Please refer to our [paper](https://arxiv.org/abs/2407.14679) for the full set of results. | 379_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#citation | .md | If you find our work helpful, please consider citing our paper:
```
@article{minitron2024,
title={Compact Language Models via Pruning and Knowledge Distillation},
author={Saurav Muralidharan and Sharath Turuvekere Sreenivas and Raviraj Joshi and Marcin Chochowski and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro and Jan Kautz and Pavlo Molchanov},
journal={arXiv preprint arXiv:2407.14679},
year={2024},
url={https://arxiv.org/abs/2407.14679},
}
``` | 379_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | This is the configuration class to store the configuration of a [`NemotronModel`]. It is used to instantiate a Nemotron
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Nemotron-8B.
e.g. [nvidia/nemotron-3-8b-base-4k-hf](https://huggingface.co/nvidia/nemotron-3-8b-base-4k-hf). | 379_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | e.g. [nvidia/nemotron-3-8b-base-4k-hf](https://huggingface.co/nvidia/nemotron-3-8b-base-4k-hf).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 256000):
Vocabulary size of the Nemotron model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`NemotronModel`] | 379_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | `inputs_ids` passed when calling [`NemotronModel`]
hidden_size (`int`, *optional*, defaults to 6144):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 24576):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 48):
Number of attention heads for each attention layer in the Transformer decoder.
head_dim (`int`, *optional*): | 379_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | Number of attention heads for each attention layer in the Transformer decoder.
head_dim (`int`, *optional*):
Projection weights dimension in multi-head attention. Set to hidden_size // num_attention_heads if None
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if | 379_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by mean pooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`. | 379_10_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"relu2"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 4096):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.0134): | 379_10_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | initializer_range (`float`, *optional*, defaults to 0.0134):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id. | 379_10_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 2):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 3):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings. | 379_10_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
partial_rotary_factor (`float`, *optional*, defaults to 0.5): Percentage of the query and keys which will have rotary embedding.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0): | 379_10_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
mlp_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in up_proj and down_proj layers in the MLP layers.
```python
>>> from transformers import NemotronModel, NemotronConfig | 379_10_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronconfig | .md | >>> # Initializing a Nemotron nemotron-15b style configuration
>>> configuration = NemotronConfig()
>>> # Initializing a model from the nemotron-15b style configuration
>>> model = NemotronModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 379_10_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronmodel | .md | The bare Nemotron Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 379_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NemotronConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 379_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronmodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`NemotronDecoderLayer`]
Args:
config: NemotronConfig
Methods: forward | 379_11_2 |
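As noted above, initializing from a configuration does not load any weights. A brief sketch of loading pretrained weights instead and extracting hidden states, using the Minitron-4B checkpoint from the quickstart above (any Nemotron-architecture checkpoint works the same way):

```python
import torch
from transformers import AutoTokenizer, NemotronModel

model_path = "nvidia/Minitron-4B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = NemotronModel.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Nemotron models are", return_tensors="pt").to(model.device)
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, seq_len, hidden_size)
print(hidden_states.shape)
```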
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforcausallm | .md | No docstring available for NemotronForCausalLM
Methods: forward | 379_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforsequenceclassification | .md | The Nemotron Model transformer with a sequence classification head on top (linear layer).
[`NemotronForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If | 379_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforsequenceclassification | .md | `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the | 379_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforsequenceclassification | .md | This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 379_13_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforsequenceclassification | .md | and behavior.
Parameters:
config ([`NemotronConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 379_13_3 |
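A minimal sketch of the last-token pooling behavior described above; the checkpoint is the base model from the quickstart, so the classification head is randomly initialized and would need fine-tuning before its predictions mean anything:

```python
import torch
from transformers import AutoTokenizer, NemotronForSequenceClassification

model_path = "nvidia/Minitron-4B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = NemotronForSequenceClassification.from_pretrained(model_path, num_labels=2)

# A pad token id must be known so the model can find the last non-padding token in each row
if model.config.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one logit vector per sequence, taken at the last non-pad token
print(logits.shape)  # (2, 2)
```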
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforquestionanswering | .md | The Nemotron Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 379_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforquestionanswering | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NemotronConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not | 379_14_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronforquestionanswering | .md | Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 379_14_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronfortokenclassification | .md | The Nemotron Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 379_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronfortokenclassification | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`NemotronConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not | 379_15_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nemotron.md | https://huggingface.co/docs/transformers/en/model_doc/nemotron/#nemotronfortokenclassification | .md | Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 379_15_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 380_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 380_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#overview | .md | The OPT model was proposed in [Open Pre-trained Transformer Language Models](https://arxiv.org/pdf/2205.01068) by Meta AI.
OPT is a series of open-sourced large causal language models which perform similarly to GPT-3.
The abstract from the paper is the following: | 380_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#overview | .md | *Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B | 380_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#overview | .md | We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.* | 380_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#overview | .md | This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Younes Belkada](https://huggingface.co/ybelkada), and [Patrick Von Platen](https://huggingface.co/patrickvonplaten).
The original code can be found [here](https://github.com/facebookresearch/metaseq).
Tips:
- OPT has the same architecture as [`BartDecoder`].
- Contrary to GPT2, OPT adds the EOS token `</s>` to the beginning of every prompt (see the snippet below). | 380_1_3 |
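A small snippet to see the tip about the `</s>` token in action (using the 350M checkpoint as an example):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")

ids = tokenizer("Hello world").input_ids
print(ids[0] == tokenizer.convert_tokens_to_ids("</s>"))  # True: `</s>` is prepended to every prompt
print(tokenizer.decode(ids))                              # "</s>Hello world"
```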
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're
interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-generation" /> | 380_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | <PipelineTag pipeline="text-generation" />
- A notebook on [fine-tuning OPT with PEFT, bitsandbytes, and Transformers](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing). 🌎
- A blog post on [decoding strategies with OPT](https://huggingface.co/blog/introducing-csearch#62-example-two---opt).
- [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. | 380_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | - [`OPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). | 380_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | - [`TFOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). | 380_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | - [`FlaxOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling).
<PipelineTag pipeline="text-classification" />
- [Text classification task guide](sequence_classification.md) | 380_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | <PipelineTag pipeline="text-classification" />
- [Text classification task guide](sequence_classification.md)
- [`OPTForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
<PipelineTag pipeline="question-answering" /> | 380_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | <PipelineTag pipeline="question-answering" />
- [`OPTForQuestionAnswering`] is supported by this [question answering example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter
of the 🤗 Hugging Face Course.
⚡️ Inference | 380_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#resources | .md | - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter
of the 🤗 Hugging Face Course.
⚡️ Inference
- A blog post on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT. | 380_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#combining-opt-and-flash-attention-2 | .md | First, make sure to install the latest version of Flash Attention 2.
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Attention 2, refer to the snippet below:
```python | 380_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#combining-opt-and-flash-attention-2 | .md | To load and run a model using Flash Attention 2, refer to the snippet below:
```python
>>> import torch
>>> from transformers import OPTForCausalLM, GPT2Tokenizer
>>> device = "cuda" # the device to load the model onto | 380_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#combining-opt-and-flash-attention-2 | .md | >>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
>>> tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
>>> prompt = ("A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the "
"Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived "
"there?") | 380_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#combining-opt-and-flash-attention-2 | .md | >>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device) | 380_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#combining-opt-and-flash-attention-2 | .md | >>> generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'</s>A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived there?\nStatue: I have lived here for about a year.\nHuman: What is your favorite place to eat?\nStatue: I love'
``` | 380_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#expected-speedups | .md | Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using `facebook/opt-2.7b` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
<div style="text-align: center">
<img src="https://user-images.githubusercontent.com/49240599/281101546-d2fca6d2-ee44-48f3-9534-ba8d5bee4531.png">
</div> | 380_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#expected-speedups | .md | <img src="https://user-images.githubusercontent.com/49240599/281101546-d2fca6d2-ee44-48f3-9534-ba8d5bee4531.png">
</div>
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using `facebook/opt-350m` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
<div style="text-align: center"> | 380_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/opt.md | https://huggingface.co/docs/transformers/en/model_doc/opt/#expected-speedups | .md | <div style="text-align: center">
<img src="https://user-images.githubusercontent.com/49240599/281101682-d1144e90-0dbc-46f4-8fc8-c6206cb793c9.png">
</div> | 380_4_2 |