source | url | file_type | chunk | chunk_id |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforsequenceclassification | .md | The MPT Model transformer with a sequence classification head on top (linear layer).
[`MptForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If | 128_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforsequenceclassification | .md | `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the | 128_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforsequenceclassification | .md | This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters: | 128_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforsequenceclassification | .md | and behavior.
Parameters:
config ([`MptConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 128_7_3 |
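To make the last-token behavior concrete, here is a minimal sketch with a tiny, randomly initialized configuration (the sizes, label count, and padding id below are illustrative assumptions, not values from a released checkpoint):
```python
import torch
from transformers import MptConfig, MptForSequenceClassification

# Tiny random model, just to illustrate the API; real usage would call
# MptForSequenceClassification.from_pretrained(...) on a fine-tuned checkpoint.
config = MptConfig(d_model=64, n_heads=4, n_layers=2, vocab_size=100, pad_token_id=0, num_labels=2)
model = MptForSequenceClassification(config)

# Batch of two right-padded sequences; id 0 is the padding token defined above,
# so the classification logits are read from the last non-padding position of each row.
input_ids = torch.tensor([[5, 6, 7, 0, 0], [8, 9, 10, 11, 12]])
attention_mask = (input_ids != 0).long()

with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
print(logits.shape)  # torch.Size([2, 2]): one label distribution per sequence
```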
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptfortokenclassification | .md | MPT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 128_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptfortokenclassification | .md | This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MptConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 128_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptfortokenclassification | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 128_8_2 |
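As a quick shape check, here is a similar minimal sketch with a tiny, randomly initialized configuration (illustrative sizes only; real NER usage would load a fine-tuned checkpoint instead):
```python
import torch
from transformers import MptConfig, MptForTokenClassification

# Tiny random model, just to show the per-token output shape.
config = MptConfig(d_model=64, n_heads=4, n_layers=2, vocab_size=100, num_labels=5)
model = MptForTokenClassification(config)

input_ids = torch.tensor([[5, 6, 7, 8]])
with torch.no_grad():
    logits = model(input_ids=input_ids).logits
print(logits.shape)  # torch.Size([1, 4, 5]): one label distribution per input token
```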
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforquestionanswering | .md | The MPT Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD
(a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.) | 128_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforquestionanswering | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MptConfig`]): Model configuration class with all the parameters of the model. | 128_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mpt.md | https://huggingface.co/docs/transformers/en/model_doc/mpt/#mptforquestionanswering | .md | and behavior.
Parameters:
config ([`MptConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 128_9_2 |
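And the same kind of minimal sketch for the span head (illustrative sizes only; a real setup would load a checkpoint fine-tuned on an extractive QA dataset):
```python
import torch
from transformers import MptConfig, MptForQuestionAnswering

# Tiny random model, just to show the span-logit outputs.
config = MptConfig(d_model=64, n_heads=4, n_layers=2, vocab_size=100)
model = MptForQuestionAnswering(config)

input_ids = torch.tensor([[5, 6, 7, 8, 9]])
with torch.no_grad():
    outputs = model(input_ids=input_ids)
# One start logit and one end logit per token; the argmax of each gives the predicted answer span.
print(outputs.start_logits.shape, outputs.end_logits.shape)  # torch.Size([1, 5]) twice
```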
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 129_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 129_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#overview | .md | The Code Llama model was proposed in [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, | 129_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#overview | .md | Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. | 129_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#overview | .md | The abstract from the paper is the following: | 129_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#overview | .md | *We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All | 129_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#overview | .md | (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code | 129_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#overview | .md | open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.* | 129_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#overview | .md | Check out all Code Llama model checkpoints [here](https://huggingface.co/models?search=code_llama) and the officially released ones in the [Meta Llama org](https://huggingface.co/meta-llama).
This model was contributed by [ArthurZucker](https://huggingface.co/ArthurZ). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). | 129_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | <Tip warning={true}>
The `Llama2` family models, on which Code Llama is based, were trained using `bfloat16`, but the original inference uses `float16`. Let's look at the different precisions: | 129_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | * `float32`: PyTorch convention on model initialization is to load models in `float32`, regardless of the `dtype` the model weights were stored in. `transformers` also follows this convention for consistency with PyTorch, so this is the default. If you want the `AutoModel` API to load the checkpoint with its stored weight type, you must specify `torch_dtype="auto"`, e.g. `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`. | 129_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | * `bfloat16`: Code Llama was trained with this precision, so we recommend using it for further training or fine-tuning.
* `float16`: We recommend running inference using this precision, as it's usually faster than `bfloat16`, and evaluation metrics show no discernible degradation with respect to `bfloat16`. You can also run inference using `bfloat16`, and we recommend you check inference results with both `float16` and `bfloat16` after fine-tuning. | 129_2_2 |
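To make these options concrete, here is a minimal sketch (assuming the `meta-llama/CodeLlama-7b-hf` checkpoint used later on this page; in practice you would pick just one of the three loads):
```python
import torch
from transformers import AutoModelForCausalLM

checkpoint = "meta-llama/CodeLlama-7b-hf"

# Default: weights are loaded as float32, whatever dtype they were stored in.
model_fp32 = AutoModelForCausalLM.from_pretrained(checkpoint)

# Keep the dtype the checkpoint was stored with on the Hub.
model_auto = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto")

# float16 for inference; bfloat16 is preferred for further training or fine-tuning.
model_fp16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16)
```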
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | As mentioned above, the `dtype` of the storage weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online) and then cast to the default `dtype` of `torch` (which is `torch.float32`). If a `torch_dtype` is specified, it is used instead.
</Tip>
Tips: | 129_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | </Tip>
Tips:
- The infilling task is supported out of the box. Use `tokenizer.fill_token` where you want your input to be filled.
- The model conversion script is the same as for the `Llama2` family:
Here is a sample usage:
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
``` | 129_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions
come in several checkpoint shards, each shard contains a part of the model's weights, so all of them need to be loaded in RAM).
After conversion, the model and tokenizer can be loaded via:
```python
>>> from transformers import LlamaForCausalLM, CodeLlamaTokenizer | 129_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | >>> tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
>>> model = LlamaForCausalLM.from_pretrained("meta-llama/CodeLlama-7b-hf")
>>> PROMPT = '''def remove_non_ascii(s: str) -> str:
... """ <FILL_ME>
... return result
... '''
>>> input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
>>> generated_ids = model.generate(input_ids, max_new_tokens=128) | 129_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | >>> filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens = True)[0]
>>> print(PROMPT.replace("<FILL_ME>", filling))
def remove_non_ascii(s: str) -> str:
""" Remove non-ASCII characters from a string.
<BLANKLINE>
Args:
s: The string to remove non-ASCII characters from.
<BLANKLINE>
Returns:
The string with non-ASCII characters removed.
"""
result = ""
for c in s:
if ord(c) < 128:
result += c
return result
<BLANKLINE>
```
If you only want the infilled part:
```python | 129_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | for c in s:
if ord(c) < 128:
result += c
return result
<BLANKLINE>
```
If you only want the infilled part:
```python
>>> from transformers import pipeline
>>> import torch | 129_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | >>> generator = pipeline("text-generation",model="meta-llama/CodeLlama-7b-hf",torch_dtype=torch.float16, device_map="auto")
>>> generator('def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return result', max_new_tokens = 128)
[{'generated_text': 'def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return resultRemove non-ASCII characters from a string. """\n result = ""\n for c in s:\n if ord(c) < 128:\n result += c'}]
``` | 129_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | Under the hood, the tokenizer [automatically splits by `<FILL_ME>`](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) to create a formatted input string that follows [the original training pattern](https://github.com/facebookresearch/codellama/blob/cb51c14ec761370ba2e2bc351374a79265d0465e/llama/generation.py#L402). This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To | 129_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need for this model or others, try [this calculator](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) which can help determine that value. | 129_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#usage-tips-and-examples | .md | The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
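For example (a small sketch assuming the `meta-llama/CodeLlama-7b-hf` tokenizer used above):
```python
from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")

ids = tokenizer.encode("Banana", add_special_tokens=False)
# The first token starts the word, so decoding does not prepend the usual
# sentencepiece prefix space: the round-trip gives back "Banana", not " Banana".
print(tokenizer.decode(ids))
```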
<Tip>
Code Llama has the same architecture as the `Llama2` models, refer to [Llama2's documentation page](llama2) for the API reference.
Find Code Llama tokenizer reference below.
</Tip> | 129_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as
there is no padding token in the original model.
The default configuration matches that of
[meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json)
which supports prompt infilling.
Args:
vocab_file (`str`):
Path to the vocabulary file.
unk_token (`str`, *optional*, defaults to `"<unk>"`): | 129_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | Args:
vocab_file (`str`):
Path to the vocabulary file.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip> | 129_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
prefix_token (`str`, *optional*, defaults to `"▁<PRE>"`):
Prefix token used for infilling.
middle_token (`str`, *optional*, defaults to `"▁<MID>"`):
Middle token used for infilling.
suffix_token (`str`, *optional*, defaults to `"▁<SUF>"`): | 129_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | Middle token used for infilling.
suffix_token (`str`, *optional*, defaults to `"▁<SUF>"`):
Suffix token used for infilling.
eot_token (`str`, *optional*, defaults to `"▁<EOT>"`):
End of text token used for infilling.
fill_token (`str`, *optional*, defaults to `"<FILL_ME>"`):
The token used to split the input between the prefix and suffix.
suffix_first (`bool`, *optional*, defaults to `False`):
Whether the input prompt and suffix should be formatted with the suffix first. | 129_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | Whether the input prompt and suffix should be formatted with the suffix first.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed. | 129_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout. | 129_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether to add a beginning of sequence token at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether to add an end of sequence token at the end of sequences.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the tokenization spaces. | 129_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizer | .md | clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the tokenization spaces.
additional_special_tokens (`List[str]`, *optional*):
Additional special tokens used by the tokenizer.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Llama should be used.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary | 129_3_7 |
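To see how the infilling tokens above come into play, here is a small sketch (assuming the `meta-llama/CodeLlama-7b-hf` checkpoint from the examples earlier on this page):
```python
from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")

prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
tokens = tokenizer.tokenize(prompt)

# The prompt is split on fill_token ("<FILL_ME>"): the text before it becomes the
# prefix and the text after it the suffix, stitched together with the special
# prefix/suffix/middle tokens so the model knows where to infill.
print(tokenizer.prefix_token, tokenizer.suffix_token, tokenizer.middle_token)
print(tokens[0])  # the prefix token "▁<PRE>"
```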
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding.
This notably uses ByteFallback and no normalization.
```python
>>> from transformers import CodeLlamaTokenizerFast | 129_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | >>> tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
>>> tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]
```
If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the model, or
call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the | 129_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the
values of the first token and final token of an encoded sequence will not be correct). For more details, check out the
[post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
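A small sketch of that pattern (assuming the `meta-llama/CodeLlama-7b-hf` checkpoint; toggling `add_eos_token` is just one example of a change that affects post-processing):
```python
from transformers import CodeLlamaTokenizerFast

tokenizer = CodeLlamaTokenizerFast.from_pretrained("meta-llama/CodeLlama-7b-hf")
print(tokenizer.encode("Hello this is a test")[0] == tokenizer.bos_token_id)  # True: bos is prepended by default

# After changing how special tokens are handled, refresh the post-processor so
# encoded sequences actually reflect the new setting.
tokenizer.add_eos_token = True
tokenizer.update_post_processor()
print(tokenizer.encode("Hello this is a test")[-1] == tokenizer.eos_token_id)  # True: eos is now appended
```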
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should | 129_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods. The default configuration matches that of
[meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json)
which supports prompt infilling.
Args:
vocab_file (`str`, *optional*): | 129_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | which supports prompt infilling.
Args:
vocab_file (`str`, *optional*):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that
contains the vocabulary necessary to instantiate a tokenizer.
tokenizer_file (`str`, *optional*):
[tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that
contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`): | 129_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra
spaces.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str`, *optional*, defaults to `"<s>"`): | 129_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | token instead.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
prefix_token (`str`, *optional*, defaults to `"▁<PRE>"`):
Prefix token used for infilling.
middle_token (`str`, *optional*, defaults to `"▁<MID>"`):
Middle token used for infilling.
suffix_token (`str`, *optional*, defaults to `"▁<SUF>"`): | 129_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | Middle token used for infilling.
suffix_token (`str`, *optional*, defaults to `"▁<SUF>"`):
Suffix token used for infilling.
eot_token (`str`, *optional*, defaults to `"▁<EOT>"`):
End of text token used for infilling.
fill_token (`str`, *optional*, defaults to `"<FILL_ME>"`):
The token used to split the input between the prefix and suffix.
additional_special_tokens (`List[str]`, *optional*):
Additional special tokens used by the tokenizer.
add_bos_token (`bool`, *optional*, defaults to `True`): | 129_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | Additional special tokens used by the tokenizer.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether to add a beginning of sequence token at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether to add an end of sequence token at the end of sequences.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Llama should be used.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask | 129_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/code_llama.md | https://huggingface.co/docs/transformers/en/model_doc/code_llama/#codellamatokenizerfast | .md | Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary | 129_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 130_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 130_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#overview | .md | The LLaVA-NeXT model was proposed in [LLaVA-NeXT: Improved reasoning, OCR, and world knowledge](https://llava-vl.github.io/blog/2024-01-30-llava-next/) by Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, Yong Jae Lee. LLaVa-NeXT (also called LLaVa-1.6) improves upon [LLaVa](llava) by increasing the input image resolution and training on an improved visual instruction tuning dataset to improve OCR and common sense reasoning.
The introduction from the blog is the following: | 130_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#overview | .md | The introduction from the blog is the following:
*In October 2023, we released LLaVA-1.5 with a simple and efficient design along with great performance on a benchmark suite of 12 datasets. It has since served as the foundation of many comprehensive studies of data, model, and capabilities of large multimodal models (LMM), and has enabled various new applications. | 130_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#overview | .md | Today, we are thrilled to present LLaVA-NeXT, with improved reasoning, OCR, and world knowledge. LLaVA-NeXT even exceeds Gemini Pro on several benchmarks.
Compared with LLaVA-1.5, LLaVA-NeXT has several improvements:
Increasing the input image resolution to 4x more pixels. This allows it to grasp more visual details. It supports three aspect ratios, up to 672x672, 336x1344, 1344x336 resolution.
Better visual reasoning and OCR capability with an improved visual instruction tuning data mixture. | 130_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#overview | .md | Better visual reasoning and OCR capability with an improved visual instruction tuning data mixture.
Better visual conversation for more scenarios, covering different applications. Better world knowledge and logical reasoning.
Efficient deployment and inference with SGLang. | 130_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#overview | .md | Efficient deployment and inference with SGLang.
Along with performance improvements, LLaVA-NeXT maintains the minimalist design and data efficiency of LLaVA-1.5. It re-uses the pretrained connector of LLaVA-1.5, and still uses less than 1M visual instruction tuning samples. The largest 34B variant finishes training in ~1 day with 32 A100s.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/llava_next_overview.png"
alt="drawing" width="600"/> | 130_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#overview | .md | alt="drawing" width="600"/>
<small> LLaVa-NeXT incorporates a higher input resolution by encoding various patches of the input image. Taken from the <a href="https://arxiv.org/abs/2310.03744">original paper.</a> </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/haotian-liu/LLaVA/tree/main). | 130_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | - We advise users to use `padding_side="left"` when computing batched generation as it leads to more accurate results. Simply make sure to call `processor.tokenizer.padding_side = "left"` before generating.
<Tip warning={true}>
- Llava-Next uses a different number of patches per image and thus has to pad the inputs inside the modeling code, aside from the padding done when processing the inputs. The default setting is "left-padding" if the model is in `eval()` mode, otherwise "right-padding".
</Tip> | 130_2_0 |
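For instance (a sketch assuming the `llava-hf/llava-v1.6-mistral-7b-hf` processor used throughout this page):
```python
from transformers import LlavaNextProcessor

processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
# Left-pad batched prompts so generation continues right after the real tokens.
processor.tokenizer.padding_side = "left"
```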
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | </Tip>
> [!NOTE]
> LLaVA models after release v4.46 will raise warnings about adding `processor.patch_size = {{patch_size}}`, `processor.num_additional_image_tokens = {{num_additional_image_tokens}}` and `processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. It is strongly recommended to add the attributes to the processor if you own the model checkpoint, or open a PR if it is not owned by you.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | Adding these attributes means that LLaVA will try to infer the number of image tokens required per image and expand the text with as many `<image>` placeholders as there will be tokens. Usually it is around 500 tokens per image, so make sure that the text is not truncated as otherwise merging the embeddings will fail. | 130_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | The attributes can be obtained from model config, as `model.config.vision_config.patch_size` or `model.config.vision_feature_select_strategy`. The `num_additional_image_tokens` should be `1` if the vision backbone adds a CLS token or `0` if nothing extra is added to the vision patches. | 130_2_3 |
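A minimal sketch of wiring those attributes up (assuming the `llava-hf/llava-v1.6-mistral-7b-hf` checkpoint, whose CLIP vision tower adds a CLS token, hence `num_additional_image_tokens = 1`):
```python
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")

# Copy the values from the model config so the processor can expand each "<image>"
# placeholder into the right number of image tokens.
processor.patch_size = model.config.vision_config.patch_size
processor.vision_feature_select_strategy = model.config.vision_feature_select_strategy
processor.num_additional_image_tokens = 1  # the CLIP vision backbone adds a CLS token
```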
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | - Note that each checkpoint has been trained with a specific prompt format, depending on which large language model (LLM) was used. You can use the processor's `apply_chat_template` to format your prompts correctly. For that you have to construct a conversation history, passing a plain string will not format your prompt. Each message in the conversation history for chat templates is a dictionary with keys "role" and "content". The "content" should be a list of dictionaries, for "text" and "image" | 130_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | is a dictionary with keys "role" and "content". The "content" should be a list of dictionaries, for "text" and "image" modalities. Below is an example of how to do that and the list of formats accepted by each checkpoint. | 130_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | We will use [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) and a conversation history of text and image. Each content field has to be a list of dicts, as follows:
```python
from transformers import LlavaNextProcessor | 130_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
conversation = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What’s shown in this image?"},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": "This image shows a red stop sign."},]
},
{
"role": "user",
"content": [
{"type": "text", "text": "Describe the image in more details."},
],
},
]
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) | 130_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | # Note that the template simply formats your prompt, you still have to tokenize it and obtain pixel values for your images
print(text_prompt)
>>> "[INST] <image>\nWhat's shown in this image? [/INST] This image shows a red stop sign. [INST] Describe the image in more details. [/INST]"
```
- If you want to construct a chat prompt yourself, below is a list of possible formats.
[llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) requires the following format:
```bash | 130_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) requires the following format:
```bash
"[INST] <image>\nWhat is shown in this image? [/INST]"
```
[llava-v1.6-vicuna-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-vicuna-7b-hf) and [llava-v1.6-vicuna-13b-hf](https://huggingface.co/llava-hf/llava-v1.6-vicuna-13b-hf) require the following format:
```bash | 130_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | ```bash
"A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. USER: <image>\nWhat is shown in this image? ASSISTANT:"
```
[llava-v1.6-34b-hf](https://huggingface.co/llava-hf/llava-v1.6-34b-hf) requires the following format:
```bash
"<|im_start|>system\nAnswer the questions.<|im_end|><|im_start|>user\n<image>\nWhat is shown in this image?<|im_end|><|im_start|>assistant\n"
``` | 130_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | ```
[llama3-llava-next-8b-hf](https://huggingface.co/llava-hf/llama3-llava-next-8b-hf) requires the following format:
```bash | 130_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | ```bash
"<|start_header_id|>system<|end_header_id|>\n\nYou are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language.<|eot_id|><|start_header_id|><|start_header_id|>user<|end_header_id|>\n\n<image>\nWhat is shown in this image?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
``` | 130_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#usage-tips | .md | ```
[llava-next-72b-hf](https://huggingface.co/llava-hf/llava-next-72b-hf) and [llava-next-110b-hf](https://huggingface.co/llava-hf/llava-next-110b-hf) require the following format:
```bash
"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\nWhat is shown in this image?<|im_end|>\n<|im_start|>assistant\n"
``` | 130_2_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#single-image-inference | .md | Here's how to load the model and perform inference in half-precision (`torch.float16`):
```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests
processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.float16, low_cpu_mem_usage=True)
model.to("cuda:0") | 130_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#single-image-inference | .md | # prepare image and text prompt, using the appropriate prompt template
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw) | 130_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#single-image-inference | .md | conversation = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image?"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to("cuda:0")
# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
``` | 130_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#multi-image-inference | .md | LLaVa-Next can perform inference with multiple images as input, where images either belong to the same prompt or different prompts (in batched inference). Here is how you can do it:
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText | 130_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#multi-image-inference | .md | # Load the model in half-precision
model = AutoModelForImageTextToText.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.float16, device_map="auto")
processor = AutoProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
# Get three different images
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image_stop = Image.open(requests.get(url, stream=True).raw) | 130_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#multi-image-inference | .md | url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_cats = Image.open(requests.get(url, stream=True).raw)
url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"
image_snowman = Image.open(requests.get(url, stream=True).raw) | 130_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#multi-image-inference | .md | # Prepare a batch of two prompts, where the first one is a multi-turn conversation and the second is not
conversation_1 = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image?"},
],
},
{
"role": "assistant",
"content": [
{"type": "text", "text": "There is a red stop sign in the image."},
],
},
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What about this image? How many cats do you see?"},
],
},
] | 130_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#multi-image-inference | .md | conversation_2 = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image?"},
],
},
]
prompt_1 = processor.apply_chat_template(conversation_1, add_generation_prompt=True)
prompt_2 = processor.apply_chat_template(conversation_2, add_generation_prompt=True)
prompts = [prompt_1, prompt_2] | 130_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#multi-image-inference | .md | # We can simply feed images in the order they have to be used in the text prompt
# Each "<image>" token uses one image leaving the next for the subsequent "<image>" tokens
inputs = processor(images=[image_stop, image_cats, image_snowman], text=prompts, padding=True, return_tensors="pt").to(model.device)
# Generate
generate_ids = model.generate(**inputs, max_new_tokens=30)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
``` | 130_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#quantization-using-bitsandbytes | .md | The model can be loaded in 8 or 4 bits, greatly reducing the memory requirements while maintaining the performance of the original model. First make sure to install bitsandbytes, `pip install bitsandbytes`, and to have access to a GPU/accelerator that is supported by the library.
<Tip> | 130_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#quantization-using-bitsandbytes | .md | <Tip>
bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend). | 130_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#quantization-using-bitsandbytes | .md | We value your feedback to help identify bugs before the full release! Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.
</Tip>
Simply change the snippet above with:
```python
import torch
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig | 130_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#quantization-using-bitsandbytes | .md | # specify how to quantize the model
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForImageTextToText.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf", quantization_config=quantization_config, device_map="auto")
``` | 130_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#use-flash-attention-2-to-further-speed-up-generation | .md | First make sure to install flash-attn. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above to:
```python
import torch
from transformers import AutoModelForImageTextToText

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # same checkpoint as in the snippets above
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    use_flash_attention_2=True
).to(0)
``` | 130_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | This is the configuration class to store the configuration of a [`LlavaNextForConditionalGeneration`]. It is used to instantiate a
Llava-NeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf)
model. | 130_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | model.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`Union[AutoConfig, dict]`, *optional*, defaults to `CLIPVisionConfig`):
The config object or dictionary of the vision backbone.
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `LlamaConfig`):
The config object or dictionary of the text backbone. | 130_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | The config object or dictionary of the text backbone.
ignore_index (`int`, *optional*, defaults to -100):
The ignore index for the loss function.
image_token_index (`int`, *optional*, defaults to 32000):
The image token index to encode the image prompt.
projector_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The activation function used by the multimodal projector.
vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`): | 130_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`):
The feature selection strategy used to select the vision feature from the vision backbone.
Can be one of `"default"` or `"full"`. If `"default"`, the CLS token is removed from the vision features.
If `"full"`, the full vision features are used.
vision_feature_layer (`int`, *optional*, defaults to -2):
The index of the layer to select the vision feature. | 130_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | vision_feature_layer (`int`, *optional*, defaults to -2):
The index of the layer to select the vision feature.
image_grid_pinpoints (`List`, *optional*, defaults to `[[336, 672], [672, 336], [672, 672], [1008, 336], [336, 1008]]`):
A list of possible resolutions to use for processing high resolution images. Each item in the list should be a tuple or list
of the form `(height, width)`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`): | 130_7_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | of the form `(height, width)`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether the model's input and output word embeddings should be tied.
image_seq_length (`int`, *optional*, defaults to 576):
Sequence length of one image embedding.
multimodal_projector_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the multimodal projector.
Example:
```python
>>> from transformers import LlavaNextForConditionalGeneration, LlavaNextConfig, CLIPVisionConfig, LlamaConfig | 130_7_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | >>> # Initializing a CLIP-vision config
>>> vision_config = CLIPVisionConfig()
>>> # Initializing a Llama config
>>> text_config = LlamaConfig()
>>> # Initializing a Llava-Next llava-hf/llava-v1.6-mistral-7b-hf style configuration
>>> configuration = LlavaNextConfig(vision_config, text_config)
>>> # Initializing a model from the llava-hf/llava-v1.6-mistral-7b-hf style configuration
>>> model = LlavaNextForConditionalGeneration(configuration) | 130_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextconfig | .md | >>> # Accessing the model configuration
>>> configuration = model.config
``` | 130_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | Constructs a LLaVa-NeXT image processor. Based on [`CLIPImageProcessor`] with incorporation of additional techniques
for processing high resolution images as explained in the [LLaVa paper](https://arxiv.org/abs/2310.03744).
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`): | 130_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | `do_resize` in the `preprocess` method.
size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
image_grid_pinpoints (`List` *optional*, defaults to `[[672, 336], [336, 672], [672, 672], [336, 1008], [1008, 336]]`): | 130_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | method.
image_grid_pinpoints (`List` *optional*, defaults to `[[672, 336], [336, 672], [672, 672], [336, 1008], [1008, 336]]`):
A list of possible resolutions to use for processing high resolution images. The best resolution is selected
based on the original size of the image. Can be overridden by `image_grid_pinpoints` in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): | 130_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):
Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
`preprocess` method.
crop_size (`Dict[str, int]` *optional*, defaults to 224): | 130_8_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | `preprocess` method.
crop_size (`Dict[str, int]` *optional*, defaults to 224):
Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in
the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): | 130_8_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llava_next.md | https://huggingface.co/docs/transformers/en/model_doc/llava_next/#llavanextimageprocessor | .md | the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess`
method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`): | 130_8_5 |