source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdformaskedlm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
175_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdforsequenceclassification
|
.md
|
BigBird Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
|
175_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdforsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
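As a quick orientation, here is a minimal usage sketch for this class. It assumes the `google/bigbird-roberta-base` checkpoint and an illustrative `num_labels=2`; that checkpoint has no sequence classification head, so the head below is randomly initialized and needs fine-tuning before the predictions mean anything.
```python
>>> import torch
>>> from transformers import AutoTokenizer, BigBirdForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
>>> # num_labels=2 is an illustrative choice; the classification head is freshly initialized here
>>> model = BigBirdForSequenceClassification.from_pretrained("google/bigbird-roberta-base", num_labels=2)

>>> inputs = tokenizer("BigBird uses sparse attention to handle long documents.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class = logits.argmax(dim=-1).item()
```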
|
175_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdformultiplechoice
|
.md
|
BigBird Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
|
175_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdformultiplechoice
|
.md
|
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
175_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdfortokenclassification
|
.md
|
BigBird Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
|
175_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdfortokenclassification
|
.md
|
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
175_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdforquestionanswering
|
.md
|
BigBird Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
175_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#bigbirdforquestionanswering
|
.md
|
behavior.
Parameters:
config ([`BigBirdConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
</pt>
<jax>
|
175_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdmodel
|
.md
|
No docstring available for FlaxBigBirdModel
Methods: __call__
|
175_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdforpretraining
|
.md
|
No docstring available for FlaxBigBirdForPreTraining
Methods: __call__
|
175_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdforcausallm
|
.md
|
No docstring available for FlaxBigBirdForCausalLM
Methods: __call__
|
175_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdformaskedlm
|
.md
|
No docstring available for FlaxBigBirdForMaskedLM
Methods: __call__
|
175_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdforsequenceclassification
|
.md
|
No docstring available for FlaxBigBirdForSequenceClassification
Methods: __call__
|
175_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdformultiplechoice
|
.md
|
No docstring available for FlaxBigBirdForMultipleChoice
Methods: __call__
|
175_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdfortokenclassification
|
.md
|
No docstring available for FlaxBigBirdForTokenClassification
Methods: __call__
|
175_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/big_bird.md
|
https://huggingface.co/docs/transformers/en/model_doc/big_bird/#flaxbigbirdforquestionanswering
|
.md
|
No docstring available for FlaxBigBirdForQuestionAnswering
Methods: __call__
</jax>
</frameworkcontent>
|
175_23_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/
|
.md
|
<!--Copyright 2023 Mistral AI and The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
176_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/
|
.md
|
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
176_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#overview
|
.md
|
Mixtral-8x7B was introduced in the [Mixtral of Experts blogpost](https://mistral.ai/news/mixtral-of-experts/) by Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
The introduction of the blog post says:
|
176_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#overview
|
.md
|
The introduction of the blog post says:
*Today, the team is proud to release Mixtral 8x7B, a high-quality sparse mixture of experts models (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT3.5 on most standard benchmarks.*
|
176_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#overview
|
.md
|
Mixtral-8x7B is the second large language model (LLM) released by [mistral.ai](https://mistral.ai/), after [Mistral-7B](mistral).
|
176_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#architectural-details
|
.md
|
Mixtral-8x7B is a decoder-only Transformer with the following architectural choices:
- Mixtral is a Mixture of Experts (MoE) model with 8 experts per MLP, with a total of 45 billion parameters. To learn more about mixture-of-experts, refer to the [blog post](https://huggingface.co/blog/moe).
|
176_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#architectural-details
|
.md
|
- Despite the model having 45 billion parameters, the compute required for a single forward pass is the same as that of a 14 billion parameter model. This is because even though each expert has to be loaded in RAM (a ~70B-like RAM requirement), each token of the hidden states is dispatched to only two experts (top-2 routing), so the compute required at each forward pass amounts to 2 x sequence_length expert evaluations.
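To make the top-2 routing concrete, here is a simplified, illustrative sketch of the gating computation (not the actual Mixtral implementation; the toy shapes and the plain `Linear` router are assumptions for illustration):
```python
import torch
import torch.nn.functional as F

hidden_states = torch.randn(2, 16, 4096)       # (batch, seq_len, hidden) - toy shapes
router = torch.nn.Linear(4096, 8, bias=False)  # one logit per expert

router_logits = router(hidden_states)          # (batch, seq_len, 8)
routing_weights = F.softmax(router_logits, dim=-1)
top2_weights, top2_experts = routing_weights.topk(2, dim=-1)
top2_weights = top2_weights / top2_weights.sum(dim=-1, keepdim=True)
# Each token is processed by only 2 of the 8 expert MLPs, which is why the per-token
# compute is close to that of a ~14B dense model even though ~45B parameters are stored.
```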
|
176_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#architectural-details
|
.md
|
The following implementation details are shared with Mistral AI's first model [Mistral-7B](mistral):
- Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens (see the mask sketch after this list)
- GQA (Grouped Query Attention) - allowing faster inference and lower cache size.
- Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens.
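The sliding window mask referenced above can be sketched as follows; a window of 4 is used for readability (Mixtral's configuration uses 4096), and this is an illustrative construction rather than the library's internal code:
```python
import torch

seq_len, window = 8, 4
i = torch.arange(seq_len).unsqueeze(1)  # query positions
j = torch.arange(seq_len).unsqueeze(0)  # key positions
# A query attends only to non-future keys that are at most `window - 1` positions behind it.
mask = (j <= i) & (j > i - window)
print(mask.int())
```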
|
176_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#architectural-details
|
.md
|
- Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens.
For more details refer to the [release blog post](https://mistral.ai/news/mixtral-of-experts/).
|
176_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#license
|
.md
|
`Mixtral-8x7B` is released under the Apache 2.0 license.
|
176_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#usage-tips
|
.md
|
The Mistral team has released 2 checkpoints:
- a base model, [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), which has been pre-trained to predict the next token on internet-scale data.
- an instruction tuned model, [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), which is the base model optimized for chat purposes using supervised fine-tuning (SFT) and direct preference optimization (DPO).
The base model can be used as follows:
|
176_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#usage-tips
|
.md
|
The base model can be used as follows:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
|
176_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#usage-tips
|
.md
|
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
>>> prompt = "My favourite condiment is"
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
|
176_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#usage-tips
|
.md
|
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"My favourite condiment is to ..."
```
The instruction tuned model can be used as follows:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
|
176_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#usage-tips
|
.md
|
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
|
176_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#usage-tips
|
.md
|
>>> messages = [
... {"role": "user", "content": "What is your favourite condiment?"},
... {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
... {"role": "user", "content": "Do you have mayonnaise recipes?"}
... ]
>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
|
176_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#usage-tips
|
.md
|
>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"Mayonnaise can be made as follows: (...)"
```
As can be seen, the instruction-tuned model requires a [chat template](../chat_templating) to be applied to make sure the inputs are prepared in the right format.
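If you want to inspect what the template produces before generating, you can render it to a string first. This continues from the snippet above and uses the standard `tokenize=False` / `add_generation_prompt=True` arguments of `apply_chat_template`:
```python
>>> prompt_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
>>> print(prompt_text)  # shows the [INST] ... [/INST] formatting the instruct checkpoint expects
```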
|
176_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#speeding-up-mixtral-by-using-flash-attention
|
.md
|
The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging [Flash Attention](../perf_train_gpu_one#flash-attention-2), which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
```bash
pip install -U flash-attn --no-build-isolation
```
|
176_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#speeding-up-mixtral-by-using-flash-attention
|
.md
|
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Attention-2, refer to the snippet below:
```python
>>> import torch
|
176_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#speeding-up-mixtral-by-using-flash-attention
|
.md
|
To load and run a model using Flash Attention-2, refer to the snippet below:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
|
176_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#speeding-up-mixtral-by-using-flash-attention
|
.md
|
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
>>> prompt = "My favourite condiment is"
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
|
176_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#speeding-up-mixtral-by-using-flash-attention
|
.md
|
>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"The expected output"
```
|
176_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#expected-speedups
|
.md
|
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `mistralai/Mixtral-8x7B-v0.1` checkpoint and the Flash Attention 2 version of the model.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/mixtral-7b-inference-large-seqlen.png">
</div>
|
176_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#sliding-window-attention
|
.md
|
The current implementation supports the sliding window attention mechanism and memory efficient cache management.
To enable sliding window attention, just make sure to have a `flash-attn` version that is compatible with sliding window attention (`>=2.3.0`).
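As a quick sanity check (assuming a recent `flash-attn` release, which exposes its version as `flash_attn.__version__`):
```python
>>> import flash_attn
>>> print(flash_attn.__version__)  # should be >= 2.3.0 for sliding window attention
```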
|
176_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#sliding-window-attention
|
.md
|
The Flash Attention 2 implementation also uses a more memory-efficient cache slicing mechanism. As recommended by the official implementation of the Mistral model, which uses a rolling cache mechanism, we keep the cache size fixed (`self.config.sliding_window`), support batched generation only for `padding_side="left"`, and use the absolute position of the current token to compute the positional embedding.
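For batched generation this means padding must be placed on the left. A minimal sketch, reusing the `model` loaded above and assuming the tokenizer's EOS token is reused as the padding token (the Mixtral tokenizer does not define one by default):
```python
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1", padding_side="left")
>>> tokenizer.pad_token = tokenizer.eos_token  # assumption: reuse EOS as the padding token
>>> prompts = ["My favourite condiment is", "The capital of France is"]
>>> batch = tokenizer(prompts, return_tensors="pt", padding=True).to("cuda")
>>> generated_ids = model.generate(**batch, max_new_tokens=20)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```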
|
176_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#shrinking-down-mixtral-using-quantization
|
.md
|
As the Mixtral model has 45 billion parameters, that would require about 90GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using [quantization](../quantization.md). If the model is quantized to 4 bits (or half a byte per parameter), a single A100 with 40GB of RAM is enough to fit the entire model, as in that case only about 27 GB of RAM is required.
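The back-of-the-envelope numbers can be reproduced with a few lines; this is a rough weights-only estimate that ignores activations and the KV cache:
```python
num_params = 45e9
gb_fp16 = num_params * 2 / 1e9    # ~90 GB: 2 bytes per parameter in float16
gb_4bit = num_params * 0.5 / 1e9  # ~22.5 GB for the weights alone; ~27 GB in practice
print(f"float16: {gb_fp16:.0f} GB, 4-bit: {gb_4bit:.1f} GB")
```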
|
176_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#shrinking-down-mixtral-using-quantization
|
.md
|
Quantizing a model is as simple as passing a `quantization_config` to the model. Below, we'll leverage the bitsandbytes quantization library (but refer to [this page](../quantization.md) for alternative quantization methods):
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
|
176_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#shrinking-down-mixtral-using-quantization
|
.md
|
>>> # specify how to quantize the model
>>> quantization_config = BitsAndBytesConfig(
...     load_in_4bit=True,
...     bnb_4bit_quant_type="nf4",
...     bnb_4bit_compute_dtype=torch.float16,
... )
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", quantization_config=quantization_config, device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
>>> prompt = "My favourite condiment is"
|
176_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#shrinking-down-mixtral-using-quantization
|
.md
|
>>> prompt = "My favourite condiment is"
>>> messages = [
... {"role": "user", "content": "What is your favourite condiment?"},
... {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
... {"role": "user", "content": "Do you have mayonnaise recipes?"}
... ]
>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
|
176_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#shrinking-down-mixtral-using-quantization
|
.md
|
>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"The expected output"
```
This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/mistralai/mistral-src).
|
176_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mixtral. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-generation"/>
|
176_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#resources
|
.md
|
<PipelineTag pipeline="text-generation"/>
- A demo notebook to perform supervised fine-tuning (SFT) of Mixtral-8x7B can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Mistral/Supervised_fine_tuning_(SFT)_of_an_LLM_using_Hugging_Face_tooling.ipynb). 🌎
- A [blog post](https://medium.com/@prakharsaxena11111/finetuning-mixtral-7bx8-6071b0ebf114) on fine-tuning Mixtral-8x7B using PEFT. 🌎
|
176_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#resources
|
.md
|
- The [Alignment Handbook](https://github.com/huggingface/alignment-handbook) by Hugging Face includes scripts and recipes to perform supervised fine-tuning (SFT) and direct preference optimization with Mistral-7B. This includes scripts for full fine-tuning, QLoRa on a single GPU as well as multi-GPU fine-tuning.
- [Causal language modeling task guide](../tasks/language_modeling)
|
176_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
This is the configuration class to store the configuration of a [`MixtralModel`]. It is used to instantiate a
Mixtral model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Mixtral-8x7B-v0.1 or Mixtral-8x7B-Instruct-v0.1.
[mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
[mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
|
176_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
[mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the Mixtral model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`MixtralModel`]
|
176_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
`input_ids` passed when calling [`MixtralModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 14336):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer encoder.
|
176_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (`int`, *optional*, defaults to 8):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
|
176_10_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details checkout [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`.
head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`):
The attention head dimension.
|
176_10_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`):
The attention head dimension.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
The maximum sequence length that this model might ever be used with. Mixtral's sliding window attention
allows sequences of up to 4096*32 tokens.
|
176_10_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
allows sequences of up to 4096*32 tokens.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
|
176_10_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
The id of the padding token.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the "end-of-sequence" token.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether the model's input and output word embeddings should be tied.
rope_theta (`float`, *optional*, defaults to 1000000.0):
The base period of the RoPE embeddings.
|
176_10_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
rope_theta (`float`, *optional*, defaults to 1000000.0):
The base period of the RoPE embeddings.
sliding_window (`int`, *optional*):
Sliding window attention window size. If not specified, will default to `4096`.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
num_experts_per_tok (`int`, *optional*, defaults to 2):
The number of experts to route per token; can also be interpreted as the `top-k` routing
parameter
|
176_10_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
The number of experts to route per token; can also be interpreted as the `top-k` routing
parameter
num_local_experts (`int`, *optional*, defaults to 8):
Number of experts per Sparse MLP layer.
output_router_logits (`bool`, *optional*, defaults to `False`):
Whether or not the router logits should be returned by the model. Enabling this will also
allow the model to output the auxiliary loss.
router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
|
176_10_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
The aux loss factor for the total loss.
router_jitter_noise (`float`, *optional*, defaults to 0.0):
Amount of noise to add to the router.
```python
>>> from transformers import MixtralModel, MixtralConfig
|
176_10_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralconfig
|
.md
|
>>> # Initializing a Mixtral 8x7B style configuration
>>> configuration = MixtralConfig()
>>> # Initializing a model from the Mixtral 8x7B style configuration
>>> model = MixtralModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
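The MoE-specific arguments documented above can be overridden the same way. Below is a hedged sketch that builds a deliberately tiny configuration (all values are illustrative and do not correspond to a real checkpoint):
```python
>>> from transformers import MixtralConfig, MixtralForCausalLM

>>> # Toy sizes for illustration only; the released 8x7B model is far larger.
>>> tiny_config = MixtralConfig(
...     hidden_size=256,
...     intermediate_size=512,
...     num_hidden_layers=2,
...     num_attention_heads=8,
...     num_key_value_heads=4,
...     num_local_experts=8,
...     num_experts_per_tok=2,
... )
>>> tiny_model = MixtralForCausalLM(tiny_config)
```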
|
176_10_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralmodel
|
.md
|
The bare Mixtral Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
176_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MixtralConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
176_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`MixtralDecoderLayer`]
Args:
config: MixtralConfig
Methods: forward
|
176_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralforcausallm
|
.md
|
No docstring available for MixtralForCausalLM
Methods: forward
|
176_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralforsequenceclassification
|
.md
|
The Mixtral Model transformer with a sequence classification head on top (linear layer).
[`MixtralForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
176_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
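The last-non-padding-token selection can be illustrated with a small sketch (simplified, assuming right padding; not the exact implementation):
```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[5, 8, 3, 0, 0],
                          [7, 2, 9, 4, 6]])
# Index of the last non-padding token per row (right padding assumed).
sequence_lengths = (input_ids != pad_token_id).sum(dim=-1) - 1   # tensor([2, 4])
hidden_states = torch.randn(2, 5, 16)                            # (batch, seq_len, hidden)
pooled = hidden_states[torch.arange(2), sequence_lengths]        # the states the classifier scores
```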
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
176_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
176_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`MixtralConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
176_13_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralfortokenclassification
|
.md
|
The Mixtral Model transformer with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
176_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralfortokenclassification
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MixtralConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
|
176_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralfortokenclassification
|
.md
|
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
The Mixtral Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
|
176_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralfortokenclassification
|
.md
|
SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
176_14_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralfortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`MixtralConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
176_14_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mixtral.md
|
https://huggingface.co/docs/transformers/en/model_doc/mixtral/#mixtralfortokenclassification
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
176_14_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/
|
.md
|
<!--Copyright 2022 The HuggingFace Team and Microsoft. All rights reserved.
Licensed under the MIT License; you may not use this file except in compliance with
the License.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
|
177_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/
|
.md
|
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
177_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormer
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only; we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip>
|
177_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#overview
|
.md
|
The Graphormer model was proposed in [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen and Tie-Yan Liu. It is a Graph Transformer model, modified to allow computations on graphs instead of text sequences by generating embeddings and features of interest during preprocessing and collation, then using a modified attention.
The abstract from the paper is the following:
|
177_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#overview
|
.md
|
*The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain
|
177_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#overview
|
.md
|
we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better
|
177_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#overview
|
.md
|
into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer.*
|
177_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#overview
|
.md
|
This model was contributed by [clefourrier](https://huggingface.co/clefourrier). The original code can be found [here](https://github.com/microsoft/Graphormer).
|
177_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#usage-tips
|
.md
|
This model will not work well on large graphs (more than 100 nodes/edges), as it will cause memory to explode.
You can reduce the batch size, increase your RAM, or decrease the `UNREACHABLE_NODE_DISTANCE` parameter in algos_graphormer.pyx, but it will be hard to go above 700 nodes/edges.
This model does not use a tokenizer, but instead a special collator during training.
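As a rough sketch of the data preparation (assuming `transformers==4.40.2`, where the collation helpers live under `transformers.models.graphormer.collating_graphormer`; treat the exact import path and the `raw_dataset` variable as assumptions):
```python
from transformers.models.graphormer.collating_graphormer import (  # assumed module path (v4.40.2)
    GraphormerDataCollator,
    preprocess_item,
)

# raw_dataset is a placeholder for your graph dataset: each item is a dict with fields such as
# "edge_index", "node_feat", "edge_attr", "num_nodes" and the target "y".
processed = [preprocess_item(item) for item in raw_dataset]
collator = GraphormerDataCollator()
batch = collator(processed)  # batched tensors ready for GraphormerForGraphClassification
```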
|
177_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
This is the configuration class to store the configuration of a [`~GraphormerModel`]. It is used to instantiate a
Graphormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Graphormer
[graphormer-base-pcqm4mv1](https://huggingface.co/graphormer-base-pcqm4mv1) architecture.
|
177_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
[graphormer-base-pcqm4mv1](https://huggingface.co/graphormer-base-pcqm4mv1) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_classes (`int`, *optional*, defaults to 1):
Number of target classes or labels, set to n for binary classification of n tasks.
num_atoms (`int`, *optional*, defaults to 512*9):
Number of node types in the graphs.
|
177_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
num_atoms (`int`, *optional*, defaults to 512*9):
Number of node types in the graphs.
num_edges (`int`, *optional*, defaults to 512*3):
Number of edge types in the graph.
num_in_degree (`int`, *optional*, defaults to 512):
Number of in-degree types in the input graphs.
num_out_degree (`int`, *optional*, defaults to 512):
Number of out-degree types in the input graphs.
num_edge_dis (`int`, *optional*, defaults to 128):
Number of edge dis in the input graphs.
|
177_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
num_edge_dis (`int`, *optional*, defaults to 128):
Number of edge dis in the input graphs.
multi_hop_max_dist (`int`, *optional*, defaults to 20):
Maximum distance of multi hop edges between two nodes.
spatial_pos_max (`int`, *optional*, defaults to 1024):
Maximum distance between nodes in the graph attention bias matrices, used during preprocessing and
collation.
edge_type (`str`, *optional*, defaults to multihop):
Type of edge relation chosen.
max_nodes (`int`, *optional*, defaults to 512):
|
177_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
Type of edge relation chosen.
max_nodes (`int`, *optional*, defaults to 512):
Maximum number of nodes which can be parsed for the input graphs.
share_input_output_embed (`bool`, *optional*, defaults to `False`):
Shares the embedding layer between encoder and decoder - careful, True is not implemented.
num_layers (`int`, *optional*, defaults to 12):
Number of layers.
embedding_dim (`int`, *optional*, defaults to 768):
Dimension of the embedding layer in encoder.
|
177_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
Number of layers.
embedding_dim (`int`, *optional*, defaults to 768):
Dimension of the embedding layer in encoder.
ffn_embedding_dim (`int`, *optional*, defaults to 768):
Dimension of the "intermediate" (often named feed-forward) layer in encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads in the encoder.
self_attention (`bool`, *optional*, defaults to `True`):
Model is self attentive (False not implemented).
|
177_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
self_attention (`bool`, *optional*, defaults to `True`):
Model is self attentive (False not implemented).
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
177_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the attention weights.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the activation of the linear transformer layer.
layerdrop (`float`, *optional*, defaults to 0.0):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
|
177_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
bias (`bool`, *optional*, defaults to `True`):
Uses bias in the attention module - unsupported at the moment.
embed_scale (`float`, *optional*, defaults to None):
Scaling factor for the node embeddings.
num_trans_layers_to_freeze (`int`, *optional*, defaults to 0):
Number of transformer layers to freeze.
encoder_normalize_before (`bool`, *optional*, defaults to `False`):
|
177_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
Number of transformer layers to freeze.
encoder_normalize_before (`bool`, *optional*, defaults to `False`):
Normalize features before encoding the graph.
pre_layernorm (`bool`, *optional*, defaults to `False`):
Apply layernorm before self attention and the feed forward network. Without this, post layernorm will be
used.
apply_graphormer_init (`bool`, *optional*, defaults to `False`):
Apply a custom graphormer initialisation to the model before training.
|
177_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
Apply a custom graphormer initialisation to the model before training.
freeze_embeddings (`bool`, *optional*, defaults to `False`):
Freeze the embedding layer, or train it along with the model.
encoder_normalize_before (`bool`, *optional*, defaults to `False`):
Apply the layer norm before each encoder block.
q_noise (`float`, *optional*, defaults to 0.0):
Amount of quantization noise (see "Training with Quantization Noise for Extreme Model Compression"). (For
|
177_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
Amount of quantization noise (see "Training with Quantization Noise for Extreme Model Compression"). (For
more detail, see fairseq's documentation on quant_noise).
qn_block_size (`int`, *optional*, defaults to 8):
Size of the blocks for subsequent quantization with iPQ (see q_noise).
kdim (`int`, *optional*, defaults to None):
Dimension of the key in the attention, if different from the other values.
vdim (`int`, *optional*, defaults to None):
|
177_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
Dimension of the key in the attention, if different from the other values.
vdim (`int`, *optional*, defaults to None):
Dimension of the value in the attention, if different from the other values.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
traceable (`bool`, *optional*, defaults to `False`):
Changes return value of the encoder's inner_state to stacked tensors.
Example:
```python
|
177_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
Changes return value of the encoder's inner_state to stacked tensors.
Example:
```python
>>> from transformers import GraphormerForGraphClassification, GraphormerConfig
|
177_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/graphormer.md
|
https://huggingface.co/docs/transformers/en/model_doc/graphormer/#graphormerconfig
|
.md
|
>>> # Initializing a Graphormer graphormer-base-pcqm4mv2 style configuration
>>> configuration = GraphormerConfig()
>>> # Initializing a model from the graphormer-base-pcqm4mv2 style configuration
>>> model = GraphormerForGraphClassification(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
177_4_14
|