Dataset schema:
- source: stringclasses (470 values)
- url: stringlengths (49–167)
- file_type: stringclasses (1 value)
- chunk: stringlengths (1–512)
- chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#overview
.md
sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future
118_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#overview
.md
outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks.* This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/BARTpho).
118_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#usage-example
.md
```python >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable") >>> tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable") >>> line = "Chúng tôi là những nghiên cứu viên." >>> input_ids = tokenizer(line, return_tensors="pt") >>> with torch.no_grad(): ... features = bartpho(**input_ids) # Models outputs are now tuples >>> # With TensorFlow 2.0+: >>> from transformers import TFAutoModel
118_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#usage-example
.md
>>> # With TensorFlow 2.0+: >>> from transformers import TFAutoModel >>> bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable") >>> input_ids = tokenizer(line, return_tensors="tf") >>> features = bartpho(**input_ids) ```
118_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#usage-tips
.md
- Following mBART, BARTpho uses the "large" architecture of BART with an additional layer-normalization layer on top of both the encoder and decoder. Thus, usage examples in the [documentation of BART](bart), when adapting to use with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts. For example: ```python >>> from transformers import MBartForConditionalGeneration
118_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#usage-tips
.md
>>> bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable") >>> TXT = "Chúng tôi là <mask> nghiên cứu viên." >>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"] >>> logits = bartpho(input_ids).logits >>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() >>> probs = logits[0, masked_index].softmax(dim=0) >>> values, predictions = probs.topk(5) >>> print(tokenizer.decode(predictions).split()) ```
118_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#usage-tips
.md
>>> values, predictions = probs.topk(5) >>> print(tokenizer.decode(predictions).split()) ``` - This implementation is only for tokenization: "monolingual_vocab_file" consists of Vietnamese-specialized types extracted from the pre-trained SentencePiece model "vocab_file" that is available from the multilingual XLM-RoBERTa. Other languages, if employing this pre-trained multilingual SentencePiece model "vocab_file" for subword
118_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#usage-tips
.md
Other languages, if employing this pre-trained multilingual SentencePiece model "vocab_file" for subword segmentation, can reuse BartphoTokenizer with their own language-specialized "monolingual_vocab_file".
118_3_3
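Building on the tip above, here is a minimal, hypothetical sketch of reusing `BartphoTokenizer` for another language. It assumes you have the shared multilingual XLM-RoBERTa SentencePiece model and a language-specialized monolingual vocabulary saved locally; `sentencepiece.bpe.model` and `dict.txt` are placeholder paths, not files provided by this example:

```python
from transformers import BartphoTokenizer

# Placeholder paths: the 250K-type multilingual SentencePiece model ("vocab_file")
# and a monolingual vocabulary extracted from it for the target language.
tokenizer = BartphoTokenizer(
    vocab_file="sentencepiece.bpe.model",
    monolingual_vocab_file="dict.txt",
)

print(tokenizer.tokenize("a sample sentence in the target language"))
```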
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
Adapted from [`XLMRobertaTokenizer`]. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. This vocabulary is the pre-trained SentencePiece model available from the multilingual XLM-RoBERTa, also used in mBART, consisting of 250K types.
118_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
multilingual XLM-RoBERTa, also used in mBART, consisting of 250K types. monolingual_vocab_file (`str`): Path to the monolingual vocabulary file. This monolingual vocabulary consists of Vietnamese-specialized types extracted from the multilingual vocabulary vocab_file of 250K types. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip>
118_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`.
118_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`):
118_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
118_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
118_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
modeling. This is the token which the model will try to predict. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed.
118_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`):
118_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bartpho.md
https://huggingface.co/docs/transformers/en/model_doc/bartpho/#bartphotokenizer
.md
BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).
118_4_8
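As a concrete illustration of the `sp_model_kwargs` options documented above, the sketch below enables SentencePiece subword regularization when loading the BARTpho tokenizer; the sampling values are illustrative, not recommendations:

```python
from transformers import AutoTokenizer

# enable_sampling turns on subword regularization; nbest_size=-1 samples from the
# full lattice and alpha is the smoothing parameter for unigram sampling.
tokenizer = AutoTokenizer.from_pretrained(
    "vinai/bartpho-syllable",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)

# With sampling enabled, repeated calls may segment the same text differently.
print(tokenizer.tokenize("Chúng tôi là những nghiên cứu viên."))
```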
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
119_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
119_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#overview
.md
The BioGPT model was proposed in [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model
119_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#overview
.md
pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.
119_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#overview
.md
The abstract from the paper is the following:
119_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#overview
.md
*Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks,
119_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#overview
.md
as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42%
119_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#overview
.md
processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.*
119_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#overview
.md
This model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/BioGPT).
119_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#usage-tips
.md
- BioGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text, as can be observed in the run_generation.py example script.
119_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#usage-tips
.md
- The model can take the `past_key_values` (for PyTorch) as input, which are the previously computed key/value attention pairs. Using this (`past_key_values` or `past`) value prevents the model from re-computing values it has already computed in the context of text generation. For PyTorch, see the `past_key_values` argument of the `BioGptForCausalLM.forward()` method for more information on its usage.
119_2_1
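A short sketch of the generation workflow these tips describe, using right padding and cached key/values during decoding (the prompt is arbitrary and the exact continuation is not guaranteed):

```python
import torch
from transformers import AutoTokenizer, BioGptForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
tokenizer.padding_side = "right"  # absolute position embeddings: pad on the right
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("COVID-19 is", return_tensors="pt")
with torch.no_grad():
    # generate() reuses past_key_values internally, so earlier tokens are not recomputed
    output_ids = model.generate(**inputs, max_new_tokens=20, use_cache=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```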
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
119_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. ``` import torch from transformers import BioGptForCausalLM model = BioGptForCausalLM.from_pretrained("microsoft/biogpt", attn_implementation="sdpa", torch_dtype=torch.float16) ```
119_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt", attn_implementation="sdpa", torch_dtype=torch.float16) ``` On a local benchmark (NVIDIA GeForce RTX 2060-8GB, PyTorch 2.3.1, OS Ubuntu 20.04) with `float16` and `microsoft/biogpt` model with a CausalLM head, we saw the following speedups during training. For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
119_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). | num_training_steps | batch_size | seq_len | is cuda | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) | |--------------------|------------|---------|---------|----------------------------|---------------------------|-------------|---------------------|--------------------|----------------|
119_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
| 100 | 1 | 128 | False | 0.038 | 0.031 | 21.301 | 1601.862 | 1601.497 | 0.023 | | 100 | 1 | 256 | False | 0.039 | 0.034 | 15.084 | 1624.944 | 1625.296 | -0.022 |
119_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
| 100 | 2 | 128 | False | 0.039 | 0.033 | 16.820 | 1624.567 | 1625.296 | -0.045 | | 100 | 2 | 256 | False | 0.065 | 0.059 | 10.255 | 1672.164 | 1672.164 | 0.000 |
119_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
| 100 | 4 | 128 | False | 0.062 | 0.058 | 6.998 | 1671.435 | 1672.164 | -0.044 | | 100 | 4 | 256 | False | 0.113 | 0.100 | 13.316 | 2350.179 | 1848.435 | 27.144 |
119_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
| 100 | 8 | 128 | False | 0.107 | 0.098 | 9.883 | 2098.521 | 1848.435 | 13.530 | | 100 | 8 | 256 | False | 0.222 | 0.196 | 13.413 | 3989.980 | 2986.492 | 33.601 |
119_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
On a local benchmark (NVIDIA GeForce RTX 2060-8GB, PyTorch 2.3.1, OS Ubuntu 20.04) with `float16` and `microsoft/biogpt` model with a simple AutoModel head, we saw the following speedups during inference. | num_batches | batch_size | seq_len | is cuda | is half | use mask | Per token latency eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) |
119_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
|-------------|------------|---------|---------|---------|----------|------------------------------|-----------------------------|-------------|----------------|--------------|---------------| | 50 | 1 | 64 | True | True | True | 0.115 | 0.098 | 17.392 | 716.998 | 716.998 | 0.000 |
119_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
| 50 | 1 | 128 | True | True | True | 0.115 | 0.093 | 24.640 | 730.916 | 730.916 | 0.000 | | 50 | 2 | 64 | True | True | True | 0.114 | 0.096 | 19.204 | 730.900 | 730.900 | 0.000 |
119_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
| 50 | 2 | 128 | True | True | True | 0.117 | 0.095 | 23.529 | 759.262 | 759.262 | 0.000 | | 50 | 4 | 64 | True | True | True | 0.113 | 0.096 | 18.325 | 759.229 | 759.229 | 0.000 |
119_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#using-scaled-dot-product-attention-sdpa
.md
| 50 | 4 | 128 | True | True | True | 0.186 | 0.178 | 4.289 | 816.478 | 816.478 | 0.000 |
119_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#resources
.md
- [Causal language modeling task guide](../tasks/language_modeling)
119_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
This is the configuration class to store the configuration of a [`BioGptModel`]. It is used to instantiate a BioGPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BioGPT [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
119_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 42384): Vocabulary size of the BioGPT model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`BioGptModel`]. hidden_size (`int`, *optional*, defaults to 1024): Dimension of the encoder layers and the pooler layer.
119_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
hidden_size (`int`, *optional*, defaults to 1024): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 4096): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
119_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
119_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02):
119_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. scale_embedding (`bool`, *optional*, defaults to `True`): Scale embeddings by dividing by sqrt(d_model). use_cache (`bool`, *optional*, defaults to `True`):
119_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
Scale embeddings by dividing by sqrt(d_model). use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. layerdrop (`float`, *optional*, defaults to 0.0): Please refer to the paper about LayerDrop: https://arxiv.org/abs/1909.11556 for further details. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer.
119_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. pad_token_id (`int`, *optional*, defaults to 1): Padding token id. bos_token_id (`int`, *optional*, defaults to 0): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 2): End of stream token id. Example: ```python >>> from transformers import BioGptModel, BioGptConfig
119_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptconfig
.md
>>> # Initializing a BioGPT microsoft/biogpt style configuration >>> configuration = BioGptConfig() >>> # Initializing a model from the microsoft/biogpt style configuration >>> model = BioGptModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
119_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogpttokenizer
.md
Construct an FAIRSEQ Transformer tokenizer. Moses tokenization followed by Byte-Pair Encoding. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Merges file. unk_token (`str`, *optional*, defaults to `"<unk>"`):
119_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogpttokenizer
.md
Path to the vocabulary file. merges_file (`str`): Merges file. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of
119_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogpttokenizer
.md
<Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`):
119_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogpttokenizer
.md
The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths.
119_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogpttokenizer
.md
The token used for padding, for example when batching sequences of different lengths. Methods: save_vocabulary
119_6_4
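For illustration, a hedged sketch of loading this Moses + BPE tokenizer from the `microsoft/biogpt` checkpoint and inspecting a subword split (Moses tokenization relies on the `sacremoses` package being installed; the exact tokens produced are not guaranteed here):

```python
from transformers import BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")

# Moses tokenization followed by Byte-Pair Encoding
print(tokenizer.tokenize("BioGPT is pre-trained on PubMed abstracts."))
print(tokenizer("BioGPT is pre-trained on PubMed abstracts.")["input_ids"])
```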
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptmodel
.md
The bare BioGPT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`~BioGptConfig`]): Model configuration class with all the parameters of the model.
119_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptmodel
.md
behavior. Parameters: config ([`~BioGptConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
119_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptforcausallm
.md
BioGPT Model with a `language modeling` head on top for CLM fine-tuning. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`~BioGptConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
119_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptforcausallm
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
119_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptfortokenclassification
.md
BioGPT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`~BioGptConfig`]): Model configuration class with all the parameters of the model.
119_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptfortokenclassification
.md
behavior. Parameters: config ([`~BioGptConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
119_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptforsequenceclassification
.md
The BioGpt Model transformer with a sequence classification head on top (linear layer). [`BioGptForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it is required to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
119_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptforsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
119_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptforsequenceclassification
.md
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`~BioGptConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
119_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/biogpt.md
https://huggingface.co/docs/transformers/en/model_doc/biogpt/#biogptforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
119_10_3
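To make the last-token pooling behaviour described above concrete, here is a hedged sketch; `num_labels=2` is a hypothetical choice and the classification head is randomly initialized until fine-tuned, so the logits carry no meaning yet:

```python
import torch
from transformers import AutoTokenizer, BioGptForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
# The sequence classification head on top of the pretrained backbone is newly initialized.
model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", num_labels=2)

inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # pooled from the last non-padding token of each row
print(logits.shape)  # torch.Size([1, 2])
```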
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
120_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
120_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#overview
.md
The PhiMoE model was proposed in [Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone](https://arxiv.org/abs/2404.14219) by Microsoft.
120_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#summary
.md
The abstract from the Phi-3 paper is the following:
120_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#summary
.md
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. Our training dataset is a scaled-up version of the one used for phi-2, composed of heavily filtered publicly available web data and synthetic data. The
120_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#summary
.md
a scaled-up version of the one used for phi-2, composed of heavily filtered publicly available web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide parameter-scaling results with a 7B, 14B models trained for 4.8T tokens, called phi-3-small, phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75%, 78% on MMLU, and 8.7, 8.9 on MT-bench). To enhance multilingual, multimodal, and long-context capabilities, we
120_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#summary
.md
75%, 78% on MMLU, and 8.7, 8.9 on MT-bench). To enhance multilingual, multimodal, and long-context capabilities, we introduce three models in the phi-3.5 series: phi-3.5-mini, phi-3.5-MoE, and phi-3.5-Vision. The phi-3.5-MoE, a 16 x 3.8B MoE model with 6.6 billion active parameters, achieves superior performance in language reasoning, math, and code tasks compared to other open-source models of similar scale, such as Llama 3.1 and the Mixtral series, and on par with Gemini-1.5-Flash and GPT-4o-mini.
120_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#summary
.md
models of similar scale, such as Llama 3.1 and the Mixtral series, and on par with Gemini-1.5-Flash and GPT-4o-mini. Meanwhile, phi-3.5-Vision, a 4.2 billion parameter model derived from phi-3.5-mini, excels in reasoning tasks and is adept at handling both single-image and text prompts, as well as multi-image and text prompts.
120_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#summary
.md
The original code for PhiMoE can be found [here](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct).
120_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#usage-tips
.md
- This model is very similar to `Mixtral`, the main difference being [`Phi3LongRoPEScaledRotaryEmbedding`], which is used to extend the context of the rotary embeddings. The query, key and value projections are fused, and the MLP's up and gate projection layers are also fused. - The tokenizer used for this model is identical to [`LlamaTokenizer`], with the exception of additional tokens.
120_3_0
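As a quick check of the tokenizer note above, this sketch loads the Phi-3.5-MoE tokenizer and lists its special and added tokens (the exact counts and token strings are not guaranteed here):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-MoE-instruct")

# A LlamaTokenizer-style SentencePiece vocabulary plus the extra tokens added for Phi-3.5
print(len(tokenizer))
print(tokenizer.special_tokens_map)
print(tokenizer.added_tokens_decoder)  # the additional tokens on top of the base vocabulary
```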
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#how-to-use-phimoe
.md
<Tip warning={true}> Phi-3.5-MoE-instruct has been integrated in the development version (4.44.2.dev) of `transformers`. Until the official version is released through `pip`, ensure that you are doing the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. The current `transformers` version can be verified with: `pip list | grep transformers`. Examples of required packages: ``` flash_attn==2.5.8 torch==2.3.1
120_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#how-to-use-phimoe
.md
Examples of required packages: ``` flash_attn==2.5.8 torch==2.3.1 accelerate==0.31.0 transformers==4.43.0 ``` </Tip> ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
120_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#how-to-use-phimoe
.md
torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3.5-MoE-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-MoE-instruct")
120_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#how-to-use-phimoe
.md
messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
120_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#how-to-use-phimoe
.md
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ]
120_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#how-to-use-phimoe
.md
pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ```
120_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
This is the configuration class to store the configuration of a [`PhimoeModel`]. It is used to instantiate a Phi-moe model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [microsoft/Phi-3.5-MoE-instruct](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
120_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32064): Vocabulary size of the Phimoe model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`PhimoeModel`]. hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations.
120_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 6400): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer encoder. num_key_value_heads (`int`, *optional*, defaults to 8):
120_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
num_key_value_heads (`int`, *optional*, defaults to 8): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
120_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details, check out [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to `8`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
120_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
max_position_embeddings (`int`, *optional*, defaults to `4096*32`): The maximum sequence length that this model might ever be used with. Mixtral's sliding window attention allows sequences of up to 4096*32 tokens. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the rms normalization layers.
120_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
rms_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*): The id of the padding token. bos_token_id (`int`, *optional*, defaults to 1): The id of the "beginning-of-sequence" token. eos_token_id (`int`, *optional*, defaults to 2):
120_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
The id of the "beginning-of-sequence" token. eos_token_id (`int`, *optional*, defaults to 2): The id of the "end-of-sequence" token. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to 1000000.0): The base period of the RoPE embeddings. rope_scaling (`dict`, *optional*): The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
120_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must contain the following keys: `type`, `short_factor`, `long_factor`, `short_mscale`, `long_mscale` and `original_max_position_embeddings`. The `type` must be `longrope`, the `short_mscale` and `long_mscale` must be numbers, the `short_factor` and `long_factor` must be lists of numbers with the same length as half of the attention head size, and the `original_max_position_embeddings` must be an integer.
120_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
the attention head size and the `original_max_position_embeddings` must be an integer. sliding_window (`int`, *optional*): Sliding window attention window size. If not specified, will default to `262144`. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. num_experts_per_tok (`int`, *optional*, defaults to 2): The number of experts to route per token; can also be interpreted as the `top-k` routing parameter
120_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
The number of experts to route per token; can also be interpreted as the `top-k` routing parameter num_local_experts (`int`, *optional*, defaults to 16): Number of experts per Sparse MLP layer. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss. See [here]() for more details. router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
120_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
router_aux_loss_coef (`float`, *optional*, defaults to 0.001): The aux loss factor for the total loss. router_jitter_noise (`float`, *optional*, defaults to 0.01): Amount of noise to add to the router. input_jitter_noise (`float`, *optional*, defaults to 0.0): Input jitter noise attention_bias (`bool`, *optional*, defaults to `False`): Attention bias lm_head_bias (`bool`, *optional*, defaults to `False`): LM head bias Example: ```python >>> from transformers import PhimoeModel, PhimoeConfig
120_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeconfig
.md
Example: ```python >>> from transformers import PhimoeModel, PhimoeConfig >>> # Initializing a Phi-3 style configuration >>> configuration = PhimoeConfig.from_pretrained("microsoft/Phi-3.5-MoE-instruct") >>> # Initializing a model from the configuration >>> model = PhimoeModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` <frameworkcontent> <pt>
120_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoemodel
.md
The bare Phimoe Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
120_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoemodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PhimoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
120_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoemodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`PhimoeDecoderLayer`] Args: config: PhimoeConfig Methods: forward
120_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeforcausallm
.md
No docstring available for PhimoeForCausalLM Methods: forward - generate
120_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/phimoe.md
https://huggingface.co/docs/transformers/en/model_doc/phimoe/#phimoeforsequenceclassification
.md
The Phimoe Model transformer with a sequence classification head on top (linear layer). [`PhimoeForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
120_8_0