source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#overview
|
.md
|
The Cohere Command-R model was proposed in the blogpost [Command-R: Retrieval Augmented Generation at Production Scale](https://txt.cohere.com/command-r/) by the Cohere Team.
The abstract from the paper is the following:
|
240_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#overview
|
.md
|
The abstract from the paper is the following:
*Command-R is a scalable generative model targeting RAG and Tool Use to enable production-scale AI for enterprise. Today, we are introducing Command-R, a new LLM aimed at large-scale production workloads. Command-R targets the emerging “scalable” category of models that balance high efficiency with strong accuracy, enabling companies to move beyond proof of concept, and into production.*
|
240_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#overview
|
.md
|
*Command-R is a generative model optimized for long context tasks such as retrieval augmented generation (RAG) and using external APIs and tools. It is designed to work in concert with our industry-leading Embed and Rerank models to provide best-in-class integration for RAG applications and excel at enterprise use cases. As a model built for companies to implement at scale, Command-R boasts:
- Strong accuracy on RAG and Tool Use
- Low latency, and high throughput
- Longer 128k context and lower pricing
|
240_0_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#overview
|
.md
|
- Strong accuracy on RAG and Tool Use
- Low latency, and high throughput
- Longer 128k context and lower pricing
- Strong capabilities across 10 key languages
- Model weights available on HuggingFace for research and evaluation
Check out the model checkpoints [here](https://huggingface.co/CohereForAI/c4ai-command-r-v01).
|
240_0_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#overview
|
.md
|
Check out the model checkpoints [here](https://huggingface.co/CohereForAI/c4ai-command-r-v01).
This model was contributed by [Saurabh Dash](https://huggingface.co/saurabhdash) and [Ahmet Üstün](https://huggingface.co/ahmetustun). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox).
|
240_0_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#usage-tips
|
.md
|
<Tip warning={true}>
The checkpoints uploaded on the Hub use `torch_dtype = 'float16'`, which will be
used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`.
|
240_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#usage-tips
|
.md
|
The `dtype` of the online weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model with `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online), then cast to the default `dtype` of `torch` (`torch.float32`), and finally, if a `torch_dtype` is provided in the config, it will be used.
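As a quick illustration of that behaviour, here is a hedged sketch (the checkpoint is large, so this is only worth running if you actually need the model):
```python
from transformers import AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r-v01"

# Default: weights are loaded and then cast to torch.float32
model_fp32 = AutoModelForCausalLM.from_pretrained(model_id)
print(next(model_fp32.parameters()).dtype)  # torch.float32

# With torch_dtype="auto": the dtype stored in the checkpoint/config (float16 here) is kept
model_auto = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
print(next(model_auto.parameters()).dtype)  # torch.float16
```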
|
240_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#usage-tips
|
.md
|
Training the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be trained in `bfloat16`.
</Tip>
The model and tokenizer can be loaded via:
```python
# pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
|
240_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#usage-tips
|
.md
|
model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
|
240_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#usage-tips
|
.md
|
# Format message with the command-r chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
|
240_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#usage-tips
|
.md
|
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
- When using Flash Attention 2 via `attn_implementation="flash_attention_2"`, don't pass `torch_dtype` to the `from_pretrained` class method and use Automatic Mixed-Precision training. When using `Trainer`, simply set either `fp16` or `bf16` to `True`. Otherwise, make sure you are using `torch.autocast`. This is required because Flash Attention only supports the `fp16` and `bf16` data types.
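A hedged sketch of that setup (assumes a CUDA GPU with enough memory and the `flash-attn` package installed; prompt and autocast dtype are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Note: no torch_dtype here; mixed precision is handled by autocast below
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="flash_attention_2").to("cuda")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```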
|
240_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#resources
|
.md
|
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Command-R. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-generation"/>
Loading FP16 model
```python
# pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
|
240_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#resources
|
.md
|
model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
|
240_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#resources
|
.md
|
# Format message with the command-r chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
|
240_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#resources
|
.md
|
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
Loading bitsandbytes 4-bit quantized model
```python
# pip install transformers bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
|
240_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#resources
|
.md
|
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

# Format message with the command-r chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
|
240_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
This is the configuration class to store the configuration of a [`CohereModel`]. It is used to instantiate a Cohere
model according to the specified arguments, defining the model architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information. Instantiating a configuration
|
240_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
documentation from [`PretrainedConfig`] for more information. Instantiating a configuration
with the defaults will yield a similar configuration to that of the [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) model.
Args:
vocab_size (`int`, *optional*, defaults to 256000):
Vocabulary size of the Cohere model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`CohereModel`]
|
240_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
`inputs_ids` passed when calling [`CohereModel`]
hidden_size (`int`, *optional*, defaults to 8192):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 22528):
Dimension of the MLP representations.
logit_scale (`float`, *optional*, defaults to 0.0625):
The scaling factor for the output logits.
num_hidden_layers (`int`, *optional*, defaults to 40):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 64):
|
240_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 64):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
|
240_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
`num_attention_heads`.
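A hypothetical sketch of that mean-pooling step (the helper and shapes below are illustrative, not part of the library):
```python
import torch

def meanpool_kv_heads(kv: torch.Tensor, num_key_value_heads: int) -> torch.Tensor:
    """Average groups of key/value heads:
    (num_attention_heads, head_dim, hidden) -> (num_key_value_heads, head_dim, hidden)."""
    num_attention_heads, head_dim, hidden = kv.shape
    group = num_attention_heads // num_key_value_heads
    return kv.reshape(num_key_value_heads, group, head_dim, hidden).mean(dim=1)

# e.g. 64 attention heads of size 128 pooled down to 8 key/value heads
k_proj = torch.randn(64, 128, 256)
print(meanpool_kv_heads(k_proj, num_key_value_heads=8).shape)  # torch.Size([8, 128, 256])
```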
|
240_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 8192):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
|
240_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
|
240_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*, defaults to 0):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 5):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 255001):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `True`):
Whether to tie weight embeddings
|
240_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `True`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply new rope type
and you expect the model to work on longer `max_position_embeddings`, we recommend you to update this value
accordingly.
Expected contents:
|
240_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
|
240_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
|
240_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
|
240_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
|
240_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
|
240_3_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
use_qk_norm (`bool`, *optional*, defaults to `False`):
|
240_3_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
The dropout ratio for the attention probabilities.
use_qk_norm (`bool`, *optional*, defaults to `False`):
Whether to use query-key normalization in the attention
```python
>>> from transformers import CohereModel, CohereConfig
|
240_3_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereconfig
|
.md
|
>>> # Initializing a Cohere model configuration
>>> configuration = CohereConfig()
>>> # Initializing a model from the Cohere configuration
>>> model = CohereModel(configuration) # doctest: +SKIP
>>> # Accessing the model configuration
>>> configuration = model.config # doctest: +SKIP
```
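Following up on the `rope_scaling` argument documented above, a hedged sketch of building a configuration with linear RoPE scaling (the values are purely illustrative):
```python
>>> from transformers import CohereConfig

>>> # Illustrative values following the `rope_scaling` schema described above
>>> configuration = CohereConfig(rope_scaling={"rope_type": "linear", "factor": 2.0})
```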
|
240_3_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
Construct a Cohere tokenizer. Based on byte-level Byte-Pair-Encoding.
This uses notably ByteFallback and NFC normalization.
```python
>>> from transformers import AutoTokenizer
|
240_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
>>> tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-v01")
>>> tokenizer.encode("Hello this is a test")
[5, 28339, 2075, 1801, 1671, 3282]
```
If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the model, or
call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the
values of the first token and final token of an encoded sequence will not be correct). For more details, check out
|
240_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
values of the first token and final token of an encoded sequence will not be correct). For more details, check out the
[post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
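Returning to the `bos_token`/`eos_token` note above, a hedged sketch of overriding them and refreshing the post-processor (the token strings shown are simply the defaults, used here for illustration):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "CohereForAI/c4ai-command-r-v01",
    bos_token="<BOS_TOKEN>",
    eos_token="<|END_OF_TURN_TOKEN|>",
)
tokenizer.update_post_processor()  # rebuild the post-processing template so the new tokens are applied
```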
<Tip>
|
240_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`, *optional*):
Path to the vocabulary file.
merges_file (`str`, *optional*):
|
240_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
Args:
vocab_file (`str`, *optional*):
Path to the vocabulary file.
merges_file (`str`, *optional*):
Path to the merges file.
tokenizer_file (`str`, *optional*):
[tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that
contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
|
240_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<UNK>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<BOS_TOKEN>"`):
|
240_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<BOS_TOKEN>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<|END_OF_TURN_TOKEN|>"`):
The end of sequence token.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
|
240_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Cohere tokenizer should be used.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not the tokenizer should automatically add a prefix space
Methods: build_inputs_with_special_tokens
|
240_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheretokenizerfast
|
.md
|
Whether or not the tokenizer should automatically add a prefix space
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
|
240_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheremodel
|
.md
|
The bare Cohere Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
240_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheremodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CohereConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
240_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#coheremodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`CohereDecoderLayer`]
Args:
config: CohereConfig
Methods: forward
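A minimal sketch of calling the bare model's forward pass; the configuration values below are deliberately tiny and randomly initialized so the example runs quickly (they do not match any released checkpoint):
```python
import torch
from transformers import CohereConfig, CohereModel

config = CohereConfig(
    vocab_size=1000,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    max_position_embeddings=128,
)
model = CohereModel(config)

input_ids = torch.randint(0, config.vocab_size, (1, 16))
outputs = model(input_ids=input_ids)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 16, 64])
```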
|
240_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cohere.md
|
https://huggingface.co/docs/transformers/en/model_doc/cohere/#cohereforcausallm
|
.md
|
No docstring available for CohereForCausalLM
Methods: forward
|
240_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
241_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
241_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
The Granite model was proposed in [Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler](https://arxiv.org/abs/2408.13359) by Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox and Rameswar Panda.
|
241_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
PowerLM-3B is a 3B state-of-the-art small language model trained with the Power learning rate scheduler. It is trained on a wide range of open-source and synthetic datasets with permissive licenses. PowerLM-3B has shown promising results compared to other models in the size categories across various benchmarks, including natural language multi-choices, code generation, and math reasoning.
The abstract from the paper is the following:
|
241_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
The abstract from the paper is the following:
*Finding the optimal learning rate for language model pretraining is a challenging task.
|
241_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
This is not only because there is a complicated correlation between learning rate, batch size, number of training tokens, model size, and other hyperparameters but also because it is prohibitively expensive to perform a hyperparameter search for large language models with Billions or Trillions of parameters. Recent studies propose using small proxy models and small corpus to perform hyperparameter searches and transposing the optimal parameters to large models and large corpus. While the zero-shot
|
241_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
to perform hyperparameter searches and transposing the optimal parameters to large models and large corpus. While the zero-shot transferability is theoretically and empirically proven for model size related hyperparameters, like depth and width, the zero-shot transfer from small corpus to large corpus is underexplored.
|
241_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
In this paper, we study the correlation between optimal learning rate, batch size, and number of training tokens for the recently proposed WSD scheduler. After thousands of small experiments, we found a power-law relationship between variables and demonstrated its transferability across model sizes. Based on the observation, we propose a new learning rate scheduler, Power scheduler, that is agnostic about the number of training tokens and batch size. The experiment shows that combining the Power scheduler
|
241_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
that is agnostic about the number of training tokens and batch size. The experiment shows that combining the Power scheduler with Maximum Update Parameterization (μP) can consistently achieve impressive performance with one set of hyperparameters regardless of the number of training tokens, batch size, model size, and even model architecture. Our 3B dense and MoE models trained with the Power scheduler achieve comparable performance as state-of-the-art small language models.
|
241_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
We [open source](https://huggingface.co/collections/ibm/power-lm-66be64ae647ddf11b9808000) these pretrained models.*
Tips:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
|
241_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
model_path = "ibm/PowerLM-3b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
model.eval()
# change input text as desired
prompt = "Write a code to find the maximum value in a list of numbers."
|
241_1_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#overview
|
.md
|
# tokenize the text
input_tokens = tokenizer(prompt, return_tensors="pt")
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
    print(i)
```
This model was contributed by [mayank-mishra](https://huggingface.co/mayank-mishra).
|
241_1_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
This is the configuration class to store the configuration of a [`GraniteModel`]. It is used to instantiate a Granite
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Granite-3B.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
|
241_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the Granite model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`GraniteModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
|
241_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
|
241_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details, check out [this
|
241_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
by meanpooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with.
|
241_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
|
241_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 1):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
|
241_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
|
241_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
attention_bias (`bool`, *optional*, defaults to `False`):
|
241_2_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
mlp_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers.
embedding_multiplier (`float`, *optional*, defaults to 1.0): embedding multiplier
|
241_2_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
embedding_multiplier (`float`, *optional*, defaults to 1.0): embedding multiplier
logits_scaling (`float`, *optional*, defaults to 1.0): divisor for output logits
residual_multiplier (`float`, *optional*, defaults to 1.0): residual multiplier
attention_multiplier (`float`, *optional*, defaults to 1.0): attention multiplier
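One plausible reading of where these multipliers act, shown schematically on dummy tensors (this mirrors the parameter descriptions above, not the library's literal code; all values are illustrative):
```python
import torch

embedding_multiplier = 12.0      # illustrative values, not real checkpoint settings
residual_multiplier = 0.22
attention_multiplier = 0.0078125
logits_scaling = 8.0

hidden = embedding_multiplier * torch.randn(1, 4, 64)        # scaled token embeddings
block_out = torch.randn(1, 4, 64)                            # output of a decoder sub-block
hidden = hidden + residual_multiplier * block_out            # scaled residual connection

q, k = torch.randn(1, 4, 16), torch.randn(1, 4, 16)
scores = attention_multiplier * (q @ k.transpose(-1, -2))    # scale applied to attention scores

logits = torch.randn(1, 4, 32000) / logits_scaling           # output logits are divided
```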
```python
>>> from transformers import GraniteModel, GraniteConfig
|
241_2_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteconfig
|
.md
|
>>> # Initializing a Granite granite-3b style configuration
>>> configuration = GraniteConfig()
>>> # Initializing a model from the granite-3b style configuration
>>> model = GraniteModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
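As a follow-up to the `rope_scaling` argument documented above, a hedged sketch with illustrative values:
```python
>>> from transformers import GraniteConfig

>>> # Illustrative values following the `{"type": ..., "factor": ...}` format described above
>>> configuration = GraniteConfig(rope_scaling={"type": "dynamic", "factor": 2.0})
```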
|
241_2_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#granitemodel
|
.md
|
The bare Granite Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
241_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#granitemodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`GraniteConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
241_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#granitemodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`GraniteDecoderLayer`]
Args:
config: GraniteConfig
Methods: forward
|
241_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granite.md
|
https://huggingface.co/docs/transformers/en/model_doc/granite/#graniteforcausallm
|
.md
|
No docstring available for GraniteForCausalLM
Methods: forward
|
241_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
242_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
242_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnet
|
.md
|
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=prophetnet">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-prophetnet-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/prophetnet-large-uncased">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
242_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#overview
|
.md
|
The ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
ProphetNet is an encoder-decoder model that can predict n future tokens for "ngram" language modeling instead of just
the next token.
The abstract from the paper is the following:
|
242_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#overview
|
.md
|
the next token.
The abstract from the paper is the following:
*In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
|
242_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#overview
|
.md
|
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
|
242_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#overview
|
.md
|
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.*
|
242_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#overview
|
.md
|
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.*
The Authors' code can be found [here](https://github.com/microsoft/ProphetNet).
|
242_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#usage-tips
|
.md
|
- ProphetNet is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
- The model architecture is based on the original Transformer, but replaces the “standard” self-attention mechanism in the decoder with a main self-attention mechanism plus a self- and n-stream (predict) self-attention mechanism.
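A hedged usage sketch for abstractive summarization (assuming the publicly released `microsoft/prophetnet-large-uncased-cnndm` checkpoint; the article text and generation settings are illustrative):
```python
from transformers import AutoTokenizer, ProphetNetForConditionalGeneration

model_name = "microsoft/prophetnet-large-uncased-cnndm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ProphetNetForConditionalGeneration.from_pretrained(model_name)

article = "The quick brown fox jumped over the lazy dog. " * 10  # placeholder article text
input_ids = tokenizer(article, return_tensors="pt", truncation=True, max_length=512).input_ids

summary_ids = model.generate(input_ids, num_beams=4, max_length=100)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```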
|
242_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#resources
|
.md
|
- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
242_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
This is the configuration class to store the configuration of a [`ProphetNetModel`]. It is used to instantiate a
ProphetNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the ProphetNet
[microsoft/prophetnet-large-uncased](https://huggingface.co/microsoft/prophetnet-large-uncased) architecture.
|
242_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
[microsoft/prophetnet-large-uncased](https://huggingface.co/microsoft/prophetnet-large-uncased) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
|
242_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the ProphetNet model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`ProphetNetModel`].
hidden_size (`int`, *optional*, defaults to 1024):
|
242_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
the `inputs_ids` passed when calling [`ProphetNetModel`].
hidden_size (`int`, *optional*, defaults to 1024):
Dimensionality of the layers and the pooler layer.
encoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
num_encoder_layers (`int`, *optional*, defaults to 12):
Number of encoder layers.
num_encoder_attention_heads (`int`, *optional*, defaults to 16):
|
242_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
Number of encoder layers.
num_encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_ffn_dim (`int`, *optional*, defaults to 4096):
Dimensionality of the `intermediate` (often named feed-forward) layer in the decoder.
num_decoder_layers (`int`, *optional*, defaults to 12):
Number of decoder layers.
num_decoder_attention_heads (`int`, *optional*, defaults to 16):
|
242_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
Number of decoder layers.
num_decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
max_position_embeddings (`int`, *optional*, defaults to 512):
|
242_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
add_cross_attention (`bool`, *optional*, defaults to `True`):
Whether cross-attention layers should be added to the model.
|
242_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
add_cross_attention (`bool`, *optional*, defaults to `True`):
Whether cross-attention layers should be added to the model.
is_encoder_decoder (`bool`, *optional*, defaults to `True`):
Whether this is an encoder/decoder model.
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 0):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
ngram (`int`, *optional*, defaults to 2):
|
242_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
ngram (`int`, *optional*, defaults to 2):
Number of future tokens to predict. Set to 1 to behave like a traditional language model that predicts only the next token.
num_buckets (`int`, *optional*, defaults to 32):
The number of buckets to use for each attention layer. This is for relative position calculation. See the
[T5 paper](https://arxiv.org/abs/1910.10683) for more details.
relative_max_distance (`int`, *optional*, defaults to 128):
|
242_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
[T5 paper](https://arxiv.org/abs/1910.10683) for more details.
relative_max_distance (`int`, *optional*, defaults to 128):
Relative distances greater than this number will be put into the last same bucket. This is for relative
position calculation. See the [T5 paper](https://arxiv.org/abs/1910.10683) for more details.
disable_ngram_loss (`bool`, *optional*, defaults to `False`):
Whether to train by predicting only the next first token.
eps (`float`, *optional*, defaults to 0.0):
|
242_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnetconfig
|
.md
|
Whether to train by predicting only the next first token.
eps (`float`, *optional*, defaults to 0.0):
Controls the `epsilon` parameter value for label smoothing in the loss calculation. If set to 0, no label
smoothing is performed.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
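Mirroring the configuration examples for the other models in this file, a minimal sketch (weights are randomly initialized):
```python
>>> from transformers import ProphetNetConfig, ProphetNetModel

>>> # Initializing a ProphetNet microsoft/prophetnet-large-uncased style configuration
>>> configuration = ProphetNetConfig()

>>> # Initializing a model from that configuration
>>> model = ProphetNetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```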
|
242_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnettokenizer
|
.md
|
Construct a ProphetNetTokenizer. Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
|
242_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnettokenizer
|
.md
|
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
Whether or not to do basic tokenization before WordPiece.
never_split (`Iterable`, *optional*):
Collection of tokens which will never be split during tokenization. Only has an effect when
`do_basic_tokenize=True`
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
242_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnettokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
x_sep_token (`str`, *optional*, defaults to `"[X_SEP]"`):
|
242_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnettokenizer
|
.md
|
token of a sequence built with special tokens.
x_sep_token (`str`, *optional*, defaults to `"[X_SEP]"`):
Special second separator token, which can be generated by [`ProphetNetForConditionalGeneration`]. It is
used to separate bullet-point-like sentences in summarization.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
|
242_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/prophetnet.md
|
https://huggingface.co/docs/transformers/en/model_doc/prophetnet/#prophetnettokenizer
|
.md
|
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
[issue](https://github.com/huggingface/transformers/issues/328)).
strip_accents (`bool`, *optional*):
|
242_6_4
|