source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#resources
|
.md
|
- A [notebook](https://colab.research.google.com/drive/1SYpgFpcmtIUzdE7pxqknrM4ArCASfkFQ?usp=sharing) on how to fine-tune the Llama 2 model on a personal computer using QLoRA and TRL. 🌎
⚡️ Inference
- A [notebook](https://colab.research.google.com/drive/1TC56ArKerXUpbgRy5vM3woRsbTEVNq7h?usp=sharing) on how to quantize the Llama 2 model using GPTQ from the AutoGPTQ library. 🌎
|
201_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#resources
|
.md
|
- A [notebook](https://colab.research.google.com/drive/1X1z9Q6domMKl2CnEM0QGHNwidLfR4dW2?usp=sharing) on how to run the Llama 2 Chat Model with 4-bit quantization on a local computer or Google Colab. 🌎
🚀 Deploy
- [Fine-tune LLaMA 2 (7-70B) on Amazon SageMaker](https://www.philschmid.de/sagemaker-llama2-qlora), a complete guide from setup to QLoRA fine-tuning and deployment on Amazon SageMaker.
|
201_3_5
|
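The 4-bit inference workflow referenced in the notebooks above can be sketched with the bitsandbytes integration; this is a minimal illustration rather than the notebooks' code, and the `meta-llama/Llama-2-7b-chat-hf` checkpoint name is an assumption (the official repositories are gated behind a license agreement).
```python
# Minimal sketch: load Llama 2 Chat in 4-bit with bitsandbytes (not the notebook's exact code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed, gated checkpoint
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # weights are quantized to 4-bit on load
    device_map="auto",
)
```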
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#resources
|
.md
|
- [Deploy Llama 2 7B/13B/70B on Amazon SageMaker](https://www.philschmid.de/sagemaker-llama-llm), a guide on using Hugging Face's LLM DLC container for secure and scalable deployment.
|
201_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
This is the configuration class to store the configuration of a [`LlamaModel`]. It is used to instantiate an LLaMA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LLaMA-7B.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
|
201_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`LlamaModel`].
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
|
201_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
|
201_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by mean-pooling all the original heads within that group. For more details, check out [this
|
201_4_3
|
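As a quick illustration of the rule above, here is a minimal sketch of how `num_key_value_heads` selects between MHA, MQA and GQA; the head counts are illustrative, not taken from a released checkpoint.
```python
from transformers import LlamaConfig

mha = LlamaConfig(num_attention_heads=32, num_key_value_heads=32)  # MHA: one KV head per query head
mqa = LlamaConfig(num_attention_heads=32, num_key_value_heads=1)   # MQA: a single shared KV head
gqa = LlamaConfig(num_attention_heads=32, num_key_value_heads=8)   # GQA: 8 KV heads, 4 query heads per group
```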
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
by mean-pooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Llama 1 supports up to 2048 tokens,
|
201_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Llama 1 supports up to 2048 tokens,
Llama 2 up to 4096, CodeLlama up to 16384.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
|
201_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 1):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
pretraining_tp (`int`, *optional*, defaults to 1):
|
201_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
pretraining_tp (`int`, *optional*, defaults to 1):
Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to
understand more about it. This value is necessary to ensure exact reproducibility of the pretraining
results. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/76232).
|
201_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
results. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/76232).
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
|
201_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and you expect the model to work on a longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
|
201_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
|
201_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to the value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
|
201_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
|
201_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
|
201_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
attention_bias (`bool`, *optional*, defaults to `False`):
|
201_4_14
|
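A minimal sketch of passing such a `rope_scaling` dictionary, using the 'linear' sub-variant with illustrative values (not tuned for any particular checkpoint); note that `max_position_embeddings` is raised to match, as recommended above.
```python
from transformers import LlamaConfig

config = LlamaConfig(
    max_position_embeddings=16384,                          # raised to match the scaled context length
    rope_scaling={"rope_type": "linear", "factor": 4.0},    # roughly 4x the pre-trained context
)
```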
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
mlp_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers.
head_dim (`int`, *optional*):
|
201_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers.
head_dim (`int`, *optional*):
The attention head dimension. If None, it will default to hidden_size // num_attention_heads
```python
>>> from transformers import LlamaModel, LlamaConfig
|
201_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaconfig
|
.md
|
>>> # Initializing a LLaMA llama-7b style configuration
>>> configuration = LlamaConfig()
>>> # Initializing a model from the llama-7b style configuration
>>> model = LlamaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
201_4_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is
no padding token in the original model.
Args:
vocab_file (`str`):
Path to the vocabulary file.
unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`):
|
201_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"</s>"`):
The end of sequence token.
pad_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token used to make arrays of tokens the same size for batching purposes. It will then be ignored by
|
201_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
A special token used to make arrays of tokens the same size for batching purposes. It will then be ignored by
attention mechanisms or loss computation.
sp_model_kwargs (`Dict[str, Any]`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
|
201_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite, samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
|
201_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
|
201_5_4
|
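A minimal sketch of enabling subword regularization through `sp_model_kwargs`; the checkpoint name reuses `huggyllama/llama-7b` from the legacy examples on this page, and the sampling values are illustrative assumptions.
```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained(
    "huggyllama/llama-7b",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},  # illustrative values
)
```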
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Llama should be used.
spaces_between_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to add spaces between special tokens.
legacy (`bool`, *optional*):
|
201_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
Whether or not to add spaces between special tokens.
legacy (`bool`, *optional*):
Whether or not the `legacy` behavior of the tokenizer should be used. Legacy is before the merge of #24622
and #25224 which includes fixes to properly handle tokens that appear after special tokens.
Make sure to also set `from_slow` to `True`.
A simple example:
- `legacy=True`:
```python
>>> from transformers import LlamaTokenizerFast
|
201_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
>>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=True, from_slow=True)
>>> tokenizer.encode("Hello <s>.") # 869 is 'โ.'
[1, 15043, 29871, 1, 869]
```
- `legacy=False`:
```python
>>> from transformers import LlamaTokenizerFast
|
201_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
>>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=False, from_slow=True)
>>> tokenizer.encode("Hello <s>.") # 29889 is '.'
[1, 15043, 29871, 1, 29889]
```
Check out the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details.
add_prefix_space (`bool`, *optional*, defaults to `True`):
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
|
201_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizer
|
.md
|
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. Again, this should be set with `from_slow=True` to make sure it's taken into account.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
|
201_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding.
Notably, this uses ByteFallback and no normalization.
```python
>>> from transformers import LlamaTokenizerFast
|
201_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
>>> tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
>>> tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]
```
If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the tokenizer, or
call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the
|
201_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the
values of the first token and final token of an encoded sequence will not be correct). For more details, check out
the [post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
|
201_6_2
|
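A minimal sketch of the pattern described above, reusing the `hf-internal-testing/llama-tokenizer` checkpoint from the earlier example; after changing a special-token setting on an already-loaded tokenizer, `update_post_processor()` keeps the template in sync.
```python
from transformers import LlamaTokenizerFast

tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
tokenizer.add_eos_token = True       # change the post-processing behaviour after loading
tokenizer.update_post_processor()    # refresh the template so </s> is actually appended
ids = tokenizer.encode("Hello this is a test")  # now ends with the EOS id
```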
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`, *optional*):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that
contains the vocabulary necessary to instantiate a tokenizer.
tokenizer_file (`str`, *optional*):
[tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that
contains everything needed to load the tokenizer.
|
201_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up spaces after decoding; cleanup consists of removing potential artifacts like
extra spaces.
unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
201_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"</s>"`):
The end of sequence token.
add_bos_token (`bool`, *optional*, defaults to `True`):
|
201_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
The end of sequence token.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether or not to add a `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Llama should be used
legacy (`bool`, *optional*):
|
201_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
Whether or not the default system prompt for Llama should be used
legacy (`bool`, *optional*):
Whether or not the `legacy` behavior of the tokenizer should be used. Legacy is before the merge of #24622
and #25224 which includes fixes to properly handle tokens that appear after special tokens.
Make sure to also set `from_slow` to `True`.
A simple example:
- `legacy=True`:
```python
>>> from transformers import LlamaTokenizerFast
|
201_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
>>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=True, from_slow=True)
>>> tokenizer.encode("Hello <s>.") # 869 is 'โ.'
[1, 15043, 29871, 1, 869]
```
- `legacy=False`:
```python
>>> from transformers import LlamaTokenizerFast
|
201_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
>>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=False, from_slow=True)
>>> tokenizer.encode("Hello <s>.") # 29889 is '.'
[1, 15043, 29871, 1, 29889]
```
Check out the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details.
add_prefix_space (`bool`, *optional*):
Whether or not the tokenizer should automatically add a prefix space
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
|
201_6_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamatokenizerfast
|
.md
|
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
|
201_6_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamamodel
|
.md
|
The bare LLaMA Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
201_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamamodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`LlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
201_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamamodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`]
Args:
config: LlamaConfig
Methods: forward
|
201_7_2
|
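A minimal sketch of a forward pass through the bare decoder with a tiny, randomly initialized configuration (the sizes are illustrative and far smaller than any released checkpoint).
```python
import torch
from transformers import LlamaConfig, LlamaModel

config = LlamaConfig(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=4,
)
model = LlamaModel(config)  # random weights

input_ids = torch.randint(0, config.vocab_size, (1, 8))
outputs = model(input_ids=input_ids)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 8, 64])
```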
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaforcausallm
|
.md
|
No docstring available for LlamaForCausalLM
Methods: forward
|
201_8_0
|
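Even without a docstring here, the class follows the standard causal-LM interface. A minimal, hedged sketch of generation; the `meta-llama/Llama-2-7b-hf` checkpoint name is an assumption and access to it is gated.
```python
from transformers import AutoTokenizer, LlamaForCausalLM

model_id = "meta-llama/Llama-2-7b-hf"  # assumed, gated checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```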
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaforsequenceclassification
|
.md
|
The LLaMa Model transformer with a sequence classification head on top (linear layer).
[`LlamaForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
201_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaforsequenceclassification
|
.md
|
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
201_9_1
|
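A minimal sketch of the last-token classification behaviour described above, with a tiny randomly initialized model; no released classification checkpoint is assumed.
```python
import torch
from transformers import LlamaConfig, LlamaForSequenceClassification

config = LlamaConfig(
    vocab_size=1000, hidden_size=64, intermediate_size=128,
    num_hidden_layers=2, num_attention_heads=4,
    num_labels=2, pad_token_id=0,
)
model = LlamaForSequenceClassification(config)  # random weights

input_ids = torch.randint(1, config.vocab_size, (1, 8))  # no padding tokens in this batch
logits = model(input_ids=input_ids).logits               # computed from the last (non-padding) token
print(logits.shape)  # torch.Size([1, 2])
```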
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaforsequenceclassification
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
|
201_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/llama2.md
|
https://huggingface.co/docs/transformers/en/model_doc/llama2/#llamaforsequenceclassification
|
.md
|
and behavior.
Parameters:
config ([`LlamaConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
201_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
202_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
202_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#m-ctc-t
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.
</Tip>
|
202_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#overview
|
.md
|
The M-CTC-T model was proposed in [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained
|
202_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#overview
|
.md
|
Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16 kHz audio signal.
|
202_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#overview
|
.md
|
The abstract from the paper is the following:
*Semi-supervised learning through pseudo-labeling has become a staple of state-of-the-art monolingual
speech recognition systems. In this work, we extend pseudo-labeling to massively multilingual speech
recognition with 60 languages. We propose a simple pseudo-labeling recipe that works well even
with low-resource languages: train a supervised multilingual model, fine-tune it with semi-supervised
|
202_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#overview
|
.md
|
with low-resource languages: train a supervised multilingual model, fine-tune it with semi-supervised
learning on a target language, generate pseudo-labels for that language, and train a final model using
pseudo-labels for all languages, either from scratch or by fine-tuning. Experiments on the labeled
Common Voice and unlabeled VoxPopuli datasets show that our recipe can yield a model with better
performance for many languages that also transfers well to LibriSpeech.*
|
202_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#overview
|
.md
|
performance for many languages that also transfers well to LibriSpeech.*
This model was contributed by [cwkeam](https://huggingface.co/cwkeam). The original code can be found [here](https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl).
|
202_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#usage-tips
|
.md
|
The PyTorch version of this model is only available in torch 1.9 and higher.
|
202_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#resources
|
.md
|
- [Automatic speech recognition task guide](../tasks/asr)
|
202_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
This is the configuration class to store the configuration of a [`MCTCTModel`]. It is used to instantiate an
M-CTC-T model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the M-CTC-T
[speechbrain/m-ctc-t-large](https://huggingface.co/speechbrain/m-ctc-t-large) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
202_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 8065):
Vocabulary size of the M-CTC-T model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`MCTCTModel`].
hidden_size (`int`, *optional*, defaults to 1536):
Dimension of the encoder layers and the pooler layer.
|
202_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 1536):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 36):
Number of hidden layers in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 6144):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 4):
Number of attention heads for each attention layer in the Transformer encoder.
|
202_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
Number of attention heads for each attention layer in the Transformer encoder.
attention_head_dim (`int`, *optional*, defaults to 384):
Dimensions of each attention head for each attention layer in the Transformer encoder.
max_position_embeddings (`int`, *optional*, defaults to 920):
The maximum sequence length that this model might ever be used with (after log-mel spectrogram extraction).
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
|
202_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
layerdrop (`float`, *optional*, defaults to 0.3):
The probability of dropping an encoder layer during training. The default 0.3 value is used in the original
implementation.
hidden_act (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
|
202_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
`"relu"`, `"selu"` and `"gelu_new"` are supported.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
hidden_dropout_prob (`float`, *optional*, defaults to 0.3):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.3):
The dropout ratio for the attention probabilities.
|
202_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.3):
The dropout ratio for the attention probabilities.
pad_token_id (`int`, *optional*, defaults to 1):
The tokenizer index of the pad token.
bos_token_id (`int`, *optional*, defaults to 0):
The tokenizer index of the bos token.
eos_token_id (`int`, *optional*, defaults to 2):
The tokenizer index of the eos token.
conv_glu_dim (`int`, *optional*, defaults to 1):
|
202_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
The tokenizer index of the eos token.
conv_glu_dim (`int`, *optional*, defaults to 1):
The dimension of the output of the `Conv1dSubsampler` layer on which GLU is applied. Though the original
Flashlight code uses the value of 2, here it's adapted to 1 due to transposition differences.
conv_dropout (`float`, *optional*, defaults to 0.3):
The probability of randomly dropping the `Conv1dSubsampler` layer during training.
num_conv_layers (`int`, *optional*, defaults to 1):
|
202_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
num_conv_layers (`int`, *optional*, defaults to 1):
Number of convolution layers before applying transformer encoder layers.
conv_kernel (`Sequence[int]`, *optional*, defaults to `(7,)`):
The kernel size of the 1D convolution applied before transformer layers. `len(conv_kernel)` must be equal
to `num_conv_layers`.
conv_stride (`Sequence[int]`, *optional*, defaults to `(3,)`):
The stride length of the 1D convolution applied before transformer layers. `len(conv_stride)` must be equal
to `num_conv_layers`.
|
202_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
to `num_conv_layers`.
input_feat_per_channel (`int`, *optional*, defaults to 80):
Feature dimensions of the channels of the input to the Conv1D layer.
input_channels (`int`, *optional*, defaults to 1):
Number of input channels of the input to the Conv1D layer.
conv_channels (`List[int]`, *optional*):
Channel sizes of intermediate Conv1D layers.
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
|
202_5_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
instance of [`MCTCTForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`MCTCTForCTC`].
Example:
```python
>>> from transformers import MCTCTConfig, MCTCTModel
|
202_5_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctconfig
|
.md
|
>>> # Initializing a M-CTC-T mctct-large style configuration
>>> configuration = MCTCTConfig()
>>> # Initializing a model (with random weights) from the mctct-large style configuration
>>> model = MCTCTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
202_5_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctfeatureextractor
|
.md
|
Constructs an M-CTC-T feature extractor.
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods. This
code has been adapted from Flashlight's C++ code. For more information about the implementation, one can refer to
this [notebook](https://colab.research.google.com/drive/1GLtINkkhzms-IsdcGy_-tVCkv0qNF-Gt#scrollTo=pMCRGMmUC_an)
|
202_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctfeatureextractor
|
.md
|
this [notebook](https://colab.research.google.com/drive/1GLtINkkhzms-IsdcGy_-tVCkv0qNF-Gt#scrollTo=pMCRGMmUC_an)
that takes the user step-by-step in the implementation.
Args:
feature_size (`int`, defaults to 80):
The feature dimension of the extracted features. This is the number of mel-frequency bins.
sampling_rate (`int`, defaults to 16000):
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
padding_value (`float`, defaults to 0.0):
|
202_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctfeatureextractor
|
.md
|
padding_value (`float`, defaults to 0.0):
The value that is used to fill the padding values.
hop_length (`int`, defaults to 10):
Number of audio samples between windows. Otherwise referred to as "shift" in many papers.
win_length (`int`, defaults to 25):
Number of milliseconds per window.
win_function (`str`, defaults to `"hamming_window"`):
Name for the window function used for windowing, must be accessible via `torch.{win_function}`
frame_signal_scale (`float`, defaults to 32768.0):
|
202_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctfeatureextractor
|
.md
|
frame_signal_scale (`float`, defaults to 32768.0):
Constant multiplied in creating the frames before applying DFT.
preemphasis_coeff (`float`, defaults to 0.97):
Constant multiplied in applying Pre-emphasis before DFT.
mel_floor (`float`, defaults to 1.0):
Minimum value of mel frequency banks.
normalize_means (`bool`, *optional*, defaults to `True`):
Whether or not to zero-mean normalize the extracted features.
normalize_vars (`bool`, *optional*, defaults to `True`):
|
202_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctfeatureextractor
|
.md
|
Whether or not to zero-mean normalize the extracted features.
normalize_vars (`bool`, *optional*, defaults to `True`):
Whether or not to unit-variance normalize the extracted features.
Methods: __call__
|
202_6_4
|
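A minimal sketch of extracting Mel filterbank features from a dummy one-second 16 kHz signal; it assumes a transformers version that still ships M-CTC-T (e.g. v4.30.0, per the maintenance-mode tip) and reuses the `speechbrain/m-ctc-t-large` checkpoint named in the configuration section.
```python
import numpy as np
from transformers import MCTCTFeatureExtractor

feature_extractor = MCTCTFeatureExtractor.from_pretrained("speechbrain/m-ctc-t-large")
audio = np.random.randn(16000).astype(np.float32)  # one second of fake 16 kHz audio
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
print(inputs.input_features.shape)  # (batch, frames, 80 mel bins)
```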
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctprocessor
|
.md
|
Constructs an MCTCT processor which wraps an MCTCT feature extractor and an MCTCT tokenizer into a single processor.
[`MCTCTProcessor`] offers all the functionalities of [`MCTCTFeatureExtractor`] and [`AutoTokenizer`]. See the
[`~MCTCTProcessor.__call__`] and [`~MCTCTProcessor.decode`] for more information.
Args:
feature_extractor (`MCTCTFeatureExtractor`):
An instance of [`MCTCTFeatureExtractor`]. The feature extractor is a required input.
tokenizer (`AutoTokenizer`):
|
202_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctprocessor
|
.md
|
An instance of [`MCTCTFeatureExtractor`]. The feature extractor is a required input.
tokenizer (`AutoTokenizer`):
An instance of [`AutoTokenizer`]. The tokenizer is a required input.
Methods: __call__
- from_pretrained
- save_pretrained
- batch_decode
- decode
|
202_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctmodel
|
.md
|
The bare M-CTC-T Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MCTCTConfig`]): Model configuration class with all the parameters of the model.
|
202_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctmodel
|
.md
|
behavior.
Parameters:
config ([`MCTCTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
202_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctforctc
|
.md
|
MCTCT Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MCTCTConfig`]): Model configuration class with all the parameters of the model.
|
202_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mctct.md
|
https://huggingface.co/docs/transformers/en/model_doc/mctct/#mctctforctc
|
.md
|
behavior.
Parameters:
config ([`MCTCTConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
202_9_1
|
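A minimal end-to-end sketch of CTC transcription, combining the processor and the CTC head; it again assumes a transformers version that still includes M-CTC-T and uses random audio, so the decoded text is meaningless.
```python
import numpy as np
import torch
from transformers import MCTCTForCTC, MCTCTProcessor

processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")
model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large")

audio = np.random.randn(16000).astype(np.float32)  # one second of fake 16 kHz audio
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")  # routed to the feature extractor
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)        # greedy CTC decoding
transcription = processor.batch_decode(predicted_ids)  # routed to the tokenizer
```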
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
203_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
203_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbot-small
|
.md
|
Note that [`BlenderbotSmallModel`] and
[`BlenderbotSmallForConditionalGeneration`] are only used in combination with the checkpoint
[facebook/blenderbot-90M](https://huggingface.co/facebook/blenderbot-90M). Larger Blenderbot checkpoints should
instead be used with [`BlenderbotModel`] and
[`BlenderbotForConditionalGeneration`].
|
203_1_0
|
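A minimal sketch of pairing the 90M checkpoint with the BlenderbotSmall classes, as the note above describes; larger checkpoints would go through `BlenderbotForConditionalGeneration` instead.
```python
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

model_id = "facebook/blenderbot-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_id)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```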
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#overview
|
.md
|
The Blender chatbot model was proposed in [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
*Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
|
203_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#overview
|
.md
|
*Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
|
203_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#overview
|
.md
|
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
|
203_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#overview
|
.md
|
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.*
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be
|
203_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#overview
|
.md
|
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be
found [here](https://github.com/facebookresearch/ParlAI).
|
203_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#usage-tips
|
.md
|
Blenderbot Small is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
|
203_3_0
|
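A minimal sketch of right-padding a batch, in line with the tip above; the `facebook/blenderbot_small-90M` checkpoint name is taken from the configuration section below.
```python
from transformers import BlenderbotSmallTokenizer

tokenizer = BlenderbotSmallTokenizer.from_pretrained(
    "facebook/blenderbot_small-90M", padding_side="right"
)
batch = tokenizer(
    ["Hello there!", "How has your day been?"],
    padding=True, return_tensors="pt",  # shorter sequences get padded on the right
)
```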
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#resources
|
.md
|
- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
|
203_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
This is the configuration class to store the configuration of a [`BlenderbotSmallModel`]. It is used to instantiate
a BlenderbotSmall model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the BlenderbotSmall
[facebook/blenderbot_small-90M](https://huggingface.co/facebook/blenderbot_small-90M) architecture.
|
203_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
[facebook/blenderbot_small-90M](https://huggingface.co/facebook/blenderbot_small-90M) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the BlenderbotSmall model. Defines the number of different tokens that can be
|
203_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
Vocabulary size of the BlenderbotSmall model. Defines the number of different tokens that can be
represented by the `input_ids` passed when calling [`BlenderbotSmallModel`] or [`TFBlenderbotSmallModel`].
d_model (`int`, *optional*, defaults to 512):
Dimensionality of the layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 8):
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 8):
Number of decoder layers.
|
203_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
Number of encoder layers.
decoder_layers (`int`, *optional*, defaults to 8):
Number of decoder layers.
encoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
|
203_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
decoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
encoder_ffn_dim (`int`, *optional*, defaults to 2048):
Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
|
203_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (`int`, *optional*, defaults to 512):
|
203_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blenderbot-small.md
|
https://huggingface.co/docs/transformers/en/model_doc/blenderbot-small/#blenderbotsmallconfig
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (`float`, *optional*, defaults to 0.0):
|
203_5_6
|