source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertmodel
|
.md
|
The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
339_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
339_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertmodel
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in *Attention is
|
339_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertmodel
|
.md
|
cross-attention is added between the self-attention layers, following the architecture described in *Attention is
all you need* by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set to
`True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
|
339_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertmodel
|
.md
|
`True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
`add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
[*Attention is all you need*](https://arxiv.org/abs/1706.03762)
|
339_6_4
|
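To make the decoder configuration described above concrete, here is a minimal, hedged sketch (the checkpoint name, input text and random encoder states are illustrative only) that enables `is_decoder` and `add_cross_attention` and feeds `encoder_hidden_states` to the forward pass:
```python
>>> import torch
>>> from transformers import AutoTokenizer, CamembertConfig, CamembertModel

>>> # Illustrative checkpoint; any CamemBERT checkpoint should work the same way.
>>> config = CamembertConfig.from_pretrained("camembert-base", is_decoder=True, add_cross_attention=True)
>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = CamembertModel.from_pretrained("camembert-base", config=config)

>>> inputs = tokenizer("Le camembert est délicieux.", return_tensors="pt")
>>> # In a real Seq2Seq setup these states would come from a separate encoder.
>>> encoder_hidden_states = torch.randn(1, 8, config.hidden_size)
>>> outputs = model(**inputs, encoder_hidden_states=encoder_hidden_states)
>>> last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```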
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforcausallm
|
.md
|
CamemBERT Model with a `language modeling` head on top for CLM fine-tuning.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
339_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforcausallm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
339_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforcausallm
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
339_7_2
|
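As a hedged sketch of the CLM head described above (the checkpoint and prompt are placeholders), passing `labels` alongside the inputs returns the causal language-modeling loss used for fine-tuning:
```python
>>> from transformers import AutoTokenizer, CamembertForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> # is_decoder=True makes the stack behave as a left-to-right decoder.
>>> model = CamembertForCausalLM.from_pretrained("camembert-base", is_decoder=True)

>>> inputs = tokenizer("Le camembert est un fromage", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss      # causal LM loss for fine-tuning
>>> logits = outputs.logits  # (batch_size, sequence_length, vocab_size)
```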
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertformaskedlm
|
.md
|
CamemBERT Model with a `language modeling` head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
339_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertformaskedlm
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
339_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertformaskedlm
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
339_8_2
|
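A short, hedged illustration of the masked-LM head in use (the checkpoint and sentence are examples only); note that CamemBERT uses `<mask>` as its mask token:
```python
>>> from transformers import pipeline

>>> fill_mask = pipeline("fill-mask", model="camembert-base")
>>> fill_mask("Le camembert est <mask> :)")  # returns the top candidate tokens with their scores
```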
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforsequenceclassification
|
.md
|
CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
339_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforsequenceclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
339_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforsequenceclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
339_9_2
|
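A minimal, hedged sketch of the sequence-classification head (the checkpoint is illustrative; loading a base checkpoint attaches a randomly initialized head meant for fine-tuning):
```python
>>> import torch
>>> from transformers import AutoTokenizer, CamembertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = CamembertForSequenceClassification.from_pretrained("camembert-base", num_labels=2)

>>> inputs = tokenizer("Ce film était excellent.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch_size, num_labels)
>>> predicted_class_id = int(logits.argmax(dim=-1))
```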
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertformultiplechoice
|
.md
|
CamemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
339_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertformultiplechoice
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
339_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertformultiplechoice
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
339_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertfortokenclassification
|
.md
|
CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
339_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertfortokenclassification
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
|
339_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertfortokenclassification
|
.md
|
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
339_11_2
|
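Similarly, a hedged sketch of the token-classification head for NER-style tagging (the checkpoint and number of labels are illustrative; a base checkpoint starts with a randomly initialized head):
```python
>>> import torch
>>> from transformers import AutoTokenizer, CamembertForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = CamembertForTokenClassification.from_pretrained("camembert-base", num_labels=5)

>>> inputs = tokenizer("Jean habite à Paris.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch_size, sequence_length, num_labels)
>>> predicted_token_classes = logits.argmax(dim=-1)
```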
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforquestionanswering
|
.md
|
CamemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
|
339_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforquestionanswering
|
.md
|
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
|
339_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#camembertforquestionanswering
|
.md
|
and behavior.
Parameters:
config ([`CamembertConfig`]): Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
</pt>
<tf>
|
339_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#tfcamembertmodel
|
.md
|
No docstring available for TFCamembertModel
|
339_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#tfcamembertforcausallm
|
.md
|
No docstring available for TFCamembertForCausalLM
|
339_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#tfcamembertformaskedlm
|
.md
|
No docstring available for TFCamembertForMaskedLM
|
339_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#tfcamembertforsequenceclassification
|
.md
|
No docstring available for TFCamembertForSequenceClassification
|
339_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#tfcamembertformultiplechoice
|
.md
|
No docstring available for TFCamembertForMultipleChoice
|
339_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#tfcamembertfortokenclassification
|
.md
|
No docstring available for TFCamembertForTokenClassification
|
339_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/camembert.md
|
https://huggingface.co/docs/transformers/en/model_doc/camembert/#tfcamembertforquestionanswering
|
.md
|
No docstring available for TFCamembertForQuestionAnswering
</tf>
</frameworkcontent>
|
339_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
340_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
340_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#overview
|
.md
|
We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of [https://github.com/EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts.
|
340_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#overview
|
.md
|
Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts.
To address this distinct structure of the Japanese language, we use a [special sub-word tokenizer](https://github.com/tanreinama/Japanese-BPEEncoder_V2). We are very grateful to *tanreinama* for open-sourcing this incredibly helpful tokenizer.
|
340_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#overview
|
.md
|
Following the recommendations from Google's research on [PaLM](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), we have removed bias parameters from transformer blocks, achieving better model performance. Please refer to [this article](https://medium.com/ml-abeja/training-a-better-gpt-2-93b157662ae4) for details.
|
340_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#overview
|
.md
|
Development of the model was led by [Shinya Otani](https://github.com/SO0529), [Takayoshi Makabe](https://github.com/spider-man-tm), [Anuj Arora](https://github.com/Anuj040), and [Kyo Hattori](https://github.com/go5paopao) from [ABEJA, Inc.](https://www.abejainc.com/). For more information on this model-building activity, please see [here (ja)](https://tech-blog.abeja.asia/entry/abeja-gpt-project-202207).
|
340_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#usage-example
|
.md
|
The `generate()` method can be used to generate text with the GPT NeoX Japanese model.
```python
>>> from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer
>>> model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b")
>>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
>>> prompt = "人とAIが協調するためには、"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
|
340_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#usage-example
|
.md
|
>>> prompt = "人とAIが協調するためには、"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
>>> gen_tokens = model.generate(
... input_ids,
... do_sample=True,
... temperature=0.9,
... max_length=100,
... )
>>> gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0]
>>> print(gen_text)
人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。
```
|
340_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#resources
|
.md
|
- [Causal language modeling task guide](../tasks/language_modeling)
|
340_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
This is the configuration class to store the configuration of a [`GPTNeoXJapaneseModel`]. It is used to instantiate
a GPTNeoX model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GPTNeoXJapanese
[abeja/gpt-neox-japanese-2.7b](https://huggingface.co/abeja/gpt-neox-japanese-2.7b) architecture.
|
340_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
[abeja/gpt-neox-japanese-2.7b](https://huggingface.co/abeja/gpt-neox-japanese-2.7b) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information. The default configuration matches that of the 2.7B model.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the GPTNeoXJapanese model. Defines the number of different tokens that can be
|
340_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
Vocabulary size of the GPTNeoXJapanese model. Defines the number of different tokens that can be
represented by the `input_ids` passed when calling [`GPTNeoXJapaneseModel`].
hidden_size (`int`, *optional*, defaults to 2560):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
|
340_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_multiple_size (`int`, *optional*, defaults to 4):
Dimension of the "intermediate" layer in the Transformer encoder, calculated as hidden_size *
intermediate_multiple_size.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
|
340_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
intermediate_multiple_size.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler.
rotary_pct (`float`, *optional*, defaults to 1.00):
Percentage of hidden dimensions to allocate to rotary embeddings.
rotary_emb_base (`int`, *optional*, defaults to 10000):
Base for computing the rotary embedding frequencies.
max_position_embeddings (`int`, *optional*, defaults to 2048):
|
340_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
base for computing rotary embeddings frequency
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
|
340_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and expect the model to work with a longer `max_position_embeddings`, we recommend updating this value
accordingly.
|
340_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
and expect the model to work with a longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
|
340_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
|
340_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to the value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
|
340_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
|
340_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
|
340_4_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention.
hidden_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the hidden layer.
Example:
```python
>>> from transformers import GPTNeoXJapaneseConfig, GPTNeoXJapaneseModel
|
340_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseconfig
|
.md
|
>>> # Initializing a GPTNeoXJapanese gpt-neox-japanese-2.7b style configuration
>>> configuration = GPTNeoXJapaneseConfig()
>>> # Initializing a model (with random weights) from the gpt-neox-japanese-2.7b style configuration
>>> model = GPTNeoXJapaneseModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
340_4_13
|
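Building on the `rope_scaling` argument documented above, here is a hedged sketch of enabling RoPE scaling when constructing the configuration (the `rope_type` and `factor` values are arbitrary examples):
```python
>>> from transformers import GPTNeoXJapaneseConfig, GPTNeoXJapaneseModel

>>> # Linear RoPE scaling with factor 2.0 roughly targets sequences up to
>>> # 2 x the original pre-trained maximum length (values are illustrative).
>>> configuration = GPTNeoXJapaneseConfig(
...     max_position_embeddings=4096,
...     rope_scaling={"rope_type": "linear", "factor": 2.0},
... )
>>> model = GPTNeoXJapaneseModel(configuration)
```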
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesetokenizer
|
.md
|
This tokenizer inherits from [`PreTrainedTokenizer`] and is based on the special Japanese sub-word encoding that is
used in this repository (https://github.com/tanreinama/Japanese-BPEEncoder_V2). Check the repository for details.
Japanese has a relatively large vocabulary and there is no separation between words. Furthermore, the language is a
combination of hiragana, katakana, and kanji, and variants such as "1" and "①" are often used. In order to cope
with these, this tokenizer has the following features:
|
340_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesetokenizer
|
.md
|
with these, this tokenizer has the following features:
- Subword-by-subword segmentation, which is intermediate between byte strings and morphological analysis.
- BPEs are created for each Kanji, Hiragana, and Katakana character, and there are no BPEs that cross character
types, such as Kanji + Hiragana or Hiragana + Katakana.
- All-byte encoding that does not require `<unk>`.
- Independent of UTF codes such as 2-byte and 3-byte characters.
- Conversion of heterographs to the same `token_id`.
|
340_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesetokenizer
|
.md
|
- Independent of UTF codes such as 2-byte and 3-byte characters.
- Conversion of heterographs to the same `token_id`.
- Emoji and Emoticon are grouped into 12 types as special tags.
Example:
```python
>>> from transformers import GPTNeoXJapaneseTokenizer
|
340_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesetokenizer
|
.md
|
>>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
>>> # You can confirm both 慶応 and 慶應 are encoded to 17749
>>> tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]
[30014, 26883, 26638, 27228, 25, 26650, 31732, 31679, 27809, 26638, 17749, 31592, 17749, 31593, 321, 1281]
|
340_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesetokenizer
|
.md
|
>>> # Both 慶応 and 慶應 are decoded to 慶応
>>> tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"])
'吾輩は猫である🐯。実は慶応(慶応)大学出身'
```
Args:
vocab_file (`str`):
File containing the vocabulary.
emoji_file (`str`):
File containing the emoji.
unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
|
340_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesetokenizer
|
.md
|
token instead.
pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The token used for padding
bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`):
The beginning of sequence token.
eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
The end of sequence token.
do_clean_text (`bool`, *optional*, defaults to `False`):
Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE.
|
340_5_5
|
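As a small, hedged illustration of the `do_clean_text` option above (the input string is only an example), the flag can be passed when loading the tokenizer:
```python
>>> from transformers import GPTNeoXJapaneseTokenizer

>>> # do_clean_text=True normalizes URLs, e-mail addresses, phone numbers,
>>> # Japanese dates and Japanese prices before tokenization.
>>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained(
...     "abeja/gpt-neox-japanese-2.7b", do_clean_text=True
... )
>>> input_ids = tokenizer("詳細は https://example.com まで")["input_ids"]
```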
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesemodel
|
.md
|
The bare GPTNeoXJapanese Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~GPTNeoXJapaneseConfig`]): Model configuration class with all the parameters of the model.
|
340_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapanesemodel
|
.md
|
behavior.
Parameters:
config ([`~GPTNeoXJapaneseConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
340_6_1
|
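A minimal, hedged sketch of calling the bare model's `forward` to obtain hidden states (the prompt is illustrative):
```python
>>> import torch
>>> from transformers import GPTNeoXJapaneseModel, GPTNeoXJapaneseTokenizer

>>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
>>> model = GPTNeoXJapaneseModel.from_pretrained("abeja/gpt-neox-japanese-2.7b")

>>> inputs = tokenizer("人とAIが協調するためには、", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```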
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseforcausallm
|
.md
|
GPTNeoXJapanese Model with a `language modeling` head on top for CLM fine-tuning.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`~GPTNeoXJapaneseConfig`]): Model configuration class with all the parameters of the model.
|
340_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gpt_neox_japanese.md
|
https://huggingface.co/docs/transformers/en/model_doc/gpt_neox_japanese/#gptneoxjapaneseforcausallm
|
.md
|
behavior.
Parameters:
config ([`~GPTNeoXJapaneseConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
340_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
341_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
341_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#overview
|
.md
|
The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
|
341_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#overview
|
.md
|
*Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose
|
341_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#overview
|
.md
|
and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and
|
341_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#overview
|
.md
|
with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.*
|
341_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#overview
|
.md
|
This model was contributed by [novice03](https://huggingface.co/novice03).
The original code can be found [here](https://github.com/mlpen/mra-attention).
|
341_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
This is the configuration class to store the configuration of a [`MraModel`]. It is used to instantiate an MRA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Mra
[uw-madison/mra-base-512-4](https://huggingface.co/uw-madison/mra-base-512-4) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
341_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 50265):
Vocabulary size of the Mra model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`MraModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
|
341_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
341_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
341_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 1):
|
341_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 1):
The vocabulary size of the `token_type_ids` passed when calling [`MraModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
|
341_2_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
The epsilon used by the layer normalization layers.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`.
block_per_row (`int`, *optional*, defaults to 4):
Used to set the budget for the high resolution scale.
approx_mode (`str`, *optional*, defaults to `"full"`):
Controls whether both low and high resolution approximations are used. Set to `"full"` for both low and
|
341_2_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
Controls whether both low and high resolution approximations are used. Set to `"full"` for both low and
high resolution and `"sparse"` for only low resolution.
initial_prior_first_n_blocks (`int`, *optional*, defaults to 0):
The initial number of blocks for which high resolution is used.
initial_prior_diagonal_n_blocks (`int`, *optional*, defaults to 0):
The number of diagonal blocks for which high resolution is used.
Example:
```python
>>> from transformers import MraConfig, MraModel
|
341_2_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraconfig
|
.md
|
>>> # Initializing a Mra uw-madison/mra-base-512-4 style configuration
>>> configuration = MraConfig()
>>> # Initializing a model (with random weights) from the uw-madison/mra-base-512-4 style configuration
>>> model = MraModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
341_2_8
|
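To make the approximation-related arguments above concrete, here is a hedged sketch of a custom configuration (the chosen values are arbitrary and only illustrate the knobs):
```python
>>> from transformers import MraConfig, MraModel

>>> configuration = MraConfig(
...     approx_mode="full",                 # use both low- and high-resolution approximations
...     block_per_row=4,                    # budget for the high-resolution scale
...     initial_prior_diagonal_n_blocks=1,  # keep the diagonal blocks at high resolution
... )
>>> model = MraModel(configuration)  # randomly initialized with the custom settings
```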
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mramodel
|
.md
|
The bare MRA Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
|
341_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mramodel
|
.md
|
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
341_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraformaskedlm
|
.md
|
MRA Model with a `language modeling` head on top.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
|
341_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraformaskedlm
|
.md
|
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
341_4_1
|
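A hedged usage sketch for the masked-LM head (the sentence is illustrative, and it is assumed here that the `uw-madison/mra-base-512-4` checkpoint uses a RoBERTa-style `<mask>` token):
```python
>>> import torch
>>> from transformers import AutoTokenizer, MraForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
>>> model = MraForMaskedLM.from_pretrained("uw-madison/mra-base-512-4")

>>> inputs = tokenizer("Paris is the <mask> of France.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_index].argmax(dim=-1)
```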
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraforsequenceclassification
|
.md
|
MRA Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
|
341_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraforsequenceclassification
|
.md
|
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
341_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraformultiplechoice
|
.md
|
MRA Model with a multiple choice classification head on top (a linear layer on top of
the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
|
341_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraformultiplechoice
|
.md
|
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
341_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mrafortokenclassification
|
.md
|
MRA Model with a token classification head on top (a linear layer on top of
the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
|
341_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mrafortokenclassification
|
.md
|
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
341_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraforquestionanswering
|
.md
|
MRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
|
341_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mra.md
|
https://huggingface.co/docs/transformers/en/model_doc/mra/#mraforquestionanswering
|
.md
|
behavior.
Parameters:
config ([`MraConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
341_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
342_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
|
342_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#pop2piano
|
.md
|
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/spaces/sweetcocoa/pop2piano">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
342_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#overview
|
.md
|
The Pop2Piano model was proposed in [Pop2Piano : Pop Audio-based Piano Cover Generation](https://arxiv.org/abs/2211.00895) by Jongho Choi and Kyogu Lee.
Piano covers of pop music are widely enjoyed, but generating them from music is not a trivial task. It requires great
expertise in playing the piano as well as knowledge of the different characteristics and melodies of a song. With Pop2Piano you
can directly generate a cover from a song's audio waveform. It is the first model to directly generate a piano cover
|
342_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#overview
|
.md
|
can directly generate a cover from a song's audio waveform. It is the first model to directly generate a piano cover
from pop audio without melody and chord extraction modules.
Pop2Piano is an encoder-decoder Transformer model based on [T5](https://arxiv.org/pdf/1910.10683.pdf). The input audio
is transformed to its waveform and passed to the encoder, which transforms it to a latent representation. The decoder
|
342_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#overview
|
.md
|
is transformed to its waveform and passed to the encoder, which transforms it to a latent representation. The decoder
uses these latent representations to generate token ids in an autoregressive way. Each token id corresponds to one of four
different token types: time, velocity, note and 'special'. The token ids are then decoded to their equivalent MIDI file.
The abstract from the paper is the following:
*Piano covers of pop music are enjoyed by many people. However, the
|
342_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#overview
|
.md
|
The abstract from the paper is the following:
*Piano covers of pop music are enjoyed by many people. However, the
task of automatically generating piano covers of pop music is still
understudied. This is partly due to the lack of synchronized
{Pop, Piano Cover} data pairs, which made it challenging to apply
the latest data-intensive deep learning-based methods. To leverage
the power of the data-driven approach, we make a large amount of
paired and synchronized {Pop, Piano Cover} data using an automated
|
342_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#overview
|
.md
|
paired and synchronized {Pop, Piano Cover} data using an automated
pipeline. In this paper, we present Pop2Piano, a Transformer network
that generates piano covers given waveforms of pop music. To the best
of our knowledge, this is the first model to generate a piano cover
directly from pop audio without using melody and chord extraction
modules. We show that Pop2Piano, trained with our dataset, is capable
of producing plausible piano covers.*
|
342_2_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pop2piano.md
|
https://huggingface.co/docs/transformers/en/model_doc/pop2piano/#overview
|
.md
|
modules. We show that Pop2Piano, trained with our dataset, is capable
of producing plausible piano covers.*
This model was contributed by [Susnato Dhar](https://huggingface.co/susnato).
The original code can be found [here](https://github.com/sweetcocoa/pop2piano).
|
342_2_5
|
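To ground the pipeline described above in code, here is a hedged end-to-end sketch (the checkpoint, demo dataset and composer id follow the upstream examples and are assumptions here): the processor turns a raw waveform into model features, `generate` produces token ids, and decoding returns a `pretty_midi` object that can be written to disk.
```python
>>> from datasets import load_dataset
>>> from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor

>>> model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
>>> processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")

>>> ds = load_dataset("sweetcocoa/pop2piano_ci", split="test")
>>> inputs = processor(
...     audio=ds["audio"][0]["array"],
...     sampling_rate=ds["audio"][0]["sampling_rate"],
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(input_features=inputs["input_features"], composer="composer1")
>>> midi = processor.batch_decode(token_ids=generated_ids, feature_extractor_output=inputs)[
...     "pretty_midi_objects"
... ][0]
>>> midi.write("piano_cover.mid")
```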