source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkprocessor | .md | Constructs a Bark processor which wraps a text tokenizer and optional Bark voice presets into a single processor.
Args:
tokenizer ([`PreTrainedTokenizer`]):
An instance of [`PreTrainedTokenizer`].
speaker_embeddings (`Dict[Dict[str]]`, *optional*):
Optional nested speaker embeddings dictionary. The first level contains voice preset names (e.g.
`"en_speaker_4"`). The second level contains `"semantic_prompt"`, `"coarse_prompt"` and `"fine_prompt"` | 134_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkprocessor | .md | `"en_speaker_4"`). The second level contains `"semantic_prompt"`, `"coarse_prompt"` and `"fine_prompt"`
embeddings. The values correspond to the path of the corresponding `np.ndarray`. See
[here](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c) for
a list of `voice_preset_names`.
Methods: all
- __call__ | 134_10_1 |
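A minimal usage sketch for the processor described above. The checkpoint name `"suno/bark-small"` and the preset `"v2/en_speaker_6"` are illustrative assumptions rather than values taken from this chunk:

```python
from transformers import BarkProcessor

# Load a processor that bundles the text tokenizer and the optional voice presets
processor = BarkProcessor.from_pretrained("suno/bark-small")

# Tokenize text and attach the "semantic_prompt"/"coarse_prompt"/"fine_prompt" arrays of a preset
inputs = processor("Hello, my dog is cute", voice_preset="v2/en_speaker_6")
```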
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkmodel | .md | The full Bark model, a text-to-speech model composed of 4 sub-models:
- [`BarkSemanticModel`] (also referred to as the 'text' model): a causal autoregressive transformer model that
takes tokenized text as input and predicts semantic text tokens that capture the meaning of the text.
- [`BarkCoarseModel`] (also referred to as the 'coarse acoustics' model), also a causal autoregressive transformer, | 134_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkmodel | .md | - [`BarkCoarseModel`] (also referred to as the 'coarse acoustics' model), also a causal autoregressive transformer,
which takes as input the output of the previous model. It aims at regressing the first two audio codebooks required
by `encodec`.
- [`BarkFineModel`] (the 'fine acoustics' model), this time a non-causal autoencoder transformer, which iteratively
predicts the last codebooks based on the sum of the previous codebook embeddings. | 134_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkmodel | .md | predicts the last codebooks based on the sum of the previous codebook embeddings.
- having predicted all the codebook channels of the [`EncodecModel`], Bark uses it to decode the output audio
array.
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the
output sound according to a specific predefined voice.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the | 134_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkmodel | .md | This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BarkConfig`]): | 134_11_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkmodel | .md | and behavior.
Parameters:
config ([`BarkConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: generate
- enable_cpu_offload | 134_11_4 |
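A hedged end-to-end sketch of the `generate` and `enable_cpu_offload` methods listed above. The checkpoint and voice preset are assumptions, and CPU offload additionally assumes `accelerate` is installed and a CUDA device is available:

```python
import torch
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark-small")  # checkpoint assumed for illustration
model = BarkModel.from_pretrained("suno/bark-small")

# Optional: offload idle sub-models to CPU between generation passes
# (assumes `accelerate` is installed and a CUDA device is available)
if torch.cuda.is_available():
    model.enable_cpu_offload()

inputs = processor("Hello, my dog is cute", voice_preset="v2/en_speaker_6")
with torch.no_grad():
    audio_array = model.generate(**inputs)  # waveform tensor of shape (batch_size, num_samples)

audio_array = audio_array.cpu().numpy().squeeze()
sampling_rate = model.generation_config.sample_rate  # needed to play or save the generated audio
```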
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticmodel | .md | Bark semantic (or text) model. It shares the same architecture as the coarse model.
It is a GPT-2 like autoregressive model with a language modeling head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 134_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BarkSemanticConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 134_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticmodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 134_12_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarsemodel | .md | Bark coarse acoustics model.
It shares the same architecture as the semantic (or text) model. It is a GPT-2 like autoregressive model with a
language modeling head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 134_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarsemodel | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BarkCoarseConfig`]): | 134_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarsemodel | .md | and behavior.
Parameters:
config ([`BarkCoarseConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 134_13_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfinemodel | .md | Bark fine acoustics model. It is a non-causal GPT-like model with `config.n_codes_total` embedding layers and
language modeling heads, one for each codebook.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 134_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfinemodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`BarkFineConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 134_14_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfinemodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 134_14_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcausalmodel | .md | No docstring available for BarkCausalModel
Methods: forward | 134_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarseconfig | .md | This is the configuration class to store the configuration of a [`BarkCoarseModel`]. It is used to instantiate the model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Bark [suno/bark](https://huggingface.co/suno/bark)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 134_16_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarseconfig | .md | architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
block_size (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (`int`, *optional*, defaults to 10_048): | 134_16_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarseconfig | .md | just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (`int`, *optional*, defaults to 10_048):
Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`BarkCoarseModel`]. Defaults to 10_048 but should be set carefully with
regards to the chosen sub-model.
output_vocab_size (`int`, *optional*, defaults to 10_048): | 134_16_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarseconfig | .md | regards to the chosen sub-model.
output_vocab_size (`int`, *optional*, defaults to 10_048):
Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented
by the `output_ids` when running the forward pass of a [`BarkCoarseModel`]. Defaults to 10_048 but should be set carefully
with regards to the chosen sub-model.
num_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the given sub-model.
num_heads (`int`, *optional*, defaults to 12): | 134_16_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarseconfig | .md | Number of hidden layers in the given sub-model.
num_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the hidden representations in the architecture.
dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (`bool`, *optional*, defaults to `True`): | 134_16_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarseconfig | .md | bias (`bool`, *optional*, defaults to `True`):
Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
>>> from transformers import BarkCoarseConfig, BarkCoarseModel | 134_16_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkcoarseconfig | .md | >>> # Initializing a Bark sub-module style configuration
>>> configuration = BarkCoarseConfig()
>>> # Initializing a model (with random weights) from the suno/bark style configuration
>>> model = BarkCoarseModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all | 134_16_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | This is the configuration class to store the configuration of a [`BarkFineModel`]. It is used to instantiate the model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Bark [suno/bark](https://huggingface.co/suno/bark)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 134_17_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
block_size (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (`int`, *optional*, defaults to 10_048): | 134_17_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (`int`, *optional*, defaults to 10_048):
Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`BarkFineModel`]. Defaults to 10_048 but should be set carefully with
regards to the chosen sub-model.
output_vocab_size (`int`, *optional*, defaults to 10_048): | 134_17_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | regards to the chosen sub-model.
output_vocab_size (`int`, *optional*, defaults to 10_048):
Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented
by the `output_ids` when running the forward pass of a [`BarkFineModel`]. Defaults to 10_048 but should be set carefully
with regards to the chosen sub-model.
num_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the given sub-model.
num_heads (`int`, *optional*, defaults to 12): | 134_17_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | Number of hidden layers in the given sub-model.
num_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the hidden representations in the architecture.
dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (`bool`, *optional*, defaults to `True`): | 134_17_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | bias (`bool`, *optional*, defaults to `True`):
Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
n_codes_total (`int`, *optional*, defaults to 8): | 134_17_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | n_codes_total (`int`, *optional*, defaults to 8):
The total number of audio codebooks predicted. Used in the fine acoustics sub-model.
n_codes_given (`int`, *optional*, defaults to 1):
The number of audio codebooks predicted in the coarse acoustics sub-model. Used in the acoustics
sub-models.
Example:
```python
>>> from transformers import BarkFineConfig, BarkFineModel | 134_17_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barkfineconfig | .md | >>> # Initializing a Bark sub-module style configuration
>>> configuration = BarkFineConfig()
>>> # Initializing a model (with random weights) from the suno/bark style configuration
>>> model = BarkFineModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all | 134_17_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticconfig | .md | This is the configuration class to store the configuration of a [`BarkSemanticModel`]. It is used to instantiate the model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Bark [suno/bark](https://huggingface.co/suno/bark)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 134_18_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticconfig | .md | architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
block_size (`int`, *optional*, defaults to 1024):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (`int`, *optional*, defaults to 10_048): | 134_18_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticconfig | .md | just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (`int`, *optional*, defaults to 10_048):
Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`BarkSemanticModel`]. Defaults to 10_048 but should be set carefully with
regards to the chosen sub-model.
output_vocab_size (`int`, *optional*, defaults to 10_048): | 134_18_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticconfig | .md | regards to the chosen sub-model.
output_vocab_size (`int`, *optional*, defaults to 10_048):
Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented
by the `output_ids` when running the forward pass of a [`BarkSemanticModel`]. Defaults to 10_048 but should be set carefully
with regards to the chosen sub-model.
num_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the given sub-model.
num_heads (`int`, *optional*, defaults to 12): | 134_18_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticconfig | .md | Number of hidden layers in the given sub-model.
num_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the hidden representations in the architecture.
dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (`bool`, *optional*, defaults to `True`): | 134_18_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticconfig | .md | bias (`bool`, *optional*, defaults to `True`):
Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
>>> from transformers import BarkSemanticConfig, BarkSemanticModel | 134_18_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bark.md | https://huggingface.co/docs/transformers/en/model_doc/bark/#barksemanticconfig | .md | >>> # Initializing a Bark sub-module style configuration
>>> configuration = BarkSemanticConfig()
>>> # Initializing a model (with random weights) from the suno/bark style configuration
>>> model = BarkSemanticModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all | 134_18_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/ | .md | <!--Copyright 2023 The HuggingFace and Baidu Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on | 135_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/ | .md | Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 135_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniem | .md | <Tip warning={true}>
This model is in maintenance mode only; we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2.
You can do so by running the following command: `pip install -U transformers==4.40.2`.
</Tip> | 135_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#overview | .md | The ErnieM model was proposed in [ERNIE-M: Enhanced Multilingual Representation by Aligning
Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun,
Hao Tian, Hua Wu, Haifeng Wang.
The abstract from the paper is the following: | 135_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#overview | .md | *Recent studies have demonstrated that pre-trained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for lowresource languages. In this paper, we propose ERNIE-M, a new training method that | 135_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#overview | .md | size of parallel corpora, especially for lowresource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments | 135_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#overview | .md | process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks.* | 135_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#overview | .md | This model was contributed by [Susnato Dhar](https://huggingface.co/susnato). The original code can be found [here](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/paddlenlp/transformers/ernie_m). | 135_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#usage-tips | .md | - Ernie-M is a BERT-like model so it is a stacked Transformer Encoder.
- Instead of using MaskedLM for pretraining (like BERT) the authors used two novel techniques: `Cross-attention Masked Language Modeling` and `Back-translation Masked Language Modeling`. For now these two LMHead objectives are not implemented here.
- It is a multilingual language model.
- Next Sentence Prediction was not used in the pretraining process. A minimal usage sketch follows below. | 135_3_0 |
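A minimal, untested sketch of a plain forward pass, assuming the `susnato/ernie-m-base_pytorch` checkpoint referenced in the config section below, `sentencepiece` installed, and the `transformers==4.40.2` pin from the maintenance note above:

```python
import torch
from transformers import AutoTokenizer, ErnieMModel

checkpoint = "susnato/ernie-m-base_pytorch"  # checkpoint assumed from the config docs below
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = ErnieMModel.from_pretrained(checkpoint)

inputs = tokenizer("ERNIE-M is a multilingual encoder.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```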
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#resources | .md | - [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Multiple choice task guide](../tasks/multiple_choice) | 135_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | This is the configuration class to store the configuration of an [`ErnieMModel`]. It is used to instantiate an
Ernie-M model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the `Ernie-M`
[susnato/ernie-m-base_pytorch](https://huggingface.co/susnato/ernie-m-base_pytorch) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 135_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 250002):
Vocabulary size of `input_ids` in [`ErnieMModel`]. Also the vocab size of the token embedding matrix.
Defines the number of different tokens that can be represented by the `input_ids` passed when calling
[`ErnieMModel`].
hidden_size (`int`, *optional*, defaults to 768): | 135_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | [`ErnieMModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the embedding layer, encoder layers and pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072): | 135_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to feed-forward layers are
firstly projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically
intermediate_size is larger than hidden_size.
hidden_act (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function in the feed-forward layer. `"gelu"`, `"relu"` and any other torch | 135_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | The non-linear activation function in the feed-forward layer. `"gelu"`, `"relu"` and any other torch
supported activation functions are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability used in `MultiHeadAttention` in all encoder layers to drop some attention target. | 135_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | The dropout probability used in `MultiHeadAttention` in all encoder layers to drop some attention target.
max_position_embeddings (`int`, *optional*, defaults to 514):
The maximum number of positions supported by the position encodings, which dictates the maximum supported length
of an input sequence.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the normal initializer for initializing all weight matrices. | 135_5_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | pad_token_id (`int`, *optional*, defaults to 1):
The index of the padding token in the token vocabulary.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
act_dropout (`float`, *optional*, defaults to 0.0):
This dropout probability is used in `ErnieMEncoderLayer` after activation.
A normal_initializer initializes weight matrices as normal distributions. See | 135_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemconfig | .md | A normal_initializer initializes weight matrices as normal distributions. See
`ErnieMPretrainedModel._init_weights()` for how weights are initialized in `ErnieMModel`. | 135_5_7 |
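The Bark configuration sections in this document each close with a short instantiation example; here is a matching sketch for `ErnieMConfig` (following the same pattern, not copied from the original docstring):

```python
>>> from transformers import ErnieMConfig, ErnieMModel

>>> # Initializing a configuration with ernie-m-base style defaults
>>> configuration = ErnieMConfig()

>>> # Initializing a model (with random weights) from that configuration
>>> model = ErnieMModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```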
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemtokenizer | .md | Constructs an Ernie-M tokenizer. It uses the `sentencepiece` library to split words into sub-words.
Args:
sentencepiece_model_file (`str`):
The file path of sentencepiece model.
vocab_file (`str`, *optional*):
The file path of the vocabulary.
do_lower_case (`str`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
A special token representing the `unknown (out-of-vocabulary)` token. An unknown token is set to be | 135_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemtokenizer | .md | A special token representing the `unknown (out-of-vocabulary)` token. An unknown token is set to be
`unk_token` in order to be converted to an ID.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
A special token separating two different sentences in the same input.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
A special token used to make arrays of tokens the same size for batching purposes.
cls_token (`str`, *optional*, defaults to `"[CLS]"`): | 135_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemtokenizer | .md | cls_token (`str`, *optional*, defaults to `"[CLS]"`):
A special token used for sequence classification. It is the last token of the sequence when built with
special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
A special token representing a masked token. This is the token used in the masked language modeling task
which the model tries to predict the original unmasked ones.
Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences | 135_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemtokenizer | .md | Methods: build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary | 135_6_3 |
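A hedged sketch of the special-token helpers listed above; the checkpoint name is an assumption, and the exact special-token layout is determined by the tokenizer itself:

```python
from transformers import ErnieMTokenizer

# Checkpoint assumed for illustration; requires sentencepiece
tokenizer = ErnieMTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")

ids_a = tokenizer.encode("How are you?", add_special_tokens=False)
ids_b = tokenizer.encode("I am fine, thanks.", add_special_tokens=False)

# Wrap a sequence pair with the tokenizer's [CLS]/[SEP] special tokens
with_special_tokens = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

# Token type ids matching the layout produced above
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)

# Binary mask marking which positions are special tokens
special_tokens_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
```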
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemmodel | .md | The bare ErnieM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use | 135_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemmodel | .md | etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 135_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemmodel | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 135_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforsequenceclassification | .md | ErnieM Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use | 135_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforsequenceclassification | .md | etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 135_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforsequenceclassification | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 135_8_2 |
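A hedged sketch of sequence classification with this head. The checkpoint and `num_labels` are assumptions, and a head loaded from the base checkpoint is randomly initialized, so its predictions are only meaningful after fine-tuning:

```python
import torch
from transformers import AutoTokenizer, ErnieMForSequenceClassification

checkpoint = "susnato/ernie-m-base_pytorch"  # assumed base checkpoint; the classification head is newly initialized
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = ErnieMForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, num_labels)

predicted_class_id = logits.argmax(dim=-1).item()
```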
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemformultiplechoice | .md | ErnieM Model with a multiple choice classification head on top (a linear layer on top of
the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use | 135_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemformultiplechoice | .md | etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 135_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemformultiplechoice | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 135_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemfortokenclassification | .md | ErnieM Model with a token classification head on top (a linear layer on top of
the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use | 135_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemfortokenclassification | .md | etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the | 135_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemfortokenclassification | .md | Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 135_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforquestionanswering | .md | ErnieM Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 135_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforquestionanswering | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model. | 135_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforquestionanswering | .md | behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 135_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforinformationextraction | .md | ErnieMForInformationExtraction is an Ernie-M Model with two linear layers on top of the hidden-states output to
compute `start_prob` and `end_prob`, designed for Universal Information Extraction.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 135_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforinformationextraction | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model. | 135_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ernie_m.md | https://huggingface.co/docs/transformers/en/model_doc/ernie_m/#erniemforinformationextraction | .md | behavior.
Parameters:
config ([`ErnieMConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 135_12_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 136_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 136_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | The SegGPT model was proposed in [SegGPT: Segmenting Everything In Context](https://arxiv.org/abs/2304.03284) by Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, Tiejun Huang. SegGPT employs a decoder-only Transformer that can generate a segmentation mask given an input image, a prompt image and its corresponding prompt mask. The model achieves remarkable one-shot results with 56.1 mIoU on COCO-20 and 85.6 mIoU on FSS-1000.
The abstract from the paper is the following: | 136_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | *We present SegGPT, a generalist model for segmenting everything in context. We unify various segmentation tasks into a generalist in-context learning framework that accommodates different kinds of segmentation data by transforming them into the same format of images. The training of SegGPT is formulated as an in-context coloring problem with random color mapping for each data sample. The objective is to accomplish diverse tasks according to the context, rather than relying on specific colors. After | 136_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | sample. The objective is to accomplish diverse tasks according to the context, rather than relying on specific colors. After training, SegGPT can perform arbitrary segmentation tasks in images or videos via in-context inference, such as object instance, stuff, part, contour, and text. SegGPT is evaluated on a broad range of tasks, including few-shot semantic segmentation, video object segmentation, semantic segmentation, and panoptic segmentation. Our results show strong capabilities in segmenting | 136_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | video object segmentation, semantic segmentation, and panoptic segmentation. Our results show strong capabilities in segmenting in-domain and out-of* | 136_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | Tips:
- One can use [`SegGptImageProcessor`] to prepare image input, prompt and mask to the model.
- One can either use segmentation maps or RGB images as prompt masks. If using the latter make sure to set `do_convert_rgb=False` in the `preprocess` method.
- It's highly advisable to pass `num_labels` when using `segmentation_maps` (not considering background) during preprocessing and postprocessing with [`SegGptImageProcessor`] for your use case. | 136_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | - When doing inference with [`SegGptForImageSegmentation`] if your `batch_size` is greater than 1 you can use feature ensemble across your images by passing `feature_ensemble=True` in the forward method.
Here's how to use the model for one-shot semantic segmentation:
```python
import torch
from datasets import load_dataset
from transformers import SegGptImageProcessor, SegGptForImageSegmentation | 136_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | checkpoint = "BAAI/seggpt-vit-large"
image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
model = SegGptForImageSegmentation.from_pretrained(checkpoint)
dataset_id = "EduardoPacheco/FoodSeg103"
ds = load_dataset(dataset_id, split="train")
# Number of labels in FoodSeg103 (not including background)
num_labels = 103
image_input = ds[4]["image"]
ground_truth = ds[4]["label"]
image_prompt = ds[29]["image"]
mask_prompt = ds[29]["label"] | 136_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | image_input = ds[4]["image"]
ground_truth = ds[4]["label"]
image_prompt = ds[29]["image"]
mask_prompt = ds[29]["label"]
inputs = image_processor(
images=image_input,
prompt_images=image_prompt,
segmentation_maps=mask_prompt,
num_labels=num_labels,
return_tensors="pt"
)
with torch.no_grad():
outputs = model(**inputs) | 136_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#overview | .md | with torch.no_grad():
outputs = model(**inputs)
target_sizes = [image_input.size[::-1]]
mask = image_processor.post_process_semantic_segmentation(outputs, target_sizes, num_labels=num_labels)[0]
```
This model was contributed by [EduardoPacheco](https://huggingface.co/EduardoPacheco).
The original code can be found [here](https://github.com/baaivision/Painter/tree/main). | 136_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | This is the configuration class to store the configuration of a [`SegGptModel`]. It is used to instantiate a SegGPT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the SegGPT
[BAAI/seggpt-vit-large](https://huggingface.co/BAAI/seggpt-vit-large) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 136_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 1024):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 16): | 136_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 16):
Number of attention heads for each attention layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0): | 136_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | `"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers. | 136_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
image_size (`List[int]`, *optional*, defaults to `[896, 448]`):
The size (resolution) of each image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the queries, keys and values.
mlp_dim (`int`, *optional*): | 136_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | Whether to add a bias to the queries, keys and values.
mlp_dim (`int`, *optional*):
The dimensionality of the MLP layer in the Transformer encoder. If unset, defaults to
`hidden_size` * 4.
drop_path_rate (`float`, *optional*, defaults to 0.1):
The drop path rate for the dropout layers.
pretrain_image_size (`int`, *optional*, defaults to 224):
The pretrained size of the absolute position embeddings.
decoder_hidden_size (`int`, *optional*, defaults to 64):
Hidden size for decoder. | 136_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | decoder_hidden_size (`int`, *optional*, defaults to 64):
Hidden size for decoder.
use_relative_position_embeddings (`bool`, *optional*, defaults to `True`):
Whether to use relative position embeddings in the attention layers.
merge_index (`int`, *optional*, defaults to 2):
The index of the encoder layer to merge the embeddings.
intermediate_hidden_state_indices (`List[int]`, *optional*, defaults to `[5, 11, 17, 23]`):
The indices of the encoder layers which we store as features for the decoder. | 136_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | The indices of the encoder layers which we store as features for the decoder.
beta (`float`, *optional*, defaults to 0.01):
Regularization factor for SegGptLoss (smooth-l1 loss).
Example:
```python
>>> from transformers import SegGptConfig, SegGptModel | 136_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptconfig | .md | >>> # Initializing a SegGPT seggpt-vit-large style configuration
>>> configuration = SegGptConfig()
>>> # Initializing a model (with random weights) from the seggpt-vit-large style configuration
>>> model = SegGptModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 136_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seggpt.md | https://huggingface.co/docs/transformers/en/model_doc/seggpt/#seggptimageprocessor | .md | Constructs a SegGpt image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `(size["height"],
size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method.
size (`dict`, *optional*, defaults to `{"height": 448, "width": 448}`):
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess`
method. | 136_3_0 |
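A brief sketch of instantiating the image processor with the arguments documented above; the checkpoint name is taken from the SegGPT example earlier in this document:

```python
from transformers import SegGptImageProcessor

# Load the processor configuration shipped with the checkpoint
image_processor = SegGptImageProcessor.from_pretrained("BAAI/seggpt-vit-large")

# Or instantiate it directly, overriding the documented defaults
custom_processor = SegGptImageProcessor(do_resize=True, size={"height": 448, "width": 448})
```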