source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
[facebook/wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large)
architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*):
Vocabulary size of the Wav2Vec2Conformer model. Defines the number of different tokens that can be
|
278_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Vocabulary size of the Wav2Vec2Conformer model. Defines the number of different tokens that can be
represented by the `input_ids` passed when calling [`Wav2Vec2ConformerModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
|
278_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
278_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
278_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
final_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the final projection layer of [`Wav2Vec2ConformerForCTC`].
|
278_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
The dropout probability for the final projection layer of [`Wav2Vec2ConformerForCTC`].
layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
|
278_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
The norm to be applied to 1D convolutional layers in feature encoder. One of `"group"` for group
normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for output of the feature encoder.
|
278_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for output of the feature encoder.
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for quantized feature encoder states.
|
278_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for quantized feature encoder states.
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
|
278_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 3, 3)`):
|
278_4_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 3, 3)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of *conv_kernel* defines the number of convolutional layers and has to match the length of
*conv_dim*.
conv_bias (`bool`, *optional*, defaults to `False`):
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
|
278_4_11
|
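These three tuples must line up; a minimal sketch (the values below are simply the documented defaults, not a recommendation):
```python
>>> from transformers import Wav2Vec2ConformerConfig

>>> # conv_dim, conv_stride and conv_kernel must all have the same length:
>>> # one entry per 1D convolutional layer in the feature encoder
>>> config = Wav2Vec2ConformerConfig(
...     conv_dim=(512, 512, 512, 512, 512, 512, 512),
...     conv_stride=(5, 2, 2, 2, 2, 2, 2),
...     conv_kernel=(10, 3, 3, 3, 3, 3, 3),
... )
>>> len(config.conv_dim) == len(config.conv_stride) == len(config.conv_kernel)
True
```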
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (`bool`, *optional*, defaults to `True`):
|
278_4_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (`bool`, *optional*, defaults to `True`):
Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
[SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition](https://arxiv.org/abs/1904.08779).
mask_time_prob (`float`, *optional*, defaults to 0.05):
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
|
278_4_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
|
278_4_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
mask_time_min_masks (`int`, *optional*, defaults to 2):
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
|
278_4_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
|
278_4_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is
True`.
mask_feature_length (`int`, *optional*, defaults to 10):
|
278_4_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
True`.
mask_feature_length (`int`, *optional*, defaults to 10):
Length of vector span along the feature axis.
mask_feature_min_masks (`int`, *optional*, defaults to 0):
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
step, irrespective of `mask_feature_prob`. Only relevant if
`mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
num_codevectors_per_group (`int`, *optional*, defaults to 320):
|
278_4_18
|
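As a sketch of how these masking arguments fit together (the values below are illustrative, not the defaults):
```python
>>> from transformers import Wav2Vec2ConformerConfig

>>> # Illustrative SpecAugment setup: mask spans of 10 time steps with
>>> # probability 0.065 per start position, at least 2 masks per example,
>>> # and leave the feature axis unmasked
>>> config = Wav2Vec2ConformerConfig(
...     apply_spec_augment=True,
...     mask_time_prob=0.065,
...     mask_time_length=10,
...     mask_time_min_masks=2,
...     mask_feature_prob=0.0,
... )
```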
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
num_codevectors_per_group (`int`, *optional*, defaults to 320):
Number of entries in each quantization codebook (group).
num_codevector_groups (`int`, *optional*, defaults to 2):
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (`float`, *optional*, defaults to 0.1):
The temperature *kappa* in the contrastive loss.
feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
|
278_4_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
The temperature *kappa* in the contrastive loss.
feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the output of the feature encoder that's used by the quantizer.
num_negatives (`int`, *optional*, defaults to 100):
Number of negative samples for the contrastive loss.
codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the quantized feature vectors.
proj_codevector_dim (`int`, *optional*, defaults to 256):
|
278_4_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Dimensionality of the quantized feature vectors.
proj_codevector_dim (`int`, *optional*, defaults to 256):
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (`float`, *optional*, defaults to 0.1):
The weight of the codebook diversity loss component.
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
|
278_4_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
instance of [`Wav2Vec2ConformerForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of [`Wav2Vec2ConformerForCTC`].
|
278_4_22
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
of [`Wav2Vec2ConformerForCTC`].
use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of [`Wav2Vec2ConformerForSequenceClassification`].
classifier_proj_size (`int`, *optional*, defaults to 256):
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
|
278_4_23
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
|
278_4_24
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
*XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
A tuple of integers defining the dilation factor of each 1D convolutional layer in the *TDNN* module of the
*XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
|
278_4_25
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
*XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
xvector_output_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
add_adapter (`bool`, *optional*, defaults to `False`):
Whether a convolutional network should be stacked on top of the Wav2Vec2Conformer Encoder. Can be very
useful for warm-starting Wav2Vec2Conformer for SpeechEncoderDecoder models.
adapter_kernel_size (`int`, *optional*, defaults to 3):
|
278_4_26
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
adapter_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
adapter_stride (`int`, *optional*, defaults to 2):
Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
num_adapter_layers (`int`, *optional*, defaults to 3):
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
|
278_4_27
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
True`.
output_hidden_size (`int`, *optional*):
Dimensionality of the encoder output layer. If not defined, this defaults to *hidden_size*. Only relevant
if `add_adapter is True`.
position_embeddings_type (`str`, *optional*, defaults to `"relative"`):
Can be set to `relative` or `rotary` for relative or rotary position embeddings respectively. If left
|
278_4_28
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
Can be set to `relative` or `rotary` for relative or rotary position embeddings respectively. If left
`None`, no relative position embedding is applied.
rotary_embedding_base (`int`, *optional*, defaults to 10000):
If `"rotary"` position embeddings are used, defines the size of the embedding base.
max_source_positions (`int`, *optional*, defaults to 5000):
if `"relative"` position embeddings are used, defines the maximum source input positions.
|
278_4_29
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
if `"relative"` position embeddings are used, defines the maximum source input positions.
conv_depthwise_kernel_size (`int`, *optional*, defaults to 31):
Kernel size of convolutional depthwise 1D layer in Conformer blocks.
conformer_conv_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all convolutional layers in Conformer blocks.
Example:
```python
>>> from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel
|
278_4_30
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
|
.md
|
>>> # Initializing a Wav2Vec2Conformer facebook/wav2vec2-conformer-rel-pos-large style configuration
>>> configuration = Wav2Vec2ConformerConfig()
>>> # Initializing a model (with random weights) from the facebook/wav2vec2-conformer-rel-pos-large style configuration
>>> model = Wav2Vec2ConformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
278_4_31
|
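Beyond the default construction shown above, any of the documented arguments can be overridden; a small sketch with arbitrary illustrative values:
```python
>>> from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

>>> # Arbitrary values for illustration: a smaller encoder with rotary position embeddings
>>> custom_config = Wav2Vec2ConformerConfig(
...     hidden_size=512,
...     num_hidden_layers=6,
...     num_attention_heads=8,
...     position_embeddings_type="rotary",
... )
>>> model = Wav2Vec2ConformerModel(custom_config)
```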
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformer-specific-outputs
|
.md
|
models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput
Output type of [`Wav2Vec2ConformerForPreTraining`], with potential hidden states and attentions.
Args:
loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`):
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the [official
paper](https://arxiv.org/pdf/2006.11477.pdf).
|
278_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformer-specific-outputs
|
.md
|
paper](https://arxiv.org/pdf/2006.11477.pdf).
projected_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
Hidden-states of the model projected to *config.proj_codevector_dim* that can be used to predict the masked
projected quantized states.
projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
|
278_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformer-specific-outputs
|
.md
|
projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
Quantized extracted feature vectors projected to *config.proj_codevector_dim* representing the positive
target vectors for contrastive loss.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
278_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
278_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformer-specific-outputs
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
contrastive_loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`):
The contrastive loss (L_m) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
|
278_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformer-specific-outputs
|
.md
|
The contrastive loss (L_m) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
diversity_loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`):
The diversity loss (L_d) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
|
278_5_5
|
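A sketch of inspecting these output fields at inference time. No negatives are sampled here, so the loss fields are `None`; the checkpoint name is the one referenced earlier in this document:
```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForPreTraining

>>> model = Wav2Vec2ConformerForPreTraining.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")

>>> # one second of silence at 16 kHz as dummy input
>>> inputs = feature_extractor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> states = outputs.projected_states  # (batch_size, sequence_length, config.proj_codevector_dim)
```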
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformermodel
|
.md
|
The bare Wav2Vec2Conformer Model transformer outputting raw hidden-states without any specific head on top.
Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
278_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformermodel
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
|
278_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformermodel
|
.md
|
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
278_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforctc
|
.md
|
Wav2Vec2Conformer Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).
Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
278_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforctc
|
.md
|
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
|
278_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforctc
|
.md
|
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
278_7_2
|
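A sketch of CTC inference; the fine-tuned checkpoint name below ([facebook/wav2vec2-conformer-rel-pos-large-960h-ft](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large-960h-ft)) is an assumption, and any Wav2Vec2Conformer CTC checkpoint can be substituted:
```python
>>> import torch
>>> from transformers import AutoProcessor, Wav2Vec2ConformerForCTC

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")
>>> model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")

>>> # one second of dummy mono audio at 16 kHz; use real speech in practice
>>> inputs = processor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
```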
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforsequenceclassification
|
.md
|
Wav2Vec2Conformer Model with a sequence classification head on top (a linear layer over the pooled output) for
tasks like SUPERB Keyword Spotting.
Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
278_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforsequenceclassification
|
.md
|
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
|
278_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforsequenceclassification
|
.md
|
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
278_8_2
|
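A hedged sketch of warm-starting a classifier from the pretrained checkpoint; the classification head is freshly initialized, so the model needs fine-tuning before its predictions are meaningful:
```python
>>> from transformers import Wav2Vec2ConformerForSequenceClassification

>>> # `num_labels` and `use_weighted_layer_sum` are illustrative choices
>>> model = Wav2Vec2ConformerForSequenceClassification.from_pretrained(
...     "facebook/wav2vec2-conformer-rel-pos-large",
...     num_labels=2,
...     use_weighted_layer_sum=True,
... )
```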
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforaudioframeclassification
|
.md
|
Wav2Vec2Conformer Model with a frame classification head on top for tasks like Speaker Diarization.
Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
278_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforaudioframeclassification
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
|
278_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforaudioframeclassification
|
.md
|
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
278_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforxvector
|
.md
|
Wav2Vec2Conformer Model with an XVector feature extraction head on top for tasks like Speaker Verification.
Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
278_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforxvector
|
.md
|
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
|
278_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforxvector
|
.md
|
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
278_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforpretraining
|
.md
|
Wav2Vec2Conformer Model with a quantizer and `VQ` head on top.
Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
|
278_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforpretraining
|
.md
|
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
|
278_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
|
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerforpretraining
|
.md
|
Parameters:
config ([`Wav2Vec2ConformerConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
278_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/
|
.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
279_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
279_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#overview
|
.md
|
The MarkupLM model was proposed in [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document
Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. MarkupLM is BERT, but
applied to HTML pages instead of raw text documents. The model incorporates additional embedding layers to improve
performance, similar to [LayoutLM](layoutlm).
|
279_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#overview
|
.md
|
performance, similar to [LayoutLM](layoutlm).
The model can be used for tasks like question answering on web pages or information extraction from web pages. It obtains
state-of-the-art results on 2 important benchmarks:
- [WebSRC](https://x-lance.github.io/WebSRC/), a dataset for Web-Based Structural Reading Comprehension (a bit like SQuAD but for web pages)
|
279_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#overview
|
.md
|
- [SWDE](https://www.researchgate.net/publication/221299838_From_one_tree_to_a_forest_a_unified_solution_for_structured_web_data_extraction), a dataset
for information extraction from web pages (basically named-entity recognition on web pages)
The abstract from the paper is the following:
*Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document
|
279_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#overview
|
.md
|
*Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document
Understanding (VrDU), especially the fixed-layout documents such as scanned document images. While, there are still a
large number of digital documents where the layout information is not fixed and needs to be interactively and
dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this
|
279_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#overview
|
.md
|
dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this
paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone such as
HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the
pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding
|
279_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#overview
|
.md
|
pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding
tasks. The pre-trained model and code will be publicly available.*
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/markuplm).
|
279_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-tips
|
.md
|
- In addition to `input_ids`, [`~MarkupLMModel.forward`] expects 2 additional inputs, namely `xpath_tags_seq` and `xpath_subs_seq`.
These are the XPATH tags and subscripts respectively for each token in the input sequence.
- One can use [`MarkupLMProcessor`] to prepare all data for the model. Refer to the [usage guide](#usage-markuplmprocessor) for more info.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/markuplm_architecture.jpg"
|
279_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-tips
|
.md
|
alt="drawing" width="600"/>
<small> MarkupLM architecture. Taken from the <a href="https://arxiv.org/abs/2110.08518">original paper.</a> </small>
|
279_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
The easiest way to prepare data for the model is to use [`MarkupLMProcessor`], which internally combines a feature extractor
([`MarkupLMFeatureExtractor`]) and a tokenizer ([`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`]). The feature extractor is
used to extract all nodes and xpaths from the HTML strings, which are then provided to the tokenizer, which turns them into the
token-level inputs of the model (`input_ids` etc.). Note that you can still use the feature extractor and tokenizer separately,
|
279_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
token-level inputs of the model (`input_ids` etc.). Note that you can still use the feature extractor and tokenizer separately,
if you only want to handle one of the two tasks.
```python
from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor
|
279_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
feature_extractor = MarkupLMFeatureExtractor()
tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")
processor = MarkupLMProcessor(feature_extractor, tokenizer)
```
In short, one can provide HTML strings (and possibly additional data) to [`MarkupLMProcessor`],
and it will create the inputs expected by the model. Internally, the processor first uses
[`MarkupLMFeatureExtractor`] to get a list of nodes and corresponding xpaths. The nodes and
|
279_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
[`MarkupLMFeatureExtractor`] to get a list of nodes and corresponding xpaths. The nodes and
xpaths are then provided to [`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`], which converts them
to token-level `input_ids`, `attention_mask`, `token_type_ids`, `xpath_subs_seq`, `xpath_tags_seq`.
Optionally, one can provide node labels to the processor, which are turned into token-level `labels`.
|
279_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
Optionally, one can provide node labels to the processor, which are turned into token-level `labels`.
[`MarkupLMFeatureExtractor`] uses [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/), a Python library for
pulling data out of HTML and XML files, under the hood. Note that you can still use your own parsing solution of
choice, and provide the nodes and xpaths yourself to [`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`].
|
279_3_4
|
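A small sketch of the feature extractor used on its own; it returns the nodes and corresponding xpaths pulled out of an HTML string:
```python
>>> from transformers import MarkupLMFeatureExtractor

>>> feature_extractor = MarkupLMFeatureExtractor()
>>> html_string = "<html><body><h1>Welcome</h1><p>Here is my website.</p></body></html>"
>>> encoding = feature_extractor(html_string)
>>> print(encoding.keys())
dict_keys(['nodes', 'xpaths'])
```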
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
choice, and provide the nodes and xpaths yourself to [`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`].
In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
**Use case 1: web page classification (training, inference) + token classification (inference), parse_html = True**
|
279_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
**Use case 1: web page classification (training, inference) + token classification (inference), parse_html = True**
This is the simplest case, in which the processor will use the feature extractor to get all nodes and xpaths from the HTML.
```python
>>> from transformers import MarkupLMProcessor
|
279_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
>>> html_string = """
... <!DOCTYPE html>
... <html>
... <head>
... <title>Hello world</title>
... </head>
... <body>
... <h1>Welcome</h1>
... <p>Here is my website.</p>
... </body>
... </html>"""
|
279_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> # note that you can also provide all tokenizer parameters here, such as padding and truncation
>>> encoding = processor(html_string, return_tensors="pt")
>>> print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```
**Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False**
|
279_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
```
**Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False**
In case one has already obtained all nodes and xpaths, one doesn't need the feature extractor. In that case, one should
provide the nodes and corresponding xpaths themselves to the processor, and make sure to set `parse_html` to `False`.
```python
>>> from transformers import MarkupLMProcessor
|
279_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
>>> processor.parse_html = False
|
279_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> nodes = ["hello", "world", "how", "are"]
>>> xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
>>> encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors="pt")
>>> print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```
**Use case 3: token classification (training), parse_html=False**
|
279_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
```
**Use case 3: token classification (training), parse_html=False**
For token classification tasks (such as [SWDE](https://paperswithcode.com/dataset/swde)), one can also provide the
corresponding node labels in order to train a model. The processor will then convert these into token-level `labels`.
By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
|
279_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
`ignore_index` of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with `only_label_first_subword` set to `False`.
```python
>>> from transformers import MarkupLMProcessor
|
279_3_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
>>> processor.parse_html = False
|
279_3_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> nodes = ["hello", "world", "how", "are"]
>>> xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
>>> node_labels = [1, 2, 2, 1]
>>> encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt")
>>> print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq', 'labels'])
```
**Use case 4: web page question answering (inference), parse_html=True**
|
279_3_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
```
**Use case 4: web page question answering (inference), parse_html=True**
For question answering tasks on web pages, you can provide a question to the processor. By default, the
processor will use the feature extractor to get all nodes and xpaths, and create [CLS] question tokens [SEP] word tokens [SEP].
```python
>>> from transformers import MarkupLMProcessor
|
279_3_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
>>> html_string = """
... <!DOCTYPE html>
... <html>
... <head>
... <title>Hello world</title>
... </head>
... <body>
... <h1>Welcome</h1>
... <p>My name is Niels.</p>
... </body>
... </html>"""
|
279_3_17
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> question = "What's his name?"
>>> encoding = processor(html_string, questions=question, return_tensors="pt")
>>> print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```
**Use case 5: web page question answering (inference), parse_html=False**
For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted
|
279_3_18
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted
all nodes and xpaths yourself, you can provide them directly to the processor. Make sure to set `parse_html` to `False`.
```python
>>> from transformers import MarkupLMProcessor
|
279_3_19
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
>>> processor.parse_html = False
|
279_3_20
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#usage-markuplmprocessor
|
.md
|
>>> nodes = ["hello", "world", "how", "are"]
>>> xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
>>> question = "What's his name?"
>>> encoding = processor(nodes=nodes, xpaths=xpaths, questions=question, return_tensors="pt")
>>> print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```
|
279_3_21
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#resources
|
.md
|
- [Demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MarkupLM)
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
|
279_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
This is the configuration class to store the configuration of a [`MarkupLMModel`]. It is used to instantiate a
MarkupLM model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the MarkupLM
[microsoft/markuplm-base](https://huggingface.co/microsoft/markuplm-base) architecture.
Configuration objects inherit from [`BertConfig`] and can be used to control the model outputs. Read the
|
279_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
Configuration objects inherit from [`BertConfig`] and can be used to control the model outputs. Read the
documentation from [`BertConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the MarkupLM model. Defines the number of different tokens that can be represented by the
*input_ids* passed to the forward method of [`MarkupLMModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
|
279_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
279_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
279_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
|
279_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed into [`MarkupLMModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
max_tree_id_unit_embeddings (`int`, *optional*, defaults to 1024):
|
279_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
The epsilon used by the layer normalization layers.
max_tree_id_unit_embeddings (`int`, *optional*, defaults to 1024):
The maximum value that the tree id unit embedding might ever use. Typically set this to something large
just in case (e.g., 1024).
max_xpath_tag_unit_embeddings (`int`, *optional*, defaults to 256):
The maximum value that the xpath tag unit embedding might ever use. Typically set this to something large
just in case (e.g., 256).
|
279_5_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
just in case (e.g., 256).
max_xpath_subs_unit_embeddings (`int`, *optional*, defaults to 1024):
The maximum value that the xpath subscript unit embedding might ever use. Typically set this to something
large just in case (e.g., 1024).
tag_pad_id (`int`, *optional*, defaults to 216):
The id of the padding token in the xpath tags.
subs_pad_id (`int`, *optional*, defaults to 1001):
The id of the padding token in the xpath subscripts.
xpath_tag_unit_hidden_size (`int`, *optional*, defaults to 32):
|
279_5_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
The id of the padding token in the xpath subscripts.
xpath_tag_unit_hidden_size (`int`, *optional*, defaults to 32):
The hidden size of each tree id unit. One complete tree index will have a dimension of
(50 * xpath_tag_unit_hidden_size), where 50 is the default *max_depth*.
max_depth (`int`, *optional*, defaults to 50):
The maximum depth in xpath.
Examples:
```python
>>> from transformers import MarkupLMModel, MarkupLMConfig
|
279_5_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmconfig
|
.md
|
>>> # Initializing a MarkupLM microsoft/markuplm-base style configuration
>>> configuration = MarkupLMConfig()
>>> # Initializing a model from the microsoft/markuplm-base style configuration
>>> model = MarkupLMModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Methods: all
|
279_5_9
|
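As with the default example above, the xpath-specific arguments can be overridden; a sketch that simply spells out the documented defaults explicitly:
```python
>>> from transformers import MarkupLMConfig, MarkupLMModel

>>> # Explicitly setting the xpath embedding parameters described above
>>> config = MarkupLMConfig(
...     max_depth=50,
...     xpath_tag_unit_hidden_size=32,
...     max_xpath_tag_unit_embeddings=256,
...     max_xpath_subs_unit_embeddings=1024,
... )
>>> model = MarkupLMModel(config)
```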
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmfeatureextractor
|
.md
|
Constructs a MarkupLM feature extractor. This can be used to get a list of nodes and corresponding xpaths from HTML
strings.
This feature extractor inherits from [`~feature_extraction_utils.PreTrainedFeatureExtractor`] which contains most
of the main methods. Users should refer to this superclass for more information regarding those methods.
Methods: __call__
|
279_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/markuplm.md
|
https://huggingface.co/docs/transformers/en/model_doc/markuplm/#markuplmtokenizer
|
.md
|
Construct a MarkupLM tokenizer. Based on byte-level Byte-Pair-Encoding (BPE). [`MarkupLMTokenizer`] can be used to
turn HTML strings into token-level `input_ids`, `attention_mask`, `token_type_ids`, `xpath_tags_seq` and
`xpath_subs_seq`. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods.
Users should refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
|
279_7_0
|