source: stringclasses (470 values)
url: stringlengths (49 to 167)
file_type: stringclasses (1 value)
chunk: stringlengths (1 to 512)
chunk_id: stringlengths (5 to 9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertmodel
.md
No docstring available for FlaubertModel Methods: forward
163_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertwithlmheadmodel
.md
The Flaubert Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
163_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertwithlmheadmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FlaubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
163_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertwithlmheadmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
163_7_2
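A minimal sketch of the loading pattern described above; the checkpoint name `flaubert/flaubert_base_cased` is an illustrative assumption, not taken from this page:

```python
import torch
from transformers import FlaubertConfig, FlaubertTokenizer, FlaubertWithLMHeadModel

# Building from a config gives a randomly initialized model; no weights are loaded.
config = FlaubertConfig()
random_model = FlaubertWithLMHeadModel(config)

# from_pretrained loads the trained weights (checkpoint name assumed for illustration).
tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased")

inputs = tokenizer("Le chat dort sur le canapé.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```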
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertforsequenceclassification
.md
Flaubert Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
163_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertforsequenceclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FlaubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
163_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertforsequenceclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
163_8_2
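A hedged example of the sequence classification head in use, again assuming the `flaubert/flaubert_base_cased` checkpoint; the classification head itself is newly initialized, so the logits are only meaningful after fine-tuning:

```python
import torch
from transformers import FlaubertTokenizer, FlaubertForSequenceClassification

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
# num_labels sets the size of the classification head (randomly initialized here).
model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=2)

inputs = tokenizer("Ce film est excellent.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
predicted_class = logits.argmax(dim=-1).item()
```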
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertformultiplechoice
.md
Flaubert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
163_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertformultiplechoice
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FlaubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
163_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertformultiplechoice
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
163_9_2
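A sketch of how multiple-choice inputs are typically shaped for this head, assuming the same illustrative `flaubert/flaubert_base_cased` checkpoint; the head expects one encoded sequence per choice, stacked along a `num_choices` dimension:

```python
import torch
from transformers import FlaubertTokenizer, FlaubertForMultipleChoice

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForMultipleChoice.from_pretrained("flaubert/flaubert_base_cased")

prompt = "Il pleut, donc"
choices = ["il prend un parapluie.", "il mange une pomme."]

# Encode (prompt, choice) pairs, then add a batch dimension:
# the final shape is (batch_size, num_choices, sequence_length).
encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_choices)
best_choice = logits.argmax(dim=-1).item()
```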
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertfortokenclassification
.md
Flaubert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
163_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertfortokenclassification
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FlaubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
163_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertfortokenclassification
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
163_10_2
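A brief sketch of the token classification head (e.g. for NER), assuming the same illustrative checkpoint; in practice a fine-tuned NER checkpoint would be used:

```python
import torch
from transformers import FlaubertTokenizer, FlaubertForTokenClassification

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
# The token classification head is randomly initialized until fine-tuned.
model = FlaubertForTokenClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=9)

inputs = tokenizer("Victor Hugo est né à Besançon.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)
predicted_label_ids = logits.argmax(dim=-1)  # one label id per token
```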
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertforquestionansweringsimple
.md
Flaubert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
163_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertforquestionansweringsimple
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`FlaubertConfig`]): Model configuration class with all the parameters of the model.
163_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertforquestionansweringsimple
.md
and behavior. Parameters: config ([`FlaubertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
163_11_2
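A sketch of recovering an answer span from the start/end logits described above, assuming the illustrative `flaubert/flaubert_base_cased` checkpoint (a QA fine-tuned checkpoint would give meaningful spans):

```python
import torch
from transformers import FlaubertTokenizer, FlaubertForQuestionAnsweringSimple

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForQuestionAnsweringSimple.from_pretrained("flaubert/flaubert_base_cased")

question = "Où est né Victor Hugo ?"
context = "Victor Hugo est né à Besançon en 1802."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The predicted span runs from the argmax of the start logits to the argmax of the end logits.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```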
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#flaubertforquestionanswering
.md
No docstring available for FlaubertForQuestionAnswering Methods: forward </pt> <tf>
163_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#tfflaubertmodel
.md
No docstring available for TFFlaubertModel Methods: call
163_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#tfflaubertwithlmheadmodel
.md
No docstring available for TFFlaubertWithLMHeadModel Methods: call
163_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#tfflaubertforsequenceclassification
.md
No docstring available for TFFlaubertForSequenceClassification Methods: call
163_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#tfflaubertformultiplechoice
.md
No docstring available for TFFlaubertForMultipleChoice Methods: call
163_16_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#tfflaubertfortokenclassification
.md
No docstring available for TFFlaubertForTokenClassification Methods: call
163_17_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flaubert.md
https://huggingface.co/docs/transformers/en/model_doc/flaubert/#tfflaubertforquestionansweringsimple
.md
No docstring available for TFFlaubertForQuestionAnsweringSimple Methods: call </tf> </frameworkcontent>
163_18_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/
.md
<!--Copyright 2023 The Intel Team Authors and HuggingFace Inc. team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
164_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/
.md
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
164_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
The text-visual prompting (TVP) framework was proposed in the paper [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding. The abstract from the paper is the following:
164_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
*In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video. Benefiting from fine-grained 3D visual features, the TVG techniques have achieved remarkable progress in recent years. However, the high complexity of 3D convolutional neural networks (CNNs) makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards
164_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards efficient TVG, we propose a novel text-visual prompting (TVP) framework, which incorporates optimized perturbation patterns (that we call ‘prompts’) into both visual inputs and textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of
164_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. Further, we propose a Temporal-Distance IoU (TDIoU) loss for efficient learning of TVG. Experiments on two benchmark datasets, Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and
164_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on ActivityNet Captions) and achieves 5× inference acceleration over TVG using 3D visual features.*
164_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
This research addresses temporal video grounding (TVG), which is the process of pinpointing the start and end times of specific events in a long video, as described by a text sentence. Text-visual prompting (TVP) is proposed to enhance TVG. TVP involves integrating specially designed patterns, known as 'prompts', into both the visual (image-based) and textual (word-based) input components of a TVG model. These prompts provide additional spatial-temporal context, improving the model's ability to accurately
164_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
of a TVG model. These prompts provide additional spatial-temporal context, improving the model's ability to accurately determine event timings in the video. The approach employs 2D visual inputs in place of 3D ones. Although 3D inputs offer more spatial-temporal detail, they are also more time-consuming to process. The use of 2D inputs with the prompting method aims to provide similar levels of context and accuracy more efficiently.
164_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/tvp_architecture.png" alt="drawing" width="600"/> <small> TVP architecture. Taken from the <a href="https://arxiv.org/abs/2303.04995">original paper.</a> </small> This model was contributed by [Jiqing Feng](https://huggingface.co/Jiqing). The original code can be found [here](https://github.com/intel/TVP).
164_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
Prompts are optimized perturbation patterns, which are added to input video frames or text features. Universal set refers to using the exact same set of prompts for any input; these prompts are added consistently to all video frames and text features, regardless of the input's content.
164_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
TVP consists of a visual encoder and a cross-modal encoder. A universal set of visual prompts and text prompts is integrated into the sampled video frames and textual features, respectively. Specifically, a set of different visual prompts is applied in order to the uniformly sampled frames of one untrimmed video. The goal of this model is to incorporate trainable prompts into both visual inputs and textual features for temporal video grounding (TVG) problems.
164_2_1
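A conceptual sketch (not the library's implementation) of what a universal "framepad"-style visual prompt amounts to: a single learnable border pattern added to every sampled frame, regardless of the video's content. The tensor shapes and border width below are illustrative assumptions:

```python
import torch

num_frames, channels, height, width = 8, 3, 224, 224
pad = 16  # hypothetical prompt border width

frames = torch.rand(num_frames, channels, height, width)  # uniformly sampled frames
visual_prompt = torch.zeros(channels, height, width, requires_grad=True)  # universal, trainable

# Restrict the prompt to a border region, mimicking frame padding.
mask = torch.zeros(height, width)
mask[:pad, :] = 1.0
mask[-pad:, :] = 1.0
mask[:, :pad] = 1.0
mask[:, -pad:] = 1.0

# The same prompt is applied to every frame of the clip.
prompted_frames = frames + visual_prompt * mask
```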
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
In principle, one can apply any visual or cross-modal encoder in the proposed architecture. The [`TvpProcessor`] wraps [`BertTokenizer`] and [`TvpImageProcessor`] into a single instance to encode the text and prepare the images, respectively. The following example shows how to run temporal video grounding using [`TvpProcessor`] and [`TvpForVideoGrounding`].

```python
import av
import cv2
import numpy as np
import torch
from huggingface_hub import hf_hub_download
```
164_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
import av
import cv2
import numpy as np
import torch
from huggingface_hub import hf_hub_download

from transformers import AutoProcessor, TvpForVideoGrounding
```
164_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
def pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    '''
    Convert the video from its original fps to the target_fps and decode the video with PyAV decoder.
    Args:
        container (container): pyav container.
        sampling_rate (int): frame sampling rate (interval between two sampled frames).
        num_frames (int): number of frames to sample.
        clip_idx (int): if clip_idx is -1, perform random temporal sampling.
            If clip_idx is larger than -1, uniformly split the video to num_clips
```
164_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
            If clip_idx is larger than -1, uniformly split the video to num_clips
            clips, and select the clip_idx-th video clip.
        num_clips (int): overall number of clips to uniformly sample from the given video.
        target_fps (int): the input video may have different fps, convert it to
            the target video fps before frame sampling.
    Returns:
        frames (tensor): decoded frames from the video. Return None if no video stream was found.
        fps (float): the number of frames per second of the video.
    '''
```
164_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
        video stream was found.
        fps (float): the number of frames per second of the video.
    '''
    video = container.streams.video[0]
    fps = float(video.average_rate)
    clip_size = sampling_rate * num_frames / target_fps * fps
    delta = max(num_frames - clip_size, 0)
    start_idx = delta * clip_idx / num_clips
    end_idx = start_idx + clip_size - 1
    timebase = video.duration / num_frames
    video_start_pts = int(start_idx * timebase)
    video_end_pts = int(end_idx * timebase)
    seek_offset = max(video_start_pts - 1024, 0)
```
164_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
    video_end_pts = int(end_idx * timebase)
    seek_offset = max(video_start_pts - 1024, 0)
    container.seek(seek_offset, any_frame=False, backward=True, stream=video)
    frames = {}
    for frame in container.decode(video=0):
        if frame.pts < video_start_pts:
            continue
        frames[frame.pts] = frame
        if frame.pts > video_end_pts:
            break
    frames = [frames[pts] for pts in sorted(frames)]
    return frames, fps
```
164_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
def decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    '''
    Decode the video and perform temporal sampling.
    Args:
        container (container): pyav container.
        sampling_rate (int): frame sampling rate (interval between two sampled frames).
        num_frames (int): number of frames to sample.
        clip_idx (int): if clip_idx is -1, perform random temporal sampling.
            If clip_idx is larger than -1, uniformly split the video to num_clips
            clips, and select the clip_idx-th video clip.
```
164_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
            If clip_idx is larger than -1, uniformly split the video to num_clips
            clips, and select the clip_idx-th video clip.
        num_clips (int): overall number of clips to uniformly sample from the given video.
        target_fps (int): the input video may have different fps, convert it to
            the target video fps before frame sampling.
    Returns:
        frames (tensor): decoded frames from the video.
    '''
    assert clip_idx >= -2, "Not a valid clip_idx {}".format(clip_idx)
```
164_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
    Returns:
        frames (tensor): decoded frames from the video.
    '''
    assert clip_idx >= -2, "Not a valid clip_idx {}".format(clip_idx)
    frames, fps = pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps)
    clip_size = sampling_rate * num_frames / target_fps * fps
    index = np.linspace(0, clip_size - 1, num_frames)
    index = np.clip(index, 0, len(frames) - 1).astype(np.int64)
    frames = np.array([frames[idx].to_rgb().to_ndarray() for idx in index])
    frames = frames.transpose(0, 3, 1, 2)
```
164_2_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
    frames = np.array([frames[idx].to_rgb().to_ndarray() for idx in index])
    frames = frames.transpose(0, 3, 1, 2)
    return frames
```
164_2_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
file = hf_hub_download(repo_id="Intel/tvp_demo", filename="AK2KG.mp4", repo_type="dataset")
model = TvpForVideoGrounding.from_pretrained("Intel/tvp-base")

decoder_kwargs = dict(
    container=av.open(file, metadata_errors="ignore"),
    sampling_rate=1,
    num_frames=model.config.num_frames,
    clip_idx=0,
    num_clips=1,
    target_fps=3,
)
raw_sampled_frms = decode(**decoder_kwargs)
```
164_2_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
text = "a person is sitting on a bed."
processor = AutoProcessor.from_pretrained("Intel/tvp-base")
model_inputs = processor(
    text=[text], videos=list(raw_sampled_frms), return_tensors="pt", max_text_length=100
)
model_inputs["pixel_values"] = model_inputs["pixel_values"].to(model.dtype)
output = model(**model_inputs)
```
164_2_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
model_inputs["pixel_values"] = model_inputs["pixel_values"].to(model.dtype)
output = model(**model_inputs)

def get_video_duration(filename):
    cap = cv2.VideoCapture(filename)
    if cap.isOpened():
        rate = cap.get(5)
        frame_num = cap.get(7)
        duration = frame_num / rate
        return duration
    return -1

duration = get_video_duration(file)
start, end = processor.post_process_video_grounding(output.logits, duration)
```
164_2_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#usage-tips-and-examples
.md
```python
print(f"The time slot of the video corresponding to the text \"{text}\" is from {start}s to {end}s")
```

Tips:

- This implementation of TVP uses [`BertTokenizer`] to generate text embeddings and a ResNet-50 model to compute visual embeddings.
- A checkpoint for the pre-trained [tvp-base](https://huggingface.co/Intel/tvp-base) model is released.
- Please refer to [Table 2](https://arxiv.org/pdf/2303.04995.pdf) for TVP's performance on the temporal video grounding task.
164_2_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
This is the configuration class to store the configuration of a [`TvpModel`]. It is used to instantiate a Tvp model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Tvp [Intel/tvp-base](https://huggingface.co/Intel/tvp-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
164_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: backbone_config (`PretrainedConfig` or `dict`, *optional*): The configuration of the backbone model. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
164_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`):
164_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`): Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers library. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. distance_loss_weight (`float`, *optional*, defaults to 1.0):
164_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
distance_loss_weight (`float`, *optional*, defaults to 1.0): The weight of the distance loss. duration_loss_weight (`float`, *optional*, defaults to 0.1): The weight of the duration loss. visual_prompter_type (`str`, *optional*, defaults to `"framepad"`): Visual prompt type (the type of padding). Framepad means padding on each frame. Should be one of "framepad" or "framedownpad" visual_prompter_apply (`str`, *optional*, defaults to `"replace"`):
164_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
or "framedownpad" visual_prompter_apply (`str`, *optional*, defaults to `"replace"`): The way of applying the visual prompt. Replace means using the value of the prompt to change the original value in the visual inputs. Should be one of "replace", "add", or "remove". visual_prompt_size (`int`, *optional*, defaults to 96): The size of the visual prompt. max_img_size (`int`, *optional*, defaults to 448): The maximum size of a frame. num_frames (`int`, *optional*, defaults to 48): The number of frames extracted from a video.
164_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
The maximum size of frame. num_frames (`int`, *optional*, defaults to 48): The number of frames extracted from a video. vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the Tvp text model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`TvpModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers. intermediate_size (`int`, *optional*, defaults to 3072):
164_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
Dimensionality of the encoder layers. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (`int`, *optional*, defaults to 512):
164_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). max_grid_col_position_embeddings (`int`, *optional*, defaults to 100): The largest number of horizontal patches from a video frame. max_grid_row_position_embeddings (`int`, *optional*, defaults to 100): The largest number of vertical patches from a video frame.
164_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
The largest number of vertical patches from a video frame. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability of hidden layers. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers.
164_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability of attention layers.
164_3_10
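A minimal sketch of working with the configuration described above; the defaults are expected to mirror the [Intel/tvp-base](https://huggingface.co/Intel/tvp-base) layout, and the overridden fields are just the ones documented here:

```python
from transformers import TvpConfig, TvpModel

# Default configuration and a randomly initialized model built from it.
configuration = TvpConfig()
model = TvpModel(configuration)

# Individual fields can be overridden at construction time.
custom_config = TvpConfig(visual_prompt_size=96, num_frames=48, max_img_size=448)
print(custom_config.visual_prompter_type)  # "framepad" by default
```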
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
Constructs a Tvp image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]` *optional*, defaults to `{"longest_edge": 448}`): Size of the output image after resizing. The longest edge of the image will be resized to
164_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
Size of the output image after resizing. The longest edge of the image will be resized to `size["longest_edge"]` while maintaining the aspect ratio of the original image. Can be overridden by `size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`):
164_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
`preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the image to the specified `crop_size`. Can be overridden by the `do_center_crop` parameter in the `preprocess` method. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 448, "width": 448}`): Size of the image after applying the center crop. Can be overridden by the `crop_size` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
164_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
`preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Defines the scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_pad (`bool`, *optional*, defaults to `True`):
164_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
in the `preprocess` method. do_pad (`bool`, *optional*, defaults to `True`): Whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess` method. pad_size (`Dict[str, int]`, *optional*, defaults to `{"height": 448, "width": 448}`): Size of the image after applying the padding. Can be overridden by the `pad_size` parameter in the `preprocess` method. constant_values (`Union[float, Iterable[float]]`, *optional*, defaults to 0): The fill value to use when padding the image.
164_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
constant_values (`Union[float, Iterable[float]]`, *optional*, defaults to 0): The fill value to use when padding the image. pad_mode (`PaddingMode`, *optional*, defaults to `PaddingMode.CONSTANT`): The padding mode to use. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. do_flip_channel_order (`bool`, *optional*, defaults to `True`):
164_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
method. do_flip_channel_order (`bool`, *optional*, defaults to `True`): Whether to flip the color channels from RGB to BGR. Can be overridden by the `do_flip_channel_order` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
164_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpimageprocessor
.md
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. Methods: preprocess
164_4_7
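A small, hedged example of the preprocessing pipeline described above (resize, center crop, rescale, pad, normalize, RGB-to-BGR flip), using dummy frames; the exact call signature and output shape depend on the checkpoint's settings:

```python
import numpy as np
from transformers import TvpImageProcessor

image_processor = TvpImageProcessor.from_pretrained("Intel/tvp-base")

# A dummy clip of 8 RGB frames in (height, width, channels) layout.
frames = [np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8) for _ in range(8)]

batch = image_processor.preprocess(videos=[frames], return_tensors="pt")
print(batch["pixel_values"].shape)  # expected to be roughly (1, 8, 3, 448, 448)
```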
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpprocessor
.md
Constructs a TVP processor which wraps a TVP image processor and a Bert tokenizer into a single processor. [`TvpProcessor`] offers all the functionalities of [`TvpImageProcessor`] and [`BertTokenizerFast`]. See [`~TvpProcessor.__call__`] and [`~TvpProcessor.decode`] for more information. Args: image_processor ([`TvpImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`BertTokenizerFast`], *optional*): The tokenizer is a required input. Methods: __call__
164_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpmodel
.md
The bare Tvp Model transformer outputting a BaseModelOutputWithPooling object without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`TvpConfig`]): Model configuration class with all the parameters of the model.
164_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpmodel
.md
behavior. Parameters: config ([`TvpConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
164_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpforvideogrounding
.md
Tvp Model with a video grounding head on top computing IoU, distance, and duration loss. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`TvpConfig`]): Model configuration class with all the parameters of the model.
164_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/tvp.md
https://huggingface.co/docs/transformers/en/model_doc/tvp/#tvpforvideogrounding
.md
behavior. Parameters: config ([`TvpConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
164_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
165_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
165_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#overview
.md
The FNet model was proposed in [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. The model replaces the self-attention layer in a BERT model with a Fourier transform which returns only the real parts of the transform. The model is significantly faster than the BERT model because it has fewer parameters and is more memory efficient. The model achieves about 92-97%
165_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#overview
.md
than the BERT model because it has fewer parameters and is more memory efficient. The model achieves about 92-97% of the accuracy of its BERT counterparts on the GLUE benchmark, and trains much faster than the BERT model. The abstract from the paper is the following: *We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with
165_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#overview
.md
self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE
165_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#overview
.md
with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all
165_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#overview
.md
benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.*
165_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#overview
.md
outperform Transformer counterparts.* This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The original code can be found [here](https://github.com/google-research/google-research/tree/master/f_net).
165_1_5
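A conceptual sketch of the token-mixing operation the overview describes (not the library's code): a 2D Fourier transform applied over the sequence and hidden dimensions, keeping only the real part, with no learnable parameters:

```python
import torch

def fourier_mixing(hidden_states: torch.Tensor) -> torch.Tensor:
    # hidden_states: (batch_size, seq_len, hidden_size)
    # FFT over the hidden dimension, then over the sequence dimension; keep the real part.
    return torch.fft.fft(torch.fft.fft(hidden_states, dim=-1), dim=-2).real

x = torch.randn(2, 128, 768)
mixed = fourier_mixing(x)  # same shape as x, no parameters involved
```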
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#usage-tips
.md
The model was trained without an attention mask as it is based on the Fourier transform. The model was trained with a maximum sequence length of 512, which includes pad tokens. Hence, it is highly recommended to use the same maximum sequence length for fine-tuning and inference.
165_2_0
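Following the tip above, a minimal sketch that pads and truncates to the 512-token length FNet was pretrained with, using the released [google/fnet-base](https://huggingface.co/google/fnet-base) checkpoint; the sequence classification head here is newly initialized:

```python
from transformers import AutoTokenizer, FNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForSequenceClassification.from_pretrained("google/fnet-base", num_labels=2)

# Pad/truncate to the same maximum sequence length used during pretraining.
inputs = tokenizer(
    "FNet mixes tokens with Fourier transforms.",
    padding="max_length",
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
outputs = model(**inputs)  # FNet does not use an attention mask
```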
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
165_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
This is the configuration class to store the configuration of a [`FNetModel`]. It is used to instantiate an FNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FNet [google/fnet-base](https://huggingface.co/google/fnet-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
165_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the FNet model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`FNetModel`] or [`TFFNetModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer.
165_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu_new"`):
165_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_new"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. max_position_embeddings (`int`, *optional*, defaults to 512):
165_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 4): The vocabulary size of the `token_type_ids` passed when calling [`FNetModel`] or [`TFFNetModel`]. initializer_range (`float`, *optional*, defaults to 0.02):
165_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. use_tpu_fourier_optimizations (`bool`, *optional*, defaults to `False`): Determines whether to use TPU optimized FFTs. If `True`, the model will favor axis-wise FFTs transforms.
165_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
Determines whether to use TPU optimized FFTs. If `True`, the model will favor axis-wise FFT transforms. Set to `False` for GPU/CPU hardware, in which case n-dimensional FFTs are used. tpu_short_seq_length (`int`, *optional*, defaults to 512): The sequence length that is expected by the model when using TPUs. This will be used to initialize the DFT matrix only when *use_tpu_fourier_optimizations* is set to `True` and the input sequence is shorter than or equal to 4096 tokens. Example: ```python
165_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
equal to 4096 tokens. Example:

```python
>>> from transformers import FNetConfig, FNetModel
```
165_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnetconfig
.md
```python
>>> # Initializing a FNet fnet-base style configuration
>>> configuration = FNetConfig()

>>> # Initializing a model (with random weights) from the fnet-base style configuration
>>> model = FNetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
165_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
Construct an FNet tokenizer. Adapted from [`AlbertTokenizer`]. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer.
165_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (`bool`, *optional*, defaults to `False`): Whether or not to lowercase the input when tokenizing. remove_space (`bool`, *optional*, defaults to `True`): Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (`bool`, *optional*, defaults to `True`): Whether or not to keep accents when tokenizing. unk_token (`str`, *optional*, defaults to `"<unk>"`):
165_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
Whether or not to keep accents when tokenizing. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last
165_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence
165_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (`dict`, *optional*):
165_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
modeling. This is the token which the model will try to predict. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed.
165_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`):
165_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/fnet.md
https://huggingface.co/docs/transformers/en/model_doc/fnet/#fnettokenizer
.md
BPE-dropout. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
165_5_7
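A short sketch of the `sp_model_kwargs` options listed above, enabling subword regularization so that tokenization is sampled rather than deterministic; the checkpoint is the [google/fnet-base](https://huggingface.co/google/fnet-base) one mentioned earlier:

```python
from transformers import FNetTokenizer

tokenizer = FNetTokenizer.from_pretrained(
    "google/fnet-base",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)

# With sampling enabled, repeated calls may yield different segmentations.
print(tokenizer.tokenize("FNet mixes tokens with Fourier transforms."))
```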