# SAM

## Overview
This model was contributed by [ybelkada](https://huggingface.co/ybelkada) and [ArthurZ](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/facebookresearch/segment-anything).
Below is an example of how to run mask generation given an image and a 2D point:
```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```
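`post_process_masks` returns one entry per image, and `iou_scores` holds the model's own quality estimate for each candidate mask. As a hedged follow-up to the example above (the tensor shapes assume a single image with a single input point and may differ across versions), the highest-scoring candidate can be selected like this:
```python
# Continuing the example above: keep the candidate mask with the best predicted IoU.
# masks[0] has shape (num_prompts, num_candidates, height, width) for the first image;
# scores has shape (batch_size, num_prompts, num_candidates).
best_idx = scores[0, 0].argmax().item()
best_mask = masks[0][0, best_idx]  # boolean mask of shape (height, width)
```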
You can also pass your own segmentation maps to the processor alongside the input images; both are prepared together to be fed to the model.
```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
mask_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
segmentation_map = Image.open(requests.get(mask_url, stream=True).raw).convert("1")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, segmentation_maps=segmentation_map, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SAM.
- [Demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb) for using the model.
- [Demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb) for using the automatic mask generation pipeline.
- [Demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Run_inference_with_MedSAM_using_HuggingFace_Transformers.ipynb) for inference with MedSAM, a fine-tuned version of SAM on the medical domain. 🌎
- [Demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) for fine-tuning the model on custom data. 🌎
## SlimSAM
SlimSAM, a pruned version of SAM, was proposed in [0.1% Data Makes Segment Anything Slim](https://arxiv.org/abs/2312.05284) by Zigeng Chen et al. SlimSAM reduces the size of the SAM models considerably while maintaining the same performance.
Checkpoints can be found on the [hub](https://huggingface.co/models?other=slimsam), and they can be used as a drop-in replacement of SAM.
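As a minimal sketch of that drop-in swap (the checkpoint id below, `Zigeng/SlimSAM-uniform-50`, is one of the published SlimSAM variants on the Hub; pick whichever fits your budget), only the checkpoint name changes:
```python
from transformers import SamModel, SamProcessor

# Same classes, same API as SAM; only the checkpoint id differs
model = SamModel.from_pretrained("Zigeng/SlimSAM-uniform-50")
processor = SamProcessor.from_pretrained("Zigeng/SlimSAM-uniform-50")
```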
## Grounded SAM
One can combine [Grounding DINO](grounding-dino) with SAM for text-based mask generation as introduced in [Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks](https://arxiv.org/abs/2401.14159). You can refer to this [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb) 🌍 for details.
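A condensed sketch of that pipeline is shown below; the Grounding DINO checkpoint id, the text prompt, and the default detection thresholds are assumptions for illustration, so refer to the notebook for the complete recipe.
```python
import torch
import requests
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# Step 1: detect boxes for a text query with Grounding DINO
dino_id = "IDEA-Research/grounding-dino-tiny"
dino_processor = AutoProcessor.from_pretrained(dino_id)
dino_model = AutoModelForZeroShotObjectDetection.from_pretrained(dino_id).to(device)
dino_inputs = dino_processor(images=image, text="a car.", return_tensors="pt").to(device)
with torch.no_grad():
    dino_outputs = dino_model(**dino_inputs)
results = dino_processor.post_process_grounded_object_detection(
    dino_outputs, dino_inputs.input_ids, target_sizes=[image.size[::-1]]
)

# Step 2: prompt SAM with the detected boxes
sam_model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
input_boxes = [results[0]["boxes"].tolist()]  # one list of [x0, y0, x1, y1] boxes per image
sam_inputs = sam_processor(image, input_boxes=input_boxes, return_tensors="pt").to(device)
with torch.no_grad():
    sam_outputs = sam_model(**sam_inputs)
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks.cpu(), sam_inputs["original_sizes"].cpu(), sam_inputs["reshaped_input_sizes"].cpu()
)
```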
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grounded_sam.png"
alt="drawing" width="900"/>
<small> Grounded SAM overview. Taken from the <a href="https://github.com/IDEA-Research/Grounded-Segment-Anything">original repository</a>. </small>
## SamConfig
[`SamConfig`] is the configuration class to store the configuration of a [`SamModel`]. It is used to instantiate a
SAM model according to the specified arguments, defining the vision model, prompt-encoder model and mask decoder
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the
SAM-ViT-H [facebook/sam-vit-huge](https://huggingface.co/facebook/sam-vit-huge) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (Union[`dict`, `SamVisionConfig`], *optional*):
Dictionary of configuration options used to initialize [`SamVisionConfig`].
prompt_encoder_config (Union[`dict`, `SamPromptEncoderConfig`], *optional*):
Dictionary of configuration options used to initialize [`SamPromptEncoderConfig`].
mask_decoder_config (Union[`dict`, `SamMaskDecoderConfig`], *optional*):
Dictionary of configuration options used to initialize [`SamMaskDecoderConfig`].
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import (
... SamVisionConfig,
... SamPromptEncoderConfig,
... SamMaskDecoderConfig,
... SamModel,
... )
>>> # Initializing a SamConfig with `"facebook/sam-vit-huge"` style configuration
>>> configuration = SamConfig()
>>> # Initializing a SamModel (with random weights) from the `"facebook/sam-vit-huge"` style configuration
>>> model = SamModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a SamConfig from a SamVisionConfig, SamPromptEncoderConfig, and SamMaskDecoderConfig
>>> # Initializing SAM vision, prompt encoder, and mask decoder configurations
>>> vision_config = SamVisionConfig()
>>> prompt_encoder_config = SamPromptEncoderConfig()
>>> mask_decoder_config = SamMaskDecoderConfig()
>>> config = SamConfig(vision_config, prompt_encoder_config, mask_decoder_config)
```
## SamVisionConfig
This is the configuration class to store the configuration of a [`SamVisionModel`]. It is used to instantiate a SAM
vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the SAM ViT-H
[facebook/sam-vit-huge](https://huggingface.co/facebook/sam-vit-huge) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
output_channels (`int`, *optional*, defaults to 256):
Dimensionality of the output channels in the Patch Encoder.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (`int`, *optional*, defaults to 3):
Number of channels in the input image.
image_size (`int`, *optional*, defaults to 1024):
Expected resolution. Target size of the resized input image.
patch_size (`int`, *optional*, defaults to 16):
Size of the patches to be extracted from the input image.
hidden_act (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string).
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 1e-10):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to query, key, value projections.
mlp_ratio (`float`, *optional*, defaults to 4.0):
Ratio of mlp hidden dim to embedding dim.
use_abs_pos (`bool`, *optional*, defaults to `True`):
Whether to use absolute position embedding.
use_rel_pos (`bool`, *optional*, defaults to `True`):
Whether to use relative position embedding.
window_size (`int`, *optional*, defaults to 14):
Window size for relative position.
global_attn_indexes (`List[int]`, *optional*, defaults to `[2, 5, 8, 11]`):
The indexes of the global attention layers.
num_pos_feats (`int`, *optional*, defaults to 128):
The dimensionality of the position embedding.
mlp_dim (`int`, *optional*):
The dimensionality of the MLP layer in the Transformer encoder. If `None`, defaults to `mlp_ratio *
hidden_size`.
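As a hedged illustration of how these arguments fit together (all values below are arbitrary and chosen only for the example), a smaller-than-default vision encoder could be configured like this:
```python
from transformers import SamVisionConfig

# Arbitrary example values: a shallower, narrower encoder than the ViT-H default
vision_config = SamVisionConfig(
    hidden_size=384,
    num_hidden_layers=6,
    num_attention_heads=6,
    global_attn_indexes=[2, 5],  # layers that use global rather than windowed attention
)
```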
## SamMaskDecoderConfig
This is the configuration class to store the configuration of a [`SamMaskDecoder`]. It is used to instantiate a SAM
mask decoder according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the SAM ViT-H
[facebook/sam-vit-huge](https://huggingface.co/facebook/sam-vit-huge) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 256):
Dimensionality of the hidden states.
hidden_act (`str`, *optional*, defaults to `"relu"`):
The non-linear activation function used inside the `SamMaskDecoder` module.
mlp_dim (`int`, *optional*, defaults to 2048):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (`int`, *optional*, defaults to 2):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
attention_downsample_rate (`int`, *optional*, defaults to 2):
The downsampling rate of the attention layer.
num_multimask_outputs (`int`, *optional*, defaults to 3):
The number of outputs from the `SamMaskDecoder` module. In the Segment Anything paper, this is set to 3.
iou_head_depth (`int`, *optional*, defaults to 3):
The number of layers in the IoU head module.
iou_head_hidden_dim (`int`, *optional*, defaults to 256):
The dimensionality of the hidden states in the IoU head module.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
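As a brief sketch, `num_multimask_outputs` is the knob behind the three candidate masks (and three IoU scores) that `SamModel` returns per prompt in the examples above; the values below are simply the paper's defaults:
```python
from transformers import SamMaskDecoderConfig

# Segment Anything defaults: 3 candidate masks per prompt, scored by a 3-layer IoU head
mask_decoder_config = SamMaskDecoderConfig(num_multimask_outputs=3, iou_head_depth=3)
```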
## SamPromptEncoderConfig
This is the configuration class to store the configuration of a [`SamPromptEncoder`]. The [`SamPromptEncoder`]
module is used to encode the input 2D points and bounding boxes. Instantiating a configuration with the defaults
will yield a similar configuration to that of the SAM ViT-H
[facebook/sam-vit-huge](https://huggingface.co/facebook/sam-vit-huge) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 256):
Dimensionality of the hidden states.
image_size (`int`, *optional*, defaults to 1024):
The expected output resolution of the image.
patch_size (`int`, *optional*, defaults to 16):
The size (resolution) of each patch.
mask_input_channels (`int`, *optional*, defaults to 16):
The number of channels to be fed to the `MaskDecoder` module.
num_point_embeddings (`int`, *optional*, defaults to 4):
The number of point embeddings to be used.
hidden_act (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function in the encoder and pooler.
## SamProcessor
Constructs a SAM processor which wraps a SAM image processor and a 2D points and bounding boxes processor into a
single processor.
[`SamProcessor`] offers all the functionalities of [`SamImageProcessor`]. See the docstring of
[`~SamImageProcessor.__call__`] for more information.
Args:
image_processor (`SamImageProcessor`):
An instance of [`SamImageProcessor`]. The image processor is a required input.
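A brief sketch of typical usage (the box coordinates below are made up for illustration): the processor takes images together with point and/or box prompts and returns model-ready tensors.
```python
import requests
from PIL import Image
from transformers import SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# One image with one box prompt in (x0, y0, x1, y1) pixel coordinates
inputs = processor(image, input_boxes=[[[650, 900, 1000, 1250]]], return_tensors="pt")
print(inputs.keys())  # expect pixel_values, input_boxes, original_sizes, reshaped_input_sizes
```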
## SamImageProcessor
Constructs a SAM image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the
`do_resize` parameter in the `preprocess` method.
size (`dict`, *optional*, defaults to `{"longest_edge": 1024}`):
Size of the output image after resizing. Resizes the longest edge of the image to match
`size["longest_edge"]` while maintaining the aspect ratio. Can be overridden by the `size` parameter in the
`preprocess` method.
mask_size (`dict`, *optional*, defaults to `{"longest_edge": 256}`):
Size of the output segmentation map after resizing. Resizes the longest edge of the image to match
`size["longest_edge"]` while maintaining the aspect ratio. Can be overridden by the `mask_size` parameter
in the `preprocess` method.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the
`preprocess` method.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
`do_rescale` parameter in the `preprocess` method.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image. Only has an effect if `do_rescale` is set to `True`. Can be
overridden by the `rescale_factor` parameter in the `preprocess` method.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
method.
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
do_pad (`bool`, *optional*, defaults to `True`):
Whether to pad the image to the specified `pad_size`. Can be overridden by the `do_pad` parameter in the
`preprocess` method.
pad_size (`dict`, *optional*, defaults to `{"height": 1024, "width": 1024}`):
Size of the output image after padding. Can be overridden by the `pad_size` parameter in the `preprocess`
method.
mask_pad_size (`dict`, *optional*, defaults to `{"height": 256, "width": 256}`):
Size of the output segmentation map after padding. Can be overridden by the `mask_pad_size` parameter in
the `preprocess` method.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
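A short sketch of the image processor on its own, using the defaults described above (resize the longest edge to 1024, rescale, normalize, then pad to 1024x1024); the dummy image is just for illustration:
```python
import numpy as np
from PIL import Image
from transformers import SamImageProcessor

image_processor = SamImageProcessor()  # defaults: longest_edge=1024, pad to 1024x1024
image = Image.fromarray(np.zeros((480, 640, 3), dtype=np.uint8))  # dummy RGB image

encoded = image_processor(images=image, return_tensors="pt")
print(encoded["pixel_values"].shape)  # torch.Size([1, 3, 1024, 1024])
```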
## SamModel
Segment Anything Model (SAM) for generating segmentation masks, given an input image and optional 2D points and bounding boxes.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SamConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## TFSamModel
No docstring available for TFSamModel
Methods: call
# Falcon

## Overview
Falcon is a class of causal decoder-only models built by [TII](https://www.tii.ae/). The largest Falcon checkpoints
have been trained on >=1T tokens of text, with a particular emphasis on the [RefinedWeb](https://arxiv.org/abs/2306.01116)
corpus. They are made available under the Apache 2.0 license.
Falcon's architecture is modern and optimized for inference, with multi-query attention and support for efficient
attention variants like `FlashAttention`. Both 'base' models, trained only as causal language models, and
'instruct' models, which have received further fine-tuning, are available.
Falcon models are (as of 2023) some of the largest and most powerful open-source language models,
and consistently rank highly in the [OpenLLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
## Converting custom checkpoints
<Tip>
Falcon models were initially added to the Hugging Face Hub as custom code checkpoints. However, Falcon is now fully
supported in the Transformers library. If you fine-tuned a model from a custom code checkpoint, we recommend converting
your checkpoint to the new in-library format, as this should give significant improvements to stability and
performance, especially for generation, as well as removing the need to use `trust_remote_code=True`!
</Tip>
You can convert custom code checkpoints to full Transformers checkpoints using the `convert_custom_code_checkpoint.py`
script located in the
[Falcon model directory](https://github.com/huggingface/transformers/tree/main/src/transformers/models/falcon)
of the Transformers library. To use this script, simply call it with
`python convert_custom_code_checkpoint.py --checkpoint_dir my_model`. This will convert your checkpoint in-place, and
you can immediately load it from the directory afterwards with e.g. `from_pretrained()`. If your model hasn't been
uploaded to the Hub, we recommend making a backup before attempting the conversion, just in case!
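After the in-place conversion, the checkpoint loads like any native Transformers model; a minimal sketch (the `my_model` directory follows the command above):
```python
from transformers import AutoTokenizer, FalconForCausalLM

# Load the converted, in-library checkpoint; trust_remote_code is no longer needed
model = FalconForCausalLM.from_pretrained("my_model")
tokenizer = AutoTokenizer.from_pretrained("my_model")
```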
## FalconConfig
This is the configuration class to store the configuration of a [`FalconModel`]. It is used to instantiate a Falcon
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the
[tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 65024):
Vocabulary size of the Falcon model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`FalconModel`].
hidden_size (`int`, *optional*, defaults to 4544):
Dimension of the hidden representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 71):
Number of attention heads for each attention layer in the Transformer encoder.
num_ln_in_parallel_attn (`int`, *optional*):
Set to 2 if separate layer norms are to be used for the MLP and the attention output when using parallel
attention, otherwise, 1.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (`bool`, *optional*, defaults to `True`):
Whether the model should return the last key/values attentions (not used by all models). Only relevant if
`config.is_decoder=True`.
hidden_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for MLP layers.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for attention layers.
num_kv_heads (`int`, *optional*):
Number of key-value heads to use per attention layer. If unset, defaults to the same value as
`num_attention_heads`.
alibi (`bool`, *optional*, defaults to `False`):
Whether to use ALiBi positional biases during self-attention.
new_decoder_architecture (`bool`, *optional*, defaults to `False`):
Whether to use the new (Falcon-40B) decoder architecture. If `True`, the `multi_query` and `parallel_attn`
arguments are ignored, as the new decoder always uses parallel attention.
multi_query (`bool`, *optional*, defaults to `True`):
Whether to use multi-query attention in the decoder. Ignored when `new_decoder_architecture` is `True`.
parallel_attn (`bool`, *optional*, defaults to `True`):
Whether to compute attention in parallel with the feedforward layer. If False, they are consecutive
instead, as in the original Transformer architecture. Ignored when `new_decoder_architecture` is `True`.
bias (`bool`, *optional*, defaults to `False`):
Whether to use bias on Linear layers.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with, when `alibi` is `False`. Pretrained
Falcon models with RoPE support up to 2048 tokens.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and you expect the model to work on longer `max_position_embeddings`, we recommend you update this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
bos_token_id (`int`, *optional*, defaults to 11):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 11):
The id of the "end-of-sequence" token.
ffn_hidden_size (`int`, *optional*):
The hidden size of the feedforward layer in the Transformer decoder. If unset, defaults to 4 times
`hidden_size`.
activation (`str`, *optional*, defaults to `"gelu"`):
The activation function used in the feedforward layer.
Example:
```python
>>> from transformers import FalconModel, FalconConfig
>>> # Initializing a small (2-layer) Falcon configuration
>>> configuration = FalconConfig(num_hidden_layers=2)
>>> # Initializing a model from the small configuration
>>> model = FalconModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
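The `rope_scaling` dictionary is passed the same way; a hedged sketch of a long-context setup (recent releases expect the `rope_type` key, while older ones used `type`):
```python
>>> # Hypothetical long-context configuration using dynamic RoPE scaling
>>> long_config = FalconConfig(
...     rope_scaling={"rope_type": "dynamic", "factor": 2.0},
...     max_position_embeddings=4096,
... )
```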
Methods: all
## FalconModel
The bare Falcon Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FalconConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## FalconForCausalLM
The Falcon Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FalconConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## FalconForSequenceClassification
The Falcon Model transformer with a sequence classification head on top (linear layer).
[`FalconForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
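Since Falcon tokenizers define no padding token by default, a common sketch (the checkpoint and label count are illustrative, and the classification head is freshly initialized) is to reuse the EOS token as `pad_token_id` before batched classification:
```python
import torch
from transformers import AutoTokenizer, FalconForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = FalconForSequenceClassification.from_pretrained("tiiuae/falcon-7b", num_labels=2)

# Reuse EOS as the pad token so the model can locate the last non-padding token
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.eos_token_id

inputs = tokenizer(["great film", "terrible film"], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
```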
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FalconConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## FalconForTokenClassification
Falcon Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FalconConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## FalconForQuestionAnswering
The Falcon Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`FalconConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
# BARThez

## Overview
The BARThez model was proposed in [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, and Michalis Vazirgiannis on 23 Oct 2020.
The abstract of the paper:
*Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing
(NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language
understanding tasks. While there are some notable exceptions, most of the available models and research have been
conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language
(to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research
that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as
CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also
its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel
summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already
pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez,
provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.*
This model was contributed by [moussakam](https://huggingface.co/moussakam). The Authors' code can be found [here](https://github.com/moussaKam/BARThez).
<Tip>
The BARThez implementation is the same as BART, except for tokenization. Refer to the [BART documentation](bart) for information on
configuration classes and their parameters. BARThez-specific tokenizers are documented below.
</Tip>
## Resources
- BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way to BART; see
  [examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md).
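A minimal sketch of what that looks like at inference time, assuming the author's fine-tuned summarization checkpoint `moussaKam/barthez-orangesum-abstract` from the Hub (the input sentence is illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "moussaKam/barthez-orangesum-abstract"  # assumed fine-tuned BARThez checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

article = "Citant ses préoccupations, la banque a annoncé une réduction de ses effectifs ..."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```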
## BarthezTokenizer
Adapted from [`CamembertTokenizer`] and [`BartTokenizer`]. Construct a BARThez tokenizer. Based on
[SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.