source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
262_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
262_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#overview
.md
The BLIP-2 model was proposed in [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. BLIP-2 leverages frozen pre-trained image encoders and large language models (LLMs) by training a lightweight, 12-layer Transformer
262_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#overview
.md
encoder in between them, achieving state-of-the-art performance on various vision-language tasks. Most notably, BLIP-2 improves upon [Flamingo](https://arxiv.org/abs/2204.14198), an 80 billion parameter model, by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. The abstract from the paper is the following:
262_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#overview
.md
*The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation
262_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#overview
.md
lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer
262_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#overview
.md
parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.*
262_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> <small> BLIP-2 architecture. Taken from the <a href="https://arxiv.org/abs/2301.12597">original paper.</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/salesforce/LAVIS/tree/5ee63d688ba4cebff63acee04adaef2dee9af207).
262_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#usage-tips
.md
- BLIP-2 can be used for conditional text generation given an image and an optional text prompt. At inference time, it's recommended to use the [`generate`] method. - One can use [`Blip2Processor`] to prepare images for the model, and decode the predicted token IDs back to text. > [!NOTE]
262_2_0
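A minimal inference sketch for the usage tip in the chunk above (added here for illustration, not part of the original docs chunk). It assumes the `Salesforce/blip2-opt-2.7b` checkpoint, a CUDA device and float16 weights; the call pattern follows the documented `Blip2Processor` / `Blip2ForConditionalGeneration.generate` API.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

# Prepare an image plus an optional text prompt
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)

# Generate with the recommended `generate` method and decode the token IDs back to text
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```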
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#usage-tips
.md
> BLIP models after release v4.46 will raise warnings about adding `processor.num_query_tokens = {{num_query_tokens}}` and expanding the model embeddings layer to add the special `<image>` token. It is strongly recommended to add these attributes to the processor if you own the model checkpoint, or to open a PR if you do not own it. Adding these attributes means that BLIP will add the number of query tokens required per image and expand the text with as many `<image>` placeholders as there will be query tokens.
262_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#usage-tips
.md
of query tokens required per image and expand the text with as many `<image>` placeholders as there will be query tokens. Usually it is around 500 tokens per image, so make sure that the text is not truncated, as otherwise there will be a failure when merging the embeddings.
262_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#usage-tips
.md
The attributes can be obtained from the model config, e.g. `model.config.num_query_tokens`, and the model embeddings expansion can be done by following [this link](https://gist.github.com/zucchini-nlp/e9f20b054fa322f84ac9311d9ab67042).
262_2_3
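A hedged sketch of the attribute update described in the note above, loosely following the linked gist. The checkpoint name, the resize granularity (`pad_to_multiple_of=64`), and the assumption that the newly added `<image>` token lands at the last index are illustrative choices, not the official procedure.

```python
from transformers import Blip2ForConditionalGeneration, Blip2Processor

model_id = "Salesforce/blip2-opt-2.7b"  # example checkpoint
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(model_id)

# Copy the number of query tokens from the model config onto the processor
processor.num_query_tokens = model.config.num_query_tokens

# Register the special <image> token and expand the language model embeddings to match
processor.tokenizer.add_special_tokens({"additional_special_tokens": ["<image>"]})
model.resize_token_embeddings(len(processor.tokenizer), pad_to_multiple_of=64)
model.config.image_token_index = len(processor.tokenizer) - 1  # assumes the new token is last
```

The updated processor and model can then be saved with `save_pretrained` (or pushed to the Hub) so that loading them again picks up the new attributes.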
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLIP-2. - Demo notebooks for BLIP-2 for image captioning, visual question answering (VQA) and chat-like conversations can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BLIP-2).
262_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#resources
.md
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
262_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2config
.md
[`Blip2Config`] is the configuration class to store the configuration of a [`Blip2ForConditionalGeneration`]. It is used to instantiate a BLIP-2 model according to the specified arguments, defining the vision model, Q-Former model and language model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2 [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture.
262_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2config
.md
that of the BLIP-2 [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`Blip2VisionConfig`]. qformer_config (`dict`, *optional*):
262_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2config
.md
Dictionary of configuration options used to initialize [`Blip2VisionConfig`]. qformer_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`Blip2QFormerConfig`]. text_config (`dict`, *optional*): Dictionary of configuration options used to initialize any [`PretrainedConfig`]. num_query_tokens (`int`, *optional*, defaults to 32): The number of query tokens passed through the Transformer. image_text_hidden_size (`int`, *optional*, defaults to 256):
262_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2config
.md
The number of query tokens passed through the Transformer. image_text_hidden_size (`int`, *optional*, defaults to 256): Dimensionality of the hidden state of the image-text fusion layer. image_token_index (`int`, *optional*): Token index of the special image token. kwargs (*optional*): Dictionary of keyword arguments. Example: ```python >>> from transformers import ( ... Blip2VisionConfig, ... Blip2QFormerConfig, ... OPTConfig, ... Blip2Config, ... Blip2ForConditionalGeneration, ... )
262_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2config
.md
>>> # Initializing a Blip2Config with Salesforce/blip2-opt-2.7b style configuration >>> configuration = Blip2Config() >>> # Initializing a Blip2ForConditionalGeneration (with random weights) from the Salesforce/blip2-opt-2.7b style configuration >>> model = Blip2ForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> # We can also initialize a Blip2Config from a Blip2VisionConfig, Blip2QFormerConfig and any PretrainedConfig
262_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2config
.md
>>> # We can also initialize a Blip2Config from a Blip2VisionConfig, Blip2QFormerConfig and any PretrainedConfig >>> # Initializing BLIP-2 vision, BLIP-2 Q-Former and language model configurations >>> vision_config = Blip2VisionConfig() >>> qformer_config = Blip2QFormerConfig() >>> text_config = OPTConfig() >>> config = Blip2Config.from_vision_qformer_text_configs(vision_config, qformer_config, text_config) ``` Methods: from_vision_qformer_text_configs
262_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionconfig
.md
This is the configuration class to store the configuration of a [`Blip2VisionModel`]. It is used to instantiate a BLIP-2 vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2 [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
262_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 1408): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 6144): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 39):
262_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionconfig
.md
num_hidden_layers (`int`, *optional*, defaults to 39): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 14): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
262_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionconfig
.md
The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
262_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries and values in the self-attention layers. Example: ```python >>> from transformers import Blip2VisionConfig, Blip2VisionModel
262_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionconfig
.md
>>> # Initializing a Blip2VisionConfig with Salesforce/blip2-opt-2.7b style configuration >>> configuration = Blip2VisionConfig() >>> # Initializing a Blip2VisionModel (with random weights) from the Salesforce/blip2-opt-2.7b style configuration >>> model = Blip2VisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
262_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
This is the configuration class to store the configuration of a [`Blip2QFormerModel`]. It is used to instantiate a BLIP-2 Querying Transformer (Q-Former) model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2 [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture. Configuration objects
262_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
[Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Note that [`Blip2QFormerModel`] is very similar to [`BertLMHeadModel`] with interleaved cross-attention. Args: vocab_size (`int`, *optional*, defaults to 30522):
262_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling the model. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
262_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
262_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities.
262_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
262_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
262_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). cross_attention_frequency (`int`, *optional*, defaults to 2):
262_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
cross_attention_frequency (`int`, *optional*, defaults to 2): The frequency of adding cross-attention to the Transformer layers. encoder_hidden_size (`int`, *optional*, defaults to 1408): The hidden size of the hidden states for cross-attention. use_qformer_text_input (`bool`, *optional*, defaults to `False`): Whether to use BERT-style embeddings. Examples: ```python >>> from transformers import Blip2QFormerConfig, Blip2QFormerModel
262_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformerconfig
.md
>>> # Initializing a BLIP-2 Salesforce/blip2-opt-2.7b style configuration >>> configuration = Blip2QFormerConfig() >>> # Initializing a model (with random weights) from the Salesforce/blip2-opt-2.7b style configuration >>> model = Blip2QFormerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
262_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2processor
.md
Constructs a BLIP-2 processor which wraps a BLIP image processor and an OPT/T5 tokenizer into a single processor. [`Blip2Processor`] offers all the functionalities of [`BlipImageProcessor`] and [`AutoTokenizer`]. See the docstring of [`~Blip2Processor.__call__`] and [`~Blip2Processor.decode`] for more information. Args: image_processor (`BlipImageProcessor`): An instance of [`BlipImageProcessor`]. The image processor is a required input. tokenizer (`AutoTokenizer`):
262_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2processor
.md
An instance of [`BlipImageProcessor`]. The image processor is a required input. tokenizer (`AutoTokenizer`): An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input. num_query_tokens (`int`, *optional*): Number of tokens used by the Q-Former as queries; should be the same as in the model's config.
262_7_1
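A short illustration of the processor on its own (added as a sketch; the checkpoint is an example): one call runs both the image processor and the tokenizer, and the same object decodes generated token IDs back to text.

```python
import requests
from PIL import Image
from transformers import Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One call handles both image preprocessing and tokenization
inputs = processor(images=image, text="a photo of", return_tensors="pt")
print(sorted(inputs.keys()))  # typically ['attention_mask', 'input_ids', 'pixel_values']

# Generated token IDs from the model go back through the same processor:
# captions = processor.batch_decode(generated_ids, skip_special_tokens=True)
```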
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionmodel
.md
No docstring available for Blip2VisionModel Methods: forward
262_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2qformermodel
.md
Querying Transformer (Q-Former), used in BLIP-2. Methods: forward
262_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2model
.md
BLIP-2 Model for generating text and image features. The model consists of a vision encoder, Querying Transformer (Q-Former) and a language model. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
262_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2model
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Blip2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
262_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2model
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - get_text_features - get_image_features - get_qformer_features
262_10_2
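A sketch of the three feature helpers listed above (`get_text_features`, `get_image_features`, `get_qformer_features`), assuming the `Salesforce/blip2-opt-2.7b` checkpoint and a CUDA device; the returned objects follow the usual transformers model-output conventions.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Blip2Model

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
text_inputs = processor.tokenizer(["a photo of two cats"], return_tensors="pt").to("cuda")

with torch.no_grad():
    image_features = model.get_image_features(**image_inputs)      # frozen vision encoder outputs
    qformer_features = model.get_qformer_features(**image_inputs)  # Q-Former query outputs
    text_features = model.get_text_features(**text_inputs)         # language model outputs
```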
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2forconditionalgeneration
.md
BLIP-2 Model for generating text given an image and an optional text prompt. The model consists of a vision encoder, Querying Transformer (Q-Former) and a language model. One can optionally pass `input_ids` to the model, which serve as a text prompt, to make the language model continue the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token. <Tip> Note that Flan-T5 checkpoints cannot be cast to float16. They are pre-trained using bfloat16.
262_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2forconditionalgeneration
.md
<Tip> Note that Flan-T5 checkpoints cannot be cast to float16. They are pre-trained using bfloat16. </Tip> This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
262_11_1
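A one-line illustration of the dtype note in the tip above; the Flan-T5-based checkpoint name is an example.

```python
import torch
from transformers import Blip2ForConditionalGeneration

# Flan-T5-based BLIP-2 checkpoints were pre-trained in bfloat16,
# so load them in bfloat16 (or float32) rather than casting to float16.
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl", torch_dtype=torch.bfloat16
)
```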
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2forconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Blip2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
262_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2forconditionalgeneration
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate
262_11_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2forimagetextretrieval
.md
BLIP-2 Model with a vision and text projector, and a classification head on top. The model is used in the context of image-text retrieval. Given an image and a text, the model returns the probability that the text is relevant to the image. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
262_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2forimagetextretrieval
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Blip2Config`]): Model configuration class with all the parameters of the model.
262_12_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2forimagetextretrieval
.md
and behavior. Parameters: config ([`Blip2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
262_12_2
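A hedged sketch of image-text matching with this head. It assumes the `Salesforce/blip2-itm-vit-g` retrieval checkpoint, a CUDA device, and the `use_image_text_matching_head` argument exposed by this class; the output field name follows the library's image-text-matching convention.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Blip2ForImageTextRetrieval

processor = AutoProcessor.from_pretrained("Salesforce/blip2-itm-vit-g")
model = Blip2ForImageTextRetrieval.from_pretrained(
    "Salesforce/blip2-itm-vit-g", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="two cats laying on a couch", return_tensors="pt").to("cuda", torch.float16)

with torch.no_grad():
    # Image-text matching head: softmax over (no match, match) logits
    itm_out = model(**inputs, use_image_text_matching_head=True)
    match_prob = torch.softmax(itm_out.logits_per_image, dim=1)[:, 1]
```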
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2textmodelwithprojection
.md
BLIP-2 Text Model with a projection layer on top (a linear layer on top of the pooled output). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
262_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2textmodelwithprojection
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Blip2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
262_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2textmodelwithprojection
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
262_13_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionmodelwithprojection
.md
BLIP-2 Vision Model with a projection layer on top (a linear layer on top of the pooled output). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
262_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionmodelwithprojection
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Blip2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
262_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/blip-2.md
https://huggingface.co/docs/transformers/en/model_doc/blip-2/#blip2visionmodelwithprojection
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
262_14_2
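A combined sketch for the two projection models above, assuming the `Salesforce/blip2-itm-vit-g` checkpoint and that the projected embeddings are exposed as `text_embeds` / `image_embeds` in the usual transformers style; retrieval then compares these embeddings, e.g. with a dot product.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Blip2TextModelWithProjection, Blip2VisionModelWithProjection

ckpt = "Salesforce/blip2-itm-vit-g"
processor = AutoProcessor.from_pretrained(ckpt)
text_model = Blip2TextModelWithProjection.from_pretrained(ckpt)
vision_model = Blip2VisionModelWithProjection.from_pretrained(ckpt)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
text_inputs = processor(text=["a photo of two cats", "a photo of a dog"], padding=True, return_tensors="pt")
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_embeds = text_model(**text_inputs).text_embeds       # projected text embeddings
    image_embeds = vision_model(**image_inputs).image_embeds  # projected query embeddings

print(text_embeds.shape, image_embeds.shape)
```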
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
263_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
263_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bert
.md
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=bert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-bert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/bert-base-uncased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
263_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#overview
.md
The BERT model was proposed in [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer pretrained using a combination of masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia. The abstract from the paper is the following:
263_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#overview
.md
The abstract from the paper is the following: *We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result,
263_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#overview
.md
representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.* *BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural
263_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#overview
.md
*BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).*
263_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#overview
.md
improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/google-research/bert).
263_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#usage-tips
.md
- BERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left. - BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. - BERT corrupts the inputs with random masking; more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by:
263_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#usage-tips
.md
* a special mask token with probability 0.8 * a random token different from the one masked with probability 0.1 * the same token with probability 0.1 - The model must predict the original sentence, but has a second objective: inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus, in the remaining 50% they are not related. The model has to predict if the sentences are consecutive or not.
263_3_1
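A small, hedged illustration of the MLM objective these tips describe, using the standard `fill-mask` pipeline with the `google-bert/bert-base-uncased` checkpoint.

```python
from transformers import pipeline

# BERT was pretrained with masked language modeling, so it can fill in [MASK] tokens
unmasker = pipeline("fill-mask", model="google-bert/bert-base-uncased")
for pred in unmasker("The capital of France is [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```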
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
263_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. ``` import torch from transformers import BertModel
263_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#using-scaled-dot-product-attention-sdpa
.md
model = BertModel.from_pretrained("bert-base-uncased", torch_dtype=torch.float16, attn_implementation="sdpa") ... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (A100-80GB, CPUx12, RAM 96.6GB, PyTorch 2.2.0, OS Ubuntu 22.04) with `float16`, we saw the following speedups during training and inference.
263_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#training
.md
|batch_size|seq_len|Time per batch (eager - s)|Time per batch (sdpa - s)|Speedup (%)|Eager peak mem (MB)|sdpa peak mem (MB)|Mem saving (%)| |----------|-------|--------------------------|-------------------------|-----------|-------------------|------------------|--------------| |4 |256 |0.023 |0.017 |35.472 |939.213 |764.834 |22.800 |
263_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#training
.md
|4 |512 |0.023 |0.018 |23.687 |1970.447 |1227.162 |60.569 | |8 |256 |0.023 |0.018 |23.491 |1594.295 |1226.114 |30.028 | |8 |512 |0.035 |0.025 |43.058 |3629.401 |2134.262 |70.054 |
263_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#training
.md
|16 |256 |0.030 |0.024 |25.583 |2874.426 |2134.262 |34.680 | |16 |512 |0.064 |0.044 |46.223 |6964.659 |3961.013 |75.830 |
263_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#inference
.md
|batch_size|seq_len|Per token latency eager (ms)|Per token latency SDPA (ms)|Speedup (%)|Mem eager (MB)|Mem BT (MB)|Mem saved (%)| |----------|-------|----------------------------|---------------------------|-----------|--------------|-----------|-------------| |1 |128 |5.736 |4.987 |15.022 |282.661 |282.924 |-0.093 |
263_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#inference
.md
|1 |256 |5.689 |4.945 |15.055 |298.686 |298.948 |-0.088 | |2 |128 |6.154 |4.982 |23.521 |314.523 |314.785 |-0.083 | |2 |256 |6.201 |4.949 |25.303 |347.546 |347.033 |0.148 |
263_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#inference
.md
|4 |128 |6.049 |4.987 |21.305 |378.895 |379.301 |-0.107 | |4 |256 |6.285 |5.364 |17.166 |443.209 |444.382 |-0.264 |
263_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/>
263_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
<PipelineTag pipeline="text-classification"/> - A blog post on [BERT Text Classification in a different language](https://www.philschmid.de/bert-text-classification-in-a-different-language). - A notebook for [Finetuning BERT (and friends) for multi-label text classification](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb).
263_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- A notebook on how to [Finetune BERT for multi-label classification using PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb). 🌎 - A notebook on how to [warm-start an EncoderDecoder model with BERT for summarization](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb).
263_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`BertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
263_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`TFBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
263_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`FlaxBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification"/>
263_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification"/> - A blog post on how to use [Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition](https://www.philschmid.de/huggingface-transformers-keras-tf).
263_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- A notebook for [Finetuning BERT for named-entity recognition](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb) using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this [version](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT.ipynb) of the
263_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
of the notebook instead.
263_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`BertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
263_7_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`TFBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
263_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course. - [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/>
263_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/> - [`BertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
263_7_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`TFBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
263_7_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`FlaxBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling)
263_7_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - [`BertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
263_7_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`TFBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
263_7_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`BertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
263_7_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [`TFBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) ⚡️ **Inference**
263_7_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- [Multiple choice task guide](../tasks/multiple_choice) ⚡️ **Inference** - A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker). - A blog post on how to [Accelerate BERT inference with DeepSpeed-Inference on GPUs](https://www.philschmid.de/bert-deepspeed-inference). ⚙️ **Pretraining**
263_7_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
⚙️ **Pretraining** - A blog post on [Pre-Training BERT with Hugging Face Transformers and Habana Gaudi](https://www.philschmid.de/pre-training-bert-habana). 🚀 **Deploy** - A blog post on how to [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx). - A blog post on how to [Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS](https://www.philschmid.de/getting-started-habana-gaudi#conclusion).
263_7_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- A blog post on [Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker-advanced). - A blog post on [Serverless BERT with HuggingFace, AWS Lambda, and Docker](https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker).
263_7_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#resources
.md
- A blog post on [Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler](https://www.philschmid.de/huggingface-amazon-sagemaker-training-compiler). - A blog post on [Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker](https://www.philschmid.de/knowledge-distillation-bert-transformers).
263_7_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
This is the configuration class to store the configuration of a [`BertModel`] or a [`TFBertModel`]. It is used to instantiate a BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BERT [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) architecture.
263_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bert.md
https://huggingface.co/docs/transformers/en/model_doc/bert/#bertconfig
.md
[google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`BertModel`] or [`TFBertModel`].
263_8_1
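A minimal configuration sketch in the same style as the other config examples in these docs: default settings yield a `google-bert/bert-base-uncased`-like architecture with randomly initialized weights.

```python
from transformers import BertConfig, BertModel

# Initializing a google-bert/bert-base-uncased style configuration
configuration = BertConfig()

# Initializing a model (with random weights) from that configuration
model = BertModel(configuration)

# Accessing the model configuration
configuration = model.config
```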