source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```
Now we process the audio data and pass it to the model to classify it into a language, just as we usually do for Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition).
```py
# English
|
170_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
```py
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
|
170_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
with torch.no_grad():
    outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'eng'
# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs).logits
|
170_8_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
with torch.no_grad():
    outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'ara'
```
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
```py
model.config.id2label.values()
```
|
170_8_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#audio-pretrained-models
|
.md
|
Pretrained models are available in two different sizes: [300M](https://huggingface.co/facebook/mms-300m) and
[1Bil](https://huggingface.co/facebook/mms-1b).
<Tip>
The MMS for ASR architecture is based on the Wav2Vec2 model; refer to [Wav2Vec2's documentation page](wav2vec2) for further
details on how to finetune the model for various downstream tasks.
MMS-TTS uses the same model architecture as VITS; refer to [VITS's documentation page](vits) for the API reference.
</Tip>
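As an illustration, here is a minimal sketch of loading the 300M pretrained checkpoint as a plain Wav2Vec2 backbone before adding a task-specific head (this assumes the checkpoint loads with the standard Wav2Vec2 classes):
```py
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# pretrained backbone only; attach a downstream head (e.g. CTC) before finetuning
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16_000)
model = Wav2Vec2Model.from_pretrained("facebook/mms-300m")
```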
|
170_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
171_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
171_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/#bort
|
.md
|
<Tip warning={true}>
This model is in maintenance mode only, so we do not accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.
</Tip>
|
171_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/#overview
|
.md
|
The BORT model was proposed in [Optimal Subarchitecture Extraction for BERT](https://arxiv.org/abs/2010.10499) by
Adrian de Wynter and Daniel J. Perry. It is an optimal subset of architectural parameters for BERT, which the
authors refer to as "Bort".
The abstract from the paper is the following:
*We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by
|
171_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/#overview
|
.md
|
*We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by
applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as
"Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the
original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which
|
171_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/#overview
|
.md
|
original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which
is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large
(Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same
hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the
|
171_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/#overview
|
.md
|
hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the
architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%,
absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.*
This model was contributed by [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/alexa/bort/).
|
171_2_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/#usage-tips
|
.md
|
- BORT's model architecture is based on BERT; refer to [BERT's documentation page](bert) for the
model's API reference as well as usage examples.
- BORT uses the RoBERTa tokenizer instead of the BERT tokenizer; refer to [RoBERTa's documentation page](roberta) for the tokenizer's API reference as well as usage examples.
- BORT requires a specific fine-tuning algorithm, called [Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html#fine-tuning-with-algebraic-topology),
|
171_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/bort.md
|
https://huggingface.co/docs/transformers/en/model_doc/bort/#usage-tips
|
.md
|
that is sadly not open-sourced yet. It would be very useful for the community if someone implemented the
algorithm to make BORT fine-tuning work.
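For reference, a minimal sketch of the loading pattern described above (BERT model class paired with the RoBERTa tokenizer), run under `transformers==4.30.0`; the checkpoint name used here is an assumption and should be adjusted to the BORT weights you actually use:
```python
from transformers import BertModel, RobertaTokenizer

# hypothetical checkpoint name, shown only to illustrate the class pairing
tokenizer = RobertaTokenizer.from_pretrained("amazon/bort")
model = BertModel.from_pretrained("amazon/bort")

inputs = tokenizer("BORT pairs BERT's architecture with RoBERTa's tokenizer.", return_tensors="pt")
outputs = model(**inputs)
```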
|
171_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
172_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
172_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
The Emu3 model was proposed in [Emu3: Next-Token Prediction is All You Need](https://arxiv.org/abs/2409.18869) by Xinlong Wang, Xiaosong Zhang, Zhengxiong Luo, Quan Sun, Yufeng Cui, Jinsheng Wang, Fan Zhang, Yueze Wang, Zhen Li, Qiying Yu, Yingli Zhao, Yulong Ao, Xuebin Min, Tao Li, Boya Wu, Bo Zhao, Bowen Zhang, Liangdong Wang, Guang Liu, Zheqi He, Xi Yang, Jingjing Liu, Yonghua Lin, Tiejun Huang, Zhongyuan Wang.
|
172_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
Emu3 is a multimodal LLM that uses vector quantization to tokenize images into discrete tokens. Discretized image tokens are later fused with text token ids for image and text generation. The model can additionally generate images by predicting image token ids.
The abstract from the paper is the following:
|
172_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
*While next-token prediction is considered a promising path towards artificial general intelligence, it has struggled to excel in multimodal tasks, which are still dominated by diffusion models (e.g., Stable Diffusion) and compositional approaches (e.g., CLIP combined with LLMs). In this paper, we introduce Emu3, a new suite of state-of-the-art multimodal models trained solely with next-token prediction. By tokenizing images, text, and videos into a discrete space, we train a single transformer from
|
172_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
with next-token prediction. By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences. Emu3 outperforms several well-established task-specific models in both generation and perception tasks, surpassing flagship models such as SDXL and LLaVA-1.6, while eliminating the need for diffusion or compositional architectures. Emu3 is also capable of generating high-fidelity video via predicting the next token in a video sequence. We
|
172_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
architectures. Emu3 is also capable of generating high-fidelity video via predicting the next token in a video sequence. We simplify complex multimodal model designs by converging on a singular focus: tokens, unlocking great potential for scaling both during training and inference. Our results demonstrate that next-token prediction is a promising path towards building general multimodal intelligence beyond language. We open-source key techniques and models to support further research in this direction.*
|
172_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
Tips:
- We advise users to set `processor.tokenizer.padding_side = "left"` before batched generation as it leads to more accurate results.
- Note that the model has been trained with a specific prompt format for chatting. Use `processor.apply_chat_template(my_conversation_dict)` to correctly format your prompts.
|
172_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
- Emu3 has two different checkpoints for image generation and text generation; make sure to use the correct checkpoint when loading the model. To generate an image, it is advised to use `prefix_allowed_tokens_fn` so that the generated tokens are sampled only from possible image tokens. See more below for usage examples.
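As an illustration of the first two tips (left padding plus chat templating), here is a minimal sketch; the exact conversation format shown is an assumption:
```python
from transformers import Emu3Processor

processor = Emu3Processor.from_pretrained("BAAI/Emu3-Chat-hf")
processor.tokenizer.padding_side = "left"  # recommended before batched generation

conversation = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe the image."}]},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
```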
> [!TIP]
|
172_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#overview
|
.md
|
> [!TIP]
> The Emu3 implementation in Transformers uses a special image token to indicate where to merge image embeddings. The special image token isn't new and uses one of the reserved tokens: `<|extra_0|>`. You have to add `<image>` to your prompt in the place where the image should be embedded for correct generation.
This model was contributed by [RaushanTurganbay](https://huggingface.co/RaushanTurganbay).
The original code can be found [here](https://github.com/baaivision/Emu3).
|
172_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#text-generation-inference
|
.md
|
Here's how to load the model and perform inference in half-precision (`torch.bfloat16`) to generate textual output from text or text and image inputs:
```python
from transformers import Emu3Processor, Emu3ForConditionalGeneration
import torch
from PIL import Image
import requests
processor = Emu3Processor.from_pretrained("BAAI/Emu3-Chat-hf")
model = Emu3ForConditionalGeneration.from_pretrained("BAAI/Emu3-Chat-hf", torch_dtype=torch.bfloat16, device_map="cuda")
|
172_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#text-generation-inference
|
.md
|
# prepare image and text prompt
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
prompt = "What do you see in this image?<image>"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```
|
172_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
Emu3 can also generate images from textual input. Here is how you can do it:
```python
processor = Emu3Processor.from_pretrained("BAAI/Emu3-Gen-hf")
model = Emu3ForConditionalGeneration.from_pretrained("BAAI/Emu3-Gen-hf", torch_dtype="bfloat16", device_map="auto", attn_implementation="flash_attention_2")
|
172_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
inputs = processor(
    text=["a portrait of young girl. masterpiece, film grained, best quality.", "a dog running under the rain"],
    padding=True,
    return_tensors="pt",
    return_for_image_generation=True,
)
inputs = inputs.to(device="cuda:0", dtype=torch.bfloat16)
|
172_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
neg_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry."
neg_inputs = processor(text=[neg_prompt] * 2, return_tensors="pt").to(device="cuda:0")
image_sizes = inputs.pop("image_sizes")
HEIGHT, WIDTH = image_sizes[0]
VISUAL_TOKENS = model.vocabulary_mapping.image_tokens
|
172_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
def prefix_allowed_tokens_fn(batch_id, input_ids):
    height, width = HEIGHT, WIDTH
    visual_tokens = VISUAL_TOKENS
    image_wrapper_token_id = torch.tensor([processor.tokenizer.image_wrapper_token_id], device=model.device)
    eoi_token_id = torch.tensor([processor.tokenizer.eoi_token_id], device=model.device)
    eos_token_id = torch.tensor([processor.tokenizer.eos_token_id], device=model.device)
    pad_token_id = torch.tensor([processor.tokenizer.pad_token_id], device=model.device)
|
172_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
    pad_token_id = torch.tensor([processor.tokenizer.pad_token_id], device=model.device)
    eof_token_id = torch.tensor([processor.tokenizer.eof_token_id], device=model.device)
    eol_token_id = processor.tokenizer.encode("<|extra_200|>", return_tensors="pt")[0]
|
172_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
    position = torch.nonzero(input_ids == image_wrapper_token_id, as_tuple=True)[0][0]
    offset = input_ids.shape[0] - position
    if offset % (width + 1) == 0:
        return (eol_token_id, )
    elif offset == (width + 1) * height + 1:
        return (eof_token_id, )
    elif offset == (width + 1) * height + 2:
        return (eoi_token_id, )
    elif offset == (width + 1) * height + 3:
        return (eos_token_id, )
    elif offset > (width + 1) * height + 3:
        return (pad_token_id, )
    else:
        return visual_tokens
|
172_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
out = model.generate(
    **inputs,
    max_new_tokens=50_000,  # make sure to have enough tokens for one image
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
    return_dict_in_generate=True,
    negative_prompt_ids=neg_inputs.input_ids,  # indicate for Classifier-Free Guidance
    negative_prompt_attention_mask=neg_inputs.attention_mask,
)
|
172_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#image-generation-inference
|
.md
|
image = model.decode_image_tokens(out.sequences[:, inputs.input_ids.shape[1]: ], height=HEIGHT, width=WIDTH)
images = processor.postprocess(list(image.float()), return_tensors="PIL.Image.Image") # internally we convert to np but it's not supported in bf16 precision
for i, image in enumerate(images['pixel_values']):
    image.save(f"result{i}.png")
```
|
172_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3config
|
.md
|
This is the configuration class to store the configuration of a [`Emu3Model`]. It is used to instantiate an
Emu3 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
[Emu3-community/Emu3-Chat-hf](https://huggingface.co/Emu3-community/Emu3-Chat-hf).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
172_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3config
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vq_config (`Union[Dict, Emu3VQVAEConfig]`, *optional*):
Emu3VQVAEConfig instance containing the configuration for the VQ-VAE model.
text_config (`Union[Dict, Emu3TextConfig]`, *optional*):
Emu3TextConfig instance containing the configuration for the language model.
vocabulary_map (`dict`, *optional*):
|
172_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3config
|
.md
|
Emu3TextConfig instance containing the configuration for the language model.
vocabulary_map (`dict`, *optional*):
A dictionary containing the vocabulary map from the tokenizer. Used to obtain tokens from the image inputs.
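A minimal sketch of building the composite configuration from explicit sub-configs, assuming `Emu3VQVAEConfig` and `Emu3TextConfig` are importable from `transformers` as named on this page:
```python
>>> from transformers import Emu3Config, Emu3TextConfig, Emu3VQVAEConfig

>>> # compose the config from its two sub-configs (defaults otherwise)
>>> configuration = Emu3Config(vq_config=Emu3VQVAEConfig(), text_config=Emu3TextConfig())
```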
|
172_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvaeconfig
|
.md
|
This is the configuration class to store the configuration of a [`Emu3VQVAE`]. It is used to instantiate a VQ-VAE
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a configuration similar to that of the VQ model presented in the Emu3 paper.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
|
172_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvaeconfig
|
.md
|
documentation from [`PretrainedConfig`] for more information.
Args:
codebook_size (`int`, *optional*, defaults to 32768):
Codebook size of the VQ model.
embed_dim (`int`, *optional*, defaults to 4):
Dimension of the quantized vector in codebook.
latent_channels (`int`, *optional*, defaults to 4):
Dimension of the output channel of encoder and the input channel of decoder
double_latent (`bool`, *optional*, defaults to `False`):
Whether to double the output dim of the encoder.
|
172_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvaeconfig
|
.md
|
double_latent (`bool`, *optional*, defaults to `False`):
Whether to double the output dim of the encoder.
in_channels (`int`, *optional*, defaults to 3):
Input channel of encoder.
out_channels (`int`, *optional*, defaults to 3):
Output channel of decoder.
temporal_downsample_factor (`int`, *optional*, defaults to 4):
Temporal downsample factor.
base_channels (`int`, *optional*, defaults to 256):
Basic channel number of the intermediate blocks.
|
172_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvaeconfig
|
.md
|
base_channels (`int`, *optional*, defaults to 256):
Basic channel number of the intermediate blocks.
channel_multiplier (`List[int]`, *optional*, defaults to `[1, 2, 2, 4]`):
Channel scaling factor of the intermediate blocks.
num_res_blocks (`int`, *optional*, defaults to 2):
Residual block number in each stage.
attn_resolutions (`List[int]`, *optional*, defaults to `[3]`):
Stage indices to apply attention.
hidden_size (`int`, *optional*, defaults to 1024):
|
172_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvaeconfig
|
.md
|
Stage indices to apply attention.
hidden_size (`int`, *optional*, defaults to 1024):
Dimension of the hidden representations in the attention layer.
num_attention_heads (`int`, *optional*, defaults to 1):
Number of attention heads for each attention layer.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
```python
>>> from transformers import Emu3VQVAE, Emu3VQVAEConfig
|
172_5_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvaeconfig
|
.md
|
>>> # Initializing a video VQ model of Emu3 configuration
>>> configuration = Emu3VQVAEConfig()
>>> # Initializing a model from the Emu3 VQ model style configuration
>>> model = Emu3VQVAE(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
172_5_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
This is the configuration class to store the configuration of a [`Emu3TextModel`]. It is used to instantiate an
Emu3 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
[Emu3-community/Emu3-Chat-hf](https://huggingface.co/Emu3-community/Emu3-Chat-hf).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
172_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 184622):
Vocabulary size of the Emu3 model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`Emu3Model`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
|
172_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 14336):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*, defaults to 8):
|
172_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
num_key_value_heads (`int`, *optional*, defaults to 8):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
|
172_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details checkout [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 9216):
|
172_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 9216):
The maximum sequence length that this model might ever be used with. Emu3 supports up to 9216 tokens.
rms_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
|
172_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*, defaults to 151643):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 151849):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 151850):
End of stream token id.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 1000000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
|
172_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply new rope type
and you expect the model to work on longer `max_position_embeddings`, we recommend you to update this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
|
172_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
|
172_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
|
172_6_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
|
172_6_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
|
172_6_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
mlp_bias (`bool`, *optional*, defaults to `False`):
|
172_6_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
mlp_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers.
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
|
172_6_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
```python
>>> from transformers import Emu3Model, Emu3Config
|
172_6_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textconfig
|
.md
|
>>> # Initializing a Emu3-community/Emu3-Chat-hf style configuration
>>> configuration = Emu3Config()
>>> # Initializing a model from the Emu3-community/Emu3-Chat-hf style configuration
>>> model = Emu3Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
172_6_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3processor
|
.md
|
Constructs an Emu3 processor which wraps an Emu3 image processor and a GPT2 tokenizer into a single
processor.
[`Emu3Processor`] offers all the functionalities of [`Emu3ImageProcessor`] and [`GPT2TokenizerFast`].
See the [`~Emu3Processor.__call__`] and [`~Emu3Processor.decode`] for more information.
Args:
image_processor ([`Emu3ImageProcessor`]):
The image processor is a required input.
tokenizer ([`Emu3TokenizerFast`]):
The tokenizer is a required input.
|
172_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3processor
|
.md
|
The image processor is a required input.
tokenizer ([`Emu3TokenizerFast`]):
The tokenizer is a required input.
chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
in a chat into a tokenizable string.
|
172_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3imageprocessor
|
.md
|
Constructs an Emu3 image processor that dynamically resizes images based on the original images.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions.
resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
Resampling filter to use when resizing the image.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`.
|
172_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3imageprocessor
|
.md
|
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image by the specified scale `rescale_factor`.
rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
Scale factor to use if rescaling the image.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
|
172_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3imageprocessor
|
.md
|
image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
Mean to use if normalizing the image. This is a float or list of floats for each channel in the image.
image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
Standard deviation to use if normalizing the image. This is a float or list of floats for each channel in the image.
do_convert_rgb (`bool`, *optional*, defaults to `True`):
|
172_8_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3imageprocessor
|
.md
|
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB.
do_pad (`bool`, *optional*, defaults to `True`):
Whether to pad the image. If `True`, will pad the patch dimension of the images in the batch to the largest
number of patches in the batch. Padding will be applied to the bottom and right with zeros.
min_pixels (`int`, *optional*, defaults to `512 * 512`):
The minimum number of pixels allowed when resizing the image.
max_pixels (`int`, *optional*, defaults to `1024 * 1024`):
|
172_8_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3imageprocessor
|
.md
|
The minimum number of pixels allowed when resizing the image.
max_pixels (`int`, *optional*, defaults to `1024 * 1024`):
The maximum number of pixels allowed when resizing the image.
spatial_factor (`int`, *optional*, defaults to 8):
The spatial downsample factor by which the image is downsampled during the feature extraction phase.
Methods: preprocess
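A minimal sketch of running the preprocessing described above via the image processor's `__call__`, assuming the class can be loaded with `AutoImageProcessor` from the chat checkpoint used earlier on this page; the exact set of output keys is an assumption:
```python
from PIL import Image
import requests
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("BAAI/Emu3-Chat-hf")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# resizes, rescales, normalizes and pads as configured by the arguments above
outputs = image_processor(images=image, return_tensors="pt")
print(outputs.keys())
```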
|
172_8_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvae
|
.md
|
The VQ-VAE model used in Emu3 for encoding/decoding images into discrete tokens.
This model follows the "Make-a-scene: Scene-based text-to-image generation with human priors" paper from
[Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman](https://arxiv.org/abs/2203.13131).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
172_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvae
|
.md
|
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
|
172_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3vqvae
|
.md
|
and behavior.
Parameters:
config ([`Emu3VQVAEConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
|
172_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textmodel
|
.md
|
The bare Emu3Text Model outputting raw hidden-states without any specific head on top.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
172_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textmodel
|
.md
|
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`Emu3Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
|
172_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3textmodel
|
.md
|
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Emu3TextDecoderLayer`]
Args:
config: Emu3TextConfig
Methods: forward
|
172_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3forcausallm
|
.md
|
No docstring available for Emu3ForCausalLM
Methods: forward
|
172_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/emu3.md
|
https://huggingface.co/docs/transformers/en/model_doc/emu3/#emu3forconditionalgeneration
|
.md
|
No docstring available for Emu3ForConditionalGeneration
Methods: forward
|
172_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/
|
.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
173_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
173_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#overview
|
.md
|
The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It's a
bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the
SqueezeBERT architecture is that SqueezeBERT uses [grouped convolutions](https://blog.yani.io/filter-group-tutorial)
|
173_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#overview
|
.md
|
SqueezeBERT architecture is that SqueezeBERT uses [grouped convolutions](https://blog.yani.io/filter-group-tutorial)
instead of fully-connected layers for the Q, K, V and FFN layers.
The abstract from the paper is the following:
*Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets,
large computing systems, and better neural network models, natural language processing (NLP) technology has made
|
173_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#overview
|
.md
|
large computing systems, and better neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant
opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's
|
173_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#overview
|
.md
|
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's
highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with
BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
|
173_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#overview
|
.md
|
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in
self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called
SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test
set. The SqueezeBERT code will be released.*
|
173_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#overview
|
.md
|
set. The SqueezeBERT code will be released.*
This model was contributed by [forresti](https://huggingface.co/forresti).
|
173_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#usage-tips
|
.md
|
- SqueezeBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
rather than the left.
- SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
|
173_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#usage-tips
|
.md
|
with a causal language modeling (CLM) objective are better in that regard.
- For best results when finetuning on sequence classification tasks, it is recommended to start with the
*squeezebert/squeezebert-mnli-headless* checkpoint.
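A minimal sketch of starting a sequence-classification finetune from that checkpoint; the freshly initialized classification head is expected, since the checkpoint is headless, and `num_labels=2` is an assumption for the downstream task:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-mnli-headless")
model = AutoModelForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-mnli-headless", num_labels=2
)

# tokenize a toy batch and run a forward pass
inputs = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
outputs = model(**inputs)
```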
|
173_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#resources
|
.md
|
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
|
173_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
This is the configuration class to store the configuration of a [`SqueezeBertModel`]. It is used to instantiate a
SqueezeBERT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the SqueezeBERT
[squeezebert/squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) architecture.
|
173_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
[squeezebert/squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the SqueezeBERT model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`SqueezeBertModel`].
|
173_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
the `inputs_ids` passed when calling [`SqueezeBertModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
|
173_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
|
173_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
|
173_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`BertModel`] or [`TFBertModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
173_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 0):
The ID of the token in the word embedding to use as padding.
embedding_size (`int`, *optional*, defaults to 768):
The dimension of the word embedding vectors.
q_groups (`int`, *optional*, defaults to 4):
The number of groups in Q layer.
k_groups (`int`, *optional*, defaults to 4):
|
173_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
q_groups (`int`, *optional*, defaults to 4):
The number of groups in Q layer.
k_groups (`int`, *optional*, defaults to 4):
The number of groups in K layer.
v_groups (`int`, *optional*, defaults to 4):
The number of groups in V layer.
post_attention_groups (`int`, *optional*, defaults to 1):
The number of groups in the first feed forward network layer.
intermediate_groups (`int`, *optional*, defaults to 4):
The number of groups in the second feed forward network layer.
|
173_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
intermediate_groups (`int`, *optional*, defaults to 4):
The number of groups in the second feed forward network layer.
output_groups (`int`, *optional*, defaults to 4):
The number of groups in the third feed forward network layer.
Examples:
```python
>>> from transformers import SqueezeBertConfig, SqueezeBertModel
|
173_4_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezebertconfig
|
.md
|
>>> # Initializing a SqueezeBERT configuration
>>> configuration = SqueezeBertConfig()
>>> # Initializing a model (with random weights) from the configuration above
>>> model = SqueezeBertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
173_4_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizer
|
.md
|
Construct a SqueezeBERT tokenizer. Based on WordPiece.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
do_lower_case (`bool`, *optional*, defaults to `True`):
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
|
173_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizer
|
.md
|
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (`bool`, *optional*, defaults to `True`):
Whether or not to do basic tokenization before WordPiece.
never_split (`Iterable`, *optional*):
Collection of tokens which will never be split during tokenization. Only has an effect when
`do_basic_tokenize=True`
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
|
173_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizer
|
.md
|
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (`str`, *optional*, defaults to `"[SEP]"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
|
173_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizer
|
.md
|
token of a sequence built with special tokens.
pad_token (`str`, *optional*, defaults to `"[PAD]"`):
The token used for padding, for example when batching sequences of different lengths.
cls_token (`str`, *optional*, defaults to `"[CLS]"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
|
173_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/squeezebert.md
|
https://huggingface.co/docs/transformers/en/model_doc/squeezebert/#squeezeberttokenizer
|
.md
|
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (`str`, *optional*, defaults to `"[MASK]"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
|
173_5_4
|