For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11

Feel free to request other models for compression as well (whether for the diffusers library, ComfyUI, or anything else), although models with architectures I am unfamiliar with may be harder to compress.

Important: These weights are only compatible with transformers>=4.52. Ensure your transformers library is sufficiently up to date, because v4.51 and earlier use a different internal module layout for this model, so these weight names will not match. For older versions of transformers, use the weights in this branch instead.
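If in doubt, a quick check like this before loading the model will catch an outdated install (a minimal sketch; packaging is already a dependency of transformers, so no extra install is needed):

import transformers
from packaging import version

# These DF11 weights assume the module layout introduced in transformers v4.52
if version.parse(transformers.__version__) < version.parse("4.52.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for these weights; "
        "upgrade with `pip install -U 'transformers>=4.52'` or use the weights in the older branch."
    )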

How to Use

transformers (as a standalone multimodal LLM)

import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
from dfloat11 import DFloat11Model

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", device_map="cpu", torch_dtype=torch.bfloat16
)  # Load the BF16 weights onto the CPU first; loading them directly onto the GPU appears to cause problems when replacing them with the DF11 weights later

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2.5-VL-7B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
# )

DFloat11Model.from_pretrained("mingyi456/Qwen2.5-VL-7B-Instruct-DF11", device="cpu", bfloat16_model=model)

model.to("cuda")

# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
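
To confirm the memory savings, you can also print the peak VRAM usage after generation, mirroring what the diffusers example below does:

max_memory = torch.cuda.max_memory_allocated()
print(f"Max memory: {max_memory / (1000 ** 3):.2f} GB")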

diffusers (as the text encoder for Qwen-Image based models)

from diffusers import DiffusionPipeline, QwenImageTransformer2DModel, GGUFQuantizationConfig
import torch
from dfloat11 import DFloat11Model


# The DF11 weights here cover only the text encoder, so the transformer is loaded separately (in this example from a GGUF-quantized checkpoint)
transformer_path_or_link = "https://huggingface.co/city96/Qwen-Image-gguf/blob/main/qwen-image-Q6_K.gguf"

transformer = QwenImageTransformer2DModel.from_single_file(
    transformer_path_or_link,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    config="Qwen/Qwen-Image",
    subfolder="transformer"
)

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Replace the pipeline's BF16 text encoder weights (Qwen2.5-VL-7B) with the DF11 weights
DFloat11Model.from_pretrained(
    "mingyi456/Qwen2.5-VL-7B-Instruct-DF11",
    device="cpu",
    bfloat16_model=pipe.text_encoder,
)

pipe.enable_model_cpu_offload()

positive_magic = {
    "en": "Ultra HD, 4K, cinematic composition.",  # for English prompts
    "zh": "超清,4K,电影级构图",  # for Chinese prompts
}

prompt = 'A coffee shop entrance features a chalkboard sign reading "Qwen Coffee 😊 $2 per cup," with a neon light beside it displaying "通义千问". Next to it hangs a poster showing a beautiful Chinese woman, and beneath the poster is written "π≈3.1415926-53589793-23846264-33832795-02384197".'

negative_prompt = ""

width, height = 1328, 1328
image = pipe(
    prompt=prompt + positive_magic["en"],
    negative_prompt=negative_prompt,
    width=width,
    height=height,
    num_inference_steps=50,
    true_cfg_scale=4,
    generator=torch.Generator(device="cpu").manual_seed(42)
).images[0]
image.save("example.png")
max_memory = torch.cuda.max_memory_allocated()
print(f"Max memory: {max_memory / (1000 ** 3):.2f} GB")

ComfyUI

Currently, this is not supported. I am unsure if I can get it to work, but I will get around to testing it sometime in the future.

Compression Details

This is the pattern_dict for compression:

pattern_dict={
    "lm_head": [],
    "model\\.language_model\\.embed_tokens": [],
    "model\\.language_model\\.layers\\.\\d+": [
        "self_attn.q_proj",
        "self_attn.k_proj",
        "self_attn.v_proj",
        "self_attn.o_proj",
        "mlp.gate_proj",
        "mlp.up_proj",
        "mlp.down_proj"
    ],
    "model\\.visual\\.blocks\\.\\d+": [
        "attn.qkv",
        "attn.proj",
        "mlp.gate_proj",
        "mlp.up_proj",
        "mlp.down_proj"
    ],
    "model\\.visual\\.merger\\.mlp": [
        "0",
        "2"
    ]
}
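
Only the listed linear submodules inside each matched module group are compressed to DFloat11; everything else (norms, biases, etc.) stays in BF16. To get an intuition for which module names the patterns cover, here is a stdlib-only sketch that reuses the pattern_dict above against a few example Qwen2.5-VL module names. It is not DFloat11's actual matching code, and it assumes an empty child list means the matched module itself is compressed:

import re

# A few example module names following the Qwen2.5-VL layout (illustrative sample only)
module_names = [
    "lm_head",
    "model.language_model.embed_tokens",
    "model.language_model.layers.0.self_attn.q_proj",
    "model.language_model.layers.27.mlp.down_proj",
    "model.visual.blocks.3.attn.qkv",
    "model.visual.merger.mlp.0",
    "model.language_model.layers.0.input_layernorm",  # not listed above -> stays BF16
]

for name in module_names:
    compressed = False
    for parent, children in pattern_dict.items():
        if not children:
            # Empty list: the matched module itself is compressed (assumption)
            compressed |= re.fullmatch(parent, name) is not None
        else:
            # Otherwise: only the named children under each matched parent are compressed
            compressed |= any(
                re.fullmatch(parent + r"\." + re.escape(child), name)
                for child in children
            )
    print(f"{name}: {'compressed (DF11)' if compressed else 'kept in BF16'}")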