---
license: other
license_name: tencent-hunyuan-community
license_link: LICENSE
pipeline_tag: text-to-image
library_name: transformers
language:
- zh
- en
tasks:
- text-to-image-synthesis
frameworks: PyTorch
base_model:
- Tencent-Hunyuan/HunyuanImage-3.0
base_model_relation: quantized
---
====================================================================================================

This model is a qint4-quantized version of https://modelscope.cn/models/Tencent-Hunyuan/HunyuanImage-3.0, quantized with https://github.com/huggingface/optimum-quanto; the weight files are saved using an unofficial method.

This quantized model has so far been tested on a single H20 96GB GPU. Loading uses unofficial code; see load_quantized_model.py, which currently contains two loading methods for reference. Discussion and joint study are very welcome. Thank you!

Method 1: initial loading needs roughly 160 GB of CPU RAM, with an initial GPU footprint of 50 GB; once inference starts, CPU usage drops to about 70 GB and GPU usage is about 55-60 GB. Warnings about model state-dict keys appear during loading, but they do not affect use.

Method 2: initial loading needs about 75 GB of CPU RAM, with an initial GPU footprint of 50 GB; during inference, CPU usage stays at 75 GB and GPU usage is about 55-60 GB. Because a key map is supplied, loading produces no warnings at all.

Both methods take roughly the same time per image: about 12 minutes on an H20 (at 9:16 or 16:9 aspect ratios).
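For intuition about what the qint4 format stores, here is a minimal, self-contained sketch of symmetric per-group 4-bit weight quantization. The function names are hypothetical illustrations; the actual checkpoint was produced with optimum-quanto's qint4 kernels, not this code.

```python
import torch

def quantize_qint4(w: torch.Tensor, group_size: int = 32):
    """Symmetric per-group 4-bit quantization sketch: each group of
    `group_size` weights shares one fp32 scale; values are rounded to
    integers in [-8, 7] (the int4 range)."""
    flat = w.reshape(-1, group_size)
    scale = flat.abs().amax(dim=1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(flat / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize_qint4(q: torch.Tensor, scale: torch.Tensor, shape):
    """Recover an fp32 approximation of the original weights."""
    return (q.float() * scale).reshape(shape)

torch.manual_seed(0)
w = torch.randn(64, 64)
q, scale = quantize_qint4(w)
w_hat = dequantize_qint4(q, scale, w.shape)
max_err = (w - w_hat).abs().max().item()
```

Four-bit weights take roughly a quarter of the space of fp16, which is what lets an 80B-parameter checkpoint fit alongside activations on a single 96 GB card.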
<img src="./example.jpg" alt="Example Generated Image" width="800">

====================================================================================================

### HunyuanImage-3.0 is an outstanding native omnimodal Mixture-of-Experts model! The introduction below is quoted from the official model page. The model and code provided by this project are intended only for community sharing and technical research/study; please comply with the official Tencent Hunyuan License.

====================================================================================================
<div align="center">

<img src="./logo.png" alt="HunyuanImage-3.0 Logo" width="600">

# 🎨 HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation

</div>

<div align="center">
<img src="./banner.png" alt="HunyuanImage-3.0 Banner" width="800">

</div>

<div align="center">
  <a href="https://hunyuan.tencent.com/image" target="_blank"><img src="https://img.shields.io/badge/Official%20Site-333399.svg?logo=homepage" height="22px"></a>
  <a href="https://huggingface.co/tencent/HunyuanImage-3.0" target="_blank"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Models-d96902.svg" height="22px"></a>
  <a href="https://github.com/Tencent-Hunyuan/HunyuanImage-3.0" target="_blank"><img src="https://img.shields.io/badge/Page-bb8a2e.svg?logo=github" height="22px"></a>
  <a href="https://arxiv.org/pdf/2509.23951" target="_blank"><img src="https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv" height="22px"></a>
  <a href="https://x.com/TencentHunyuan" target="_blank"><img src="https://img.shields.io/badge/Hunyuan-black.svg?logo=x" height="22px"></a>
  <a href="https://docs.qq.com/doc/DUVVadmhCdG9qRXBU" target="_blank"><img src="https://img.shields.io/badge/📚-PromptHandBook-blue.svg?logo=book" height="22px"></a>
</div>

<p align="center">
  👏 Join our <a href="./assets/WECHAT.md" target="_blank">WeChat</a> and <a href="https://discord.gg/ehjWMqF5wY">Discord</a> |
  💻 <a href="https://hunyuan.tencent.com/modelSquare/home/play?modelId=289&from=/visual">Official website (官网): try our model!</a>
</p>

## 🔥🔥🔥 News
- **September 28, 2025**: 📖 **HunyuanImage-3.0 Technical Report Released** - comprehensive technical documentation is now available
- **September 28, 2025**: 🚀 **HunyuanImage-3.0 Open-Source Release** - inference code and model weights are publicly available

## 🧩 Community Contributions

If you develop or use HunyuanImage-3.0 in your projects, please let us know.

## 📑 Open-source Plan

- HunyuanImage-3.0 (Image Generation Model)
  - [x] Inference
  - [x] HunyuanImage-3.0 Checkpoints
  - [ ] HunyuanImage-3.0-Instruct Checkpoints (with reasoning)
  - [ ] vLLM Support
  - [ ] Distilled Checkpoints
  - [ ] Image-to-Image Generation
  - [ ] Multi-turn Interaction

## 🗂️ Contents
- [🔥🔥🔥 News](#-news)
- [🧩 Community Contributions](#-community-contributions)
- [📑 Open-source Plan](#-open-source-plan)
- [📖 Introduction](#-introduction)
- [✨ Key Features](#-key-features)
- [📚 Citation](#-citation)
- [🙏 Acknowledgements](#-acknowledgements)
- [🌟🚀 GitHub Star History](#-github-star-history)

---

## 📖 Introduction

**HunyuanImage-3.0** is a groundbreaking native multimodal model that unifies multimodal understanding and generation within an autoregressive framework. Our text-to-image module achieves performance **comparable to or surpassing** leading closed-source models.

<div align="center">
<img src="./framework.png" alt="HunyuanImage-3.0 Framework" width="90%">
</div>
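The autoregressive framing can be illustrated with a toy next-token loop. All names here are hypothetical stand-ins; the real model interleaves text and discrete image tokens in one transformer sequence.

```python
import torch

def sample_image_tokens(next_token_logits, prompt, n_image_tokens):
    """Toy autoregressive loop: each image token is chosen conditioned on
    the full sequence generated so far (prompt + earlier image tokens)."""
    seq = list(prompt)
    for _ in range(n_image_tokens):
        logits = next_token_logits(seq)   # (vocab_size,) for the next slot
        seq.append(int(logits.argmax()))  # greedy choice, for simplicity
    return seq[len(prompt):]

torch.manual_seed(0)
vocab_size = 32
# Stand-in for the real transformer: random logits, ignores its input.
dummy_model = lambda seq: torch.randn(vocab_size)
image_tokens = sample_image_tokens(dummy_model, [1, 2, 3], 8)
```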

## ✨ Key Features

* 🧠 **Unified Multimodal Architecture:** Moving beyond the prevalent DiT-based architectures, HunyuanImage-3.0 employs a unified autoregressive framework. This design enables a more direct and integrated modeling of text and image modalities, leading to surprisingly effective and contextually rich image generation.

* 🏆 **The Largest Image Generation MoE Model:** This is the largest open-source image generation Mixture of Experts (MoE) model to date. It features 64 experts and a total of 80 billion parameters, with 13 billion activated per token, significantly enhancing its capacity and performance.

* 🎨 **Superior Image Generation Performance:** Through rigorous dataset curation and advanced reinforcement learning post-training, we've achieved an optimal balance between semantic accuracy and visual excellence. The model demonstrates exceptional prompt adherence while delivering photorealistic imagery with stunning aesthetic quality and fine-grained details.

* 💭 **Intelligent World-Knowledge Reasoning:** The unified multimodal architecture endows HunyuanImage-3.0 with powerful reasoning capabilities. It leverages its extensive world knowledge to intelligently interpret user intent, automatically elaborating on sparse prompts with contextually appropriate details to produce superior, more complete visual outputs.
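The "activated per token" figure refers to top-k expert routing. A toy sketch of the mechanism follows; the dimensions and names are illustrative, not the real model's.

```python
import torch

def moe_forward(x, experts, gate, k=2):
    """Toy Mixture-of-Experts layer: a gate scores all experts, but each
    token runs through only its top-k experts, mixed by softmax weights."""
    scores = gate(x)                          # (n_tokens, n_experts)
    top_scores, top_idx = scores.topk(k, dim=-1)
    weights = torch.softmax(top_scores, dim=-1)
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):               # loops for clarity, not speed
        for slot in range(k):
            expert = experts[int(top_idx[t, slot])]
            out[t] += weights[t, slot] * expert(x[t])
    return out, top_idx

torch.manual_seed(0)
dim, n_experts, n_tokens = 16, 8, 4
experts = [torch.nn.Linear(dim, dim) for _ in range(n_experts)]
gate = torch.nn.Linear(dim, n_experts)
x = torch.randn(n_tokens, dim)
y, routed = moe_forward(x, experts, gate, k=2)
```

Only 2 of the 8 toy experts run per token here; scaled up, this kind of sparse routing is how an MoE model can hold 80B parameters while activating only 13B per token.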

## 📚 Citation

If you find HunyuanImage-3.0 useful in your research, please cite our work:

```bibtex
@article{cao2025hunyuanimage,
  title={HunyuanImage 3.0 Technical Report},
  author={Cao, Siyu and Chen, Hangting and Chen, Peng and Cheng, Yiji and Cui, Yutao and Deng, Xinchi and Dong, Ying and Gong, Kipper and Gu, Tianpeng and Gu, Xiusen and others},
  journal={arXiv preprint arXiv:2509.23951},
  year={2025}
}
```

## 🙏 Acknowledgements

We extend our heartfelt gratitude to the following open-source projects and communities for their invaluable contributions:

* 🤗 [Transformers](https://github.com/huggingface/transformers) - State-of-the-art NLP library
* 🎨 [Diffusers](https://github.com/huggingface/diffusers) - Diffusion models library
* 🌐 [HuggingFace](https://huggingface.co/) - AI model hub and community
* ⚡ [FlashAttention](https://github.com/Dao-AILab/flash-attention) - Memory-efficient attention
* 🚀 [FlashInfer](https://github.com/flashinfer-ai/flashinfer) - Optimized inference engine

## 🌟🚀 GitHub Star History

[![GitHub stars](https://img.shields.io/github/stars/Tencent-Hunyuan/HunyuanImage-3.0?style=social)](https://github.com/Tencent-Hunyuan/HunyuanImage-3.0)
[![GitHub forks](https://img.shields.io/github/forks/Tencent-Hunyuan/HunyuanImage-3.0?style=social)](https://github.com/Tencent-Hunyuan/HunyuanImage-3.0)

[![Star History Chart](https://api.star-history.com/svg?repos=Tencent-Hunyuan/HunyuanImage-3.0&type=Date)](https://www.star-history.com/#Tencent-Hunyuan/HunyuanImage-3.0&Date)